Edilitics | Data to Decisions

Run History

View run status, phase logs, DQ scores, row counts, and execution time for every Transform pipeline. Quickly diagnose pipeline failures down to the exact phase.

Run History records every execution of a Transform pipeline - scheduled runs, manually triggered runs, and failed attempts. Each entry shows what happened, when, for how long, and at which phase a failure occurred.


Two phases of a transformation

Understanding run history requires understanding that a transformation has two distinct phases:

1. Build phase (preview session): The user applies operations against a 16,600-row encrypted sample in the builder. No data is written to the destination, and no run is recorded in history. This phase ends when the user saves the pipeline.

2. Execution phase (live run): The saved pipeline runs as a job - either immediately (Once schedule) or on its configured schedule. This is what appears in run history. The full source table is extracted, operations are applied in order, and results are written to the destination.
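As a sketch, the split can be modeled like this - the function and data shapes are illustrative, not the Edilitics API: a preview caps input at the 16,600-row sample and records nothing, while a live run processes every row and appends an entry to history.

```python
PREVIEW_SAMPLE_ROWS = 16_600  # the builder works on an encrypted sample this size

def execute(pipeline, source_rows, history, preview=False):
    # Build phase (preview): cap at the sample; nothing is written or recorded.
    rows = source_rows[:PREVIEW_SAMPLE_ROWS] if preview else source_rows
    for op in pipeline["operations"]:        # apply operations in order
        rows = [op(r) for r in rows]
    if not preview:                          # only live runs enter run history
        history.append({"status": "Success", "loaded": len(rows)})
    return rows

pipeline = {"operations": [lambda r: {**r, "amount": r["amount"] * 2}]}
history = []
preview_out = execute(pipeline, [{"amount": 1}] * 20_000, history, preview=True)
live_out = execute(pipeline, [{"amount": 1}] * 20_000, history)
```

After the preview call, `history` is still empty; only the live run leaves a record.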


Run statuses

  • Scheduled: The pipeline has a future run queued based on its cron schedule.
  • Pending: The run has been triggered and is initialising before execution begins.
  • Running: The pipeline is actively executing. Live phase logs are streaming.
  • Success: All phases completed. Data written to destination.
  • Failed: An error occurred during execution. The run stopped at the phase that failed.
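These statuses imply a simple lifecycle. The transition map below is an inference from the list above, not documented platform behavior:

```python
# Assumed lifecycle: Scheduled -> Pending -> Running -> Success | Failed.
TRANSITIONS = {
    "Scheduled": {"Pending"},
    "Pending": {"Running"},
    "Running": {"Success", "Failed"},
    "Success": set(),   # terminal
    "Failed": set(),    # terminal
}

def advance(current, nxt):
    # Reject any move the lifecycle does not allow.
    if nxt not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {nxt}")
    return nxt
```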

Pipeline execution phases

Every run passes through a fixed sequence of phases. Run history shows the status and timing of each phase:

  • Provisioning: Validates billing and compute balance. Initialises the isolated execution environment.
  • Connectivity: Connects to source and destination databases. Verifies the source table exists.
  • Extraction: Pulls the full source table into memory.
  • Data Quality Audit (source): Profiles the source data on a 16,600-row sample. Records the source DQ score.
  • Casting: Maps source column types to destination-compatible types. Applied automatically when source and destination database types differ.
  • Operations: Executes each transformation operation in order (Filter, Join, Group By, Code Editor, etc.). DQ scores are updated after each operation so you can see the quality impact of every step.
  • Data Quality Audit (target): Profiles the final transformed data. Records the target DQ score and health grade.
  • Metadata & AIR: Recalculates AI Readiness (AIR) scores and updates column metadata if enabled.
  • Loading: Writes the final transformed dataset to the destination table.

When a run fails, the phase that failed is highlighted in the log. Expand that phase entry to view the error detail.
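Diagnosing a failure then amounts to scanning the phase log in order. A minimal sketch, assuming a dict-of-phases log shape (illustrative - not the real log format):

```python
PHASES = [
    "Provisioning", "Connectivity", "Extraction",
    "Data Quality Audit (source)", "Casting", "Operations",
    "Data Quality Audit (target)", "Metadata & AIR", "Loading",
]

def failed_phase(phase_log):
    """Return (phase, error) for the first failed phase, else None."""
    for phase in PHASES:
        entry = phase_log.get(phase)
        if entry and entry["status"] == "failed":
            return phase, entry.get("error")
    return None

log = {
    "Provisioning": {"status": "success"},
    "Connectivity": {"status": "success"},
    "Extraction":   {"status": "failed", "error": "source table not found"},
}
```

Phases after the failure never appear in the log, so the first `failed` entry is where the run stopped.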


What is recorded per run

Each completed run records:

  • Status: Success or Failed
  • Duration: Total wall-clock seconds from start to finish
  • Extracted: Row count and column count pulled from the source
  • Loaded: Row count and column count written to the destination
  • Source DQ score: Data quality score of the source data before transformation
  • Target DQ score: Data quality score of the transformed output
  • AIR score: The AI Readiness score calculated for the resulting table
  • Compute consumed: Seconds of compute used by this run
  • Triggered at: Timestamp when the run was initiated
  • Run type: Whether the run was scheduled, triggered manually, or a quick test
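In code, one plausible in-memory shape for such an entry - field names are illustrative, not the actual export schema:

```python
from dataclasses import dataclass

@dataclass
class RunRecord:
    status: str        # "Success" or "Failed"
    duration_s: float  # total wall-clock seconds
    extracted: tuple   # (rows, columns) pulled from the source
    loaded: tuple      # (rows, columns) written to the destination
    source_dq: float   # source data quality score
    target_dq: float   # transformed output quality score
    air_score: float   # AI Readiness score of the resulting table
    compute_s: float   # compute seconds consumed
    triggered_at: str  # ISO-8601 timestamp of initiation
    run_type: str      # "scheduled" | "manual" | "quick test"

run = RunRecord("Success", 42.7, (120_000, 14), (118_500, 16),
                87.5, 93.1, 78.0, 39.2, "2025-01-06T08:00:00Z", "scheduled")
```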

User action log

In addition to run entries, the history log records manual user actions:

  • Triggered: User manually triggered an immediate run outside the schedule
  • Paused: User paused the pipeline's recurring schedule
  • Resumed: User resumed the pipeline after a pause
  • Rescheduled: User changed the pipeline schedule (e.g., rescheduled to 0 8 * * 1-5)

These entries appear in the history timeline alongside run entries so you have a complete audit trail of both system executions and manual interventions.
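The cron expression in the Rescheduled example reads field by field as minute, hour, day of month, month, day of week. A toy decoder that only splits the fields (no validation or range expansion):

```python
FIELDS = ["minute", "hour", "day of month", "month", "day of week"]

def describe(expr):
    # Map each of the five cron fields to its raw value.
    return dict(zip(FIELDS, expr.split()))

parsed = describe("0 8 * * 1-5")
# minute "0", hour "8", any day and month, day-of-week "1-5" (Mon-Fri),
# i.e. the pipeline runs at 08:00 on weekdays
```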



Continuous Data Quality tracking

Unlike traditional ETL tools that only profile data at the start or end, Edilitics performs Continuous Auditing. Within the Operations phase of a run log, you can expand any individual step to see:

  • Row Impact: Exactly how many rows were added, removed, or modified by that specific operation.
  • Quality Drift: The DQ score of the dataset immediately after that operation was applied.

This allows you to pinpoint exactly which operation caused a drop in data quality (e.g., a Join that introduced unexpected nulls) or where a data cleaning step successfully improved the score.
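A sketch of that diagnosis, assuming per-step scores like those shown in the expanded log (the step data is invented for illustration):

```python
steps = [
    {"op": "Filter",      "dq_after": 91.0, "rows_after": 95_000},
    {"op": "Join",        "dq_after": 74.5, "rows_after": 112_000},  # introduced nulls
    {"op": "Code Editor", "dq_after": 88.0, "rows_after": 112_000},  # cleaning step
]

def worst_drift(steps, source_dq):
    """Return (operation, DQ drop) for the step with the largest quality drop."""
    prev = source_dq
    worst = None
    for s in steps:
        drop = prev - s["dq_after"]          # positive drop = quality fell
        if worst is None or drop > worst[1]:
            worst = (s["op"], drop)
        prev = s["dq_after"]
    return worst

result = worst_drift(steps, source_dq=92.0)  # the Join caused the biggest drop
```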


Live run streaming

While a pipeline is running, the run history entry updates in real time. Each phase broadcasts its status as it starts and completes - you can watch the pipeline move through Provisioning → Connectivity → Extraction → Casting → Operations → Loading without refreshing.

If a phase fails, the error appears immediately in the phase log. You do not need to wait for the full run to finish to see what went wrong.
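A consumer of such a stream might look like the sketch below. The event shape and pull-style iteration are assumptions; in the product, updates are pushed to the UI.

```python
def watch(events):
    """Collect phase updates as they arrive; stop early on the first failure."""
    seen = []
    for ev in events:
        seen.append(f'{ev["phase"]}: {ev["status"]}')
        if ev["status"] == "failed":
            return seen, ev      # surface the failure without waiting for the run
    return seen, None

stream = iter([
    {"phase": "Provisioning", "status": "success"},
    {"phase": "Connectivity", "status": "failed", "error": "timeout"},
    {"phase": "Extraction",   "status": "success"},  # never consumed
])
seen, failure = watch(stream)
```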


Run summary KPIs

The top of the run history panel shows cumulative stats across all runs:

  • Total runs - all executions recorded
  • Success rate - percentage of runs that completed successfully
  • Failure rate - percentage of runs that failed
  • Average duration - mean execution time across all completed runs
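These KPIs are straightforward to derive from the run entries themselves. A sketch with invented data, which treats both Success and Failed runs as "completed" for the duration average (an assumption):

```python
def summarize(runs):
    total = len(runs)
    ok = sum(1 for r in runs if r["status"] == "Success")
    done = [r["duration_s"] for r in runs if r["status"] in ("Success", "Failed")]
    return {
        "total_runs": total,
        "success_rate": round(100 * ok / total, 1) if total else 0.0,
        "failure_rate": round(100 * (total - ok) / total, 1) if total else 0.0,
        "avg_duration_s": round(sum(done) / len(done), 1) if done else 0.0,
    }

runs = [
    {"status": "Success", "duration_s": 40.0},
    {"status": "Success", "duration_s": 50.0},
    {"status": "Failed",  "duration_s": 12.0},
]
kpis = summarize(runs)
```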

Notifications

  • Run completed successfully: In-app notification
  • Run failed: Email + in-app notification
  • Pipeline paused or resumed: In-app notification (all shared users)
  • Pipeline shared with you: In-app notification

Notifications are delivered in your account timezone.



Next steps

Need help? Email support@edilitics.com with your workspace, job ID, and context. We reply within one business day.
