Run History
View run status, phase logs, DQ scores, row counts, and execution time for every Transform pipeline, and pinpoint the exact phase where a failure occurred.
Run History records every execution of a Transform pipeline - scheduled runs, manually triggered runs, and failed attempts. Each entry shows what happened, when, for how long, and at which phase a failure occurred.
Two phases of a transformation
Understanding run history requires understanding that a transformation has two distinct phases:
1. Build phase (preview session): The user applies operations against a 16,600-row encrypted sample in the builder. No data is written to the destination. No run is recorded in history. This phase ends when the user saves the pipeline.
2. Execution phase (live run): The saved pipeline runs as a job - either immediately (Once schedule) or on its configured schedule. This is what appears in run history. The full source table is extracted, operations are applied in order, and results are written to the destination. The sketch after this list illustrates the distinction.
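A minimal sketch of the two contexts in Python, assuming hypothetical `Pipeline`, `preview`, and `run` names - Edilitics does not expose this as an API:

```python
from dataclasses import dataclass, field
from typing import Callable

SAMPLE_ROWS = 16_600  # size of the encrypted builder sample

@dataclass
class Pipeline:
    operations: list[Callable[[list], list]]
    history: list = field(default_factory=list)

    def preview(self, rows: list) -> list:
        """Build phase: apply operations to a sample; record nothing."""
        sample = rows[:SAMPLE_ROWS]
        for op in self.operations:
            sample = op(sample)
        return sample  # shown in the builder, never written out

    def run(self, rows: list, destination: list) -> None:
        """Execution phase: full table, ordered operations, load, record."""
        for op in self.operations:
            rows = op(rows)
        destination.extend(rows)  # Loading phase writes to the destination
        self.history.append({"status": "Success", "loaded": len(rows)})
```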
Run statuses
| Status | What it means |
|---|---|
| Scheduled | The pipeline has a future run queued based on its cron schedule. |
| Pending | The run has been triggered and is initialising before execution begins. |
| Running | The pipeline is actively executing. Live phase logs are streaming. |
| Success | All phases completed. Data written to destination. |
| Failed | An error occurred during execution. The run stopped at the phase that failed. |
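The statuses split into queued, in-flight, and terminal states. A hypothetical enum mirroring the table above - illustrative, not Edilitics code:

```python
from enum import Enum

class RunStatus(Enum):
    SCHEDULED = "Scheduled"  # future run queued by the cron schedule
    PENDING = "Pending"      # triggered, initialising
    RUNNING = "Running"      # executing, phase logs streaming
    SUCCESS = "Success"      # all phases completed, data written
    FAILED = "Failed"        # stopped at the failing phase

TERMINAL = {RunStatus.SUCCESS, RunStatus.FAILED}

def is_finished(status: RunStatus) -> bool:
    """Only Success and Failed are terminal; the rest are still in flight."""
    return status in TERMINAL
```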
Pipeline execution phases
Every run passes through a fixed sequence of phases. Run history shows the status and timing of each phase:
| Phase | What it does |
|---|---|
| Provisioning | Validates billing and compute balance. Initialises the isolated execution environment. |
| Connectivity | Connects to source and destination databases. Verifies the source table exists. |
| Extraction | Pulls the full source table into memory. |
| Data Quality Audit (source) | Profiles the source data on a 16,600-row sample. Records the source DQ score. |
| Casting | Maps source column types to destination-compatible types. Applied automatically when source and destination database types differ. |
| Operations | Executes each transformation operation in order (Filter, Join, Group By, Code Editor, etc.). DQ scores are updated after each operation so you can see the quality impact of every step. |
| Data Quality Audit (target) | Profiles the final transformed data. Records the target DQ score and health grade. |
| Metadata & AIR | Recalculates AI Readiness (AIR) scores and updates column metadata if enabled. |
| Loading | Writes the final transformed dataset to the destination table. |
When a run fails, the phase that failed is highlighted in the log. Expand that phase entry to see the error detail.
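As a sketch, the fixed sequence and the stop-at-failure behaviour look roughly like this - the `run_phase` callable is hypothetical, and the real executor is internal to Edilitics:

```python
import time

PHASES = [
    "Provisioning", "Connectivity", "Extraction",
    "Data Quality Audit (source)", "Casting", "Operations",
    "Data Quality Audit (target)", "Metadata & AIR", "Loading",
]

def execute(run_phase) -> list[dict]:
    """Run phases in order; stop at the first failure and record where."""
    log = []
    for name in PHASES:
        started = time.monotonic()
        try:
            run_phase(name)  # hypothetical callable; raises on failure
            status, error = "Success", None
        except Exception as exc:
            status, error = "Failed", str(exc)  # shown when the entry is expanded
        log.append({"phase": name, "status": status,
                    "seconds": time.monotonic() - started, "error": error})
        if status == "Failed":
            break  # later phases never start
    return log
```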
What is recorded per run
Each completed run records:
| Field | What it shows |
|---|---|
| Status | Success or Failed |
| Duration | Total wall-clock seconds from start to finish |
| Extracted | Row count and column count pulled from the source |
| Loaded | Row count and column count written to the destination |
| Source DQ score | Data quality score of the source data before transformation |
| Target DQ score | Data quality score of the transformed output |
| AIR score | The AI Readiness score calculated for the resulting table |
| Compute consumed | Seconds of compute used by this run |
| Triggered at | Timestamp when the run was initiated |
| Run type | Whether the run was scheduled, triggered manually, or a quick test |
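As a data structure, a run record might look like the following sketch; the field names are illustrative, not Edilitics' actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class RunRecord:
    status: str                 # "Success" or "Failed"
    duration_s: float           # wall-clock seconds, start to finish
    extracted: tuple[int, int]  # (rows, columns) pulled from the source
    loaded: tuple[int, int]     # (rows, columns) written to the destination
    source_dq: float            # DQ score before transformation
    target_dq: float            # DQ score of the transformed output
    air_score: float            # AI Readiness score of the resulting table
    compute_s: float            # compute seconds consumed by this run
    triggered_at: datetime      # when the run was initiated
    run_type: str               # "scheduled", "manual", or "quick test"
```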
User action log
In addition to run entries, the history log records manual user actions:
| Action | What is logged |
|---|---|
| Triggered | User manually triggered an immediate run outside the schedule |
| Paused | User paused the pipeline's recurring schedule |
| Resumed | User resumed the pipeline after a pause |
| Rescheduled | User changed the pipeline schedule (e.g., rescheduled to `0 8 * * 1-5`) |
These entries appear in the history timeline alongside run entries so you have a complete audit trail of both system executions and manual interventions.
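As a worked example of the cron expression above, the third-party croniter package (an assumption for illustration; Edilitics evaluates schedules server-side) shows that `0 8 * * 1-5` fires at 08:00 on weekdays only:

```python
from datetime import datetime
from croniter import croniter  # pip install croniter

# Starting from a Friday afternoon, the next two fire times skip the weekend.
it = croniter("0 8 * * 1-5", datetime(2024, 6, 7, 12, 0))
print(it.get_next(datetime))  # 2024-06-10 08:00:00 (Monday)
print(it.get_next(datetime))  # 2024-06-11 08:00:00 (Tuesday)
```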
Continuous Data Quality tracking
Unlike traditional ETL tools, which profile data only at the start or end of a pipeline, Edilitics performs Continuous Auditing. Within the Operations phase of a run log, you can expand any individual step to see:
- Row Impact: Exactly how many rows were added, removed, or modified by that specific operation.
- Quality Drift: The DQ score of the dataset immediately after that operation was applied.
This allows you to pinpoint exactly which operation caused a drop in data quality (e.g., a Join that introduced unexpected nulls) or where a data cleaning step successfully improved the score.
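Given the per-operation scores exposed in the Operations phase log, finding the step with the largest quality drop is simple arithmetic. A sketch with hypothetical scores:

```python
source_dq = 90.0
steps = [  # (operation, DQ score immediately after it ran)
    ("Filter", 92.0),
    ("Join", 74.5),         # e.g. a join that introduced unexpected nulls
    ("Code Editor", 81.0),  # a cleaning step that recovered some quality
]

prev = source_dq
worst, worst_drop = None, 0.0
for name, score in steps:
    drop = prev - score
    if drop > worst_drop:
        worst, worst_drop = name, drop
    prev = score

print(f"Largest quality drop: {worst} (-{worst_drop:.1f} points)")
# -> Largest quality drop: Join (-17.5 points)
```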
Live run streaming
While a pipeline is running, the run history entry updates in real time. Each phase broadcasts its status as it starts and completes - you can watch the pipeline move through Provisioning → Connectivity → Extraction → Casting → Operations → Loading without refreshing.
If a phase fails, the error appears immediately in the phase log. You do not need to wait for the full run to finish to see what went wrong.
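A hypothetical consumer of such a stream; the event source here is simulated, since the real events are streamed into the Edilitics UI:

```python
def watch(events):
    """Print each phase transition as it arrives, surfacing errors early."""
    for event in events:  # events arrive incrementally as the run progresses
        line = f"{event['phase']}: {event['status']}"
        if event.get("error"):
            line += f" - {event['error']}"  # visible before the run finishes
        print(line)

watch(iter([
    {"phase": "Provisioning", "status": "Success"},
    {"phase": "Connectivity", "status": "Failed",
     "error": "source table not found"},
]))
```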
Run summary KPIs
The top of the run history panel shows cumulative stats across all runs:
- Total runs - all executions recorded
- Success rate - percentage of runs that completed successfully
- Failure rate - percentage of runs that failed
- Average duration - mean execution time across all completed runs
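The arithmetic behind these KPIs, with a hypothetical list of runs:

```python
runs = [
    {"status": "Success", "duration_s": 42.0},
    {"status": "Failed",  "duration_s": 11.5},
    {"status": "Success", "duration_s": 38.5},
]

total = len(runs)
successes = sum(r["status"] == "Success" for r in runs)
success_rate = 100 * successes / total                     # 66.7%
failure_rate = 100 * (total - successes) / total           # 33.3%
avg_duration = sum(r["duration_s"] for r in runs) / total  # ~30.7 s
```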
Notifications
| Event | How you are notified |
|---|---|
| Run completed successfully | In-app notification |
| Run failed | Email + in-app notification |
| Pipeline paused or resumed | In-app notification (all shared users) |
| Pipeline shared with you | In-app notification |
Notifications are delivered in your account timezone.
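Rendering a UTC event time in an account timezone is a one-liner with the Python standard library; the IANA zone name below is an assumption for illustration:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

event_utc = datetime(2024, 6, 10, 2, 30, tzinfo=timezone.utc)
print(event_utc.astimezone(ZoneInfo("Asia/Kolkata")))  # 2024-06-10 08:00:00+05:30
```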
Next steps
Scheduling
Configure once, daily, or custom cron schedules. Pause, resume, and reschedule after creation.
Compute and Billing
Understand how compute seconds are metered per run and what happens when the balance is low.
Transform overview
Back to the Transform module overview.
Need help? Email support@edilitics.com with your workspace, job ID, and context. We reply within one business day.