Compute and Billing
Transform pipelines consume compute seconds per phase. Learn how execution is metered, how balances are validated, and what happens when compute runs low.
Transform pipelines run in isolated ephemeral compute that spins up fresh for each execution and terminates immediately on completion. You are charged only for active execution time - there is no idle or standby cost.
What is a compute second?
Compute is measured in seconds of wall-clock execution time. Every second your pipeline is actively running - extracting data, applying operations, loading results - consumes one compute second from your balance.
There is no per-operation surcharge. A Join and a Filter cost the same per second. The total cost of a run is determined entirely by how long it takes.
How compute is metered
Compute consumption is tracked phase by phase throughout each run:
| Phase | Compute consumed |
|---|---|
| Provisioning | Time to spin up a fresh isolated environment for the run |
| Connectivity | Time to connect to source and destination |
| Extraction | Time to pull the full source table |
| Casting | Time to map and convert column types |
| Operations | Time per operation, charged individually |
| Data Quality Audits | Time for source and target DQ profiling |
| Loading | Time to write results to the destination |
Each phase is timed separately. The total compute consumed for a run is the sum of all phase durations.
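The roll-up above can be sketched as follows. This is an illustrative model, not Edilitics source code; the phase names come from the table, and rounding partial seconds up to the next whole second is an assumption.

```python
import math

# Phases metered per run, as listed in the table above.
PHASES = [
    "provisioning", "connectivity", "extraction",
    "casting", "operations", "data_quality_audits", "loading",
]

def total_compute_seconds(phase_durations: dict[str, float]) -> int:
    """Sum per-phase wall-clock seconds into the run's total compute cost."""
    total = sum(phase_durations.get(phase, 0.0) for phase in PHASES)
    return math.ceil(total)  # assumption: partial seconds round up
```

Because cost is purely duration-based, a run spending 12.4 s in extraction and 3.2 s in loading consumes 16 compute seconds regardless of which operations it applied.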
Balance validation before every run
Before a run is authorised - whether scheduled or manually triggered - Edilitics checks your compute balance against the projected cost of the run.
How the projection works:
- New pipelines (no prior successful runs): the projected cost defaults to a conservative baseline.
- Established pipelines: the projected cost is based on the historical average of recent successful run durations.
If your balance is below the projected cost, the run is blocked before it starts. You will see a clear error in the Run History with a link to top up your compute balance.
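The pre-run check can be sketched like this. It is a minimal model of the logic described above, not the actual implementation: the 300-second baseline and the five-run averaging window are assumptions, not documented values.

```python
DEFAULT_BASELINE_S = 300  # assumed conservative baseline for new pipelines

def projected_cost_s(recent_successful_durations: list[float]) -> float:
    """Project a run's cost from prior successful run durations."""
    if not recent_successful_durations:
        return DEFAULT_BASELINE_S            # new pipeline: default baseline
    window = recent_successful_durations[-5:]  # assumed recency window
    return sum(window) / len(window)         # established: historical average

def authorise_run(balance_s: float, recent: list[float]) -> bool:
    """Block the run before it starts if the balance can't cover the projection."""
    return balance_s >= projected_cost_s(recent)
```

The same check applies whether the run is scheduled or manually triggered.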
What happens when compute runs out mid-run
If compute is exhausted while a run is in progress, the pipeline terminates immediately and the run is marked Failed. Edilitics does not pause or checkpoint mid-run - the run stops at whichever phase was executing when the balance was exhausted.
The failure is recorded in run history automatically, including the phase at which the run stopped.
Resuming after low compute
When resuming a paused pipeline, Edilitics checks that your balance covers at least one projected run before allowing resume. If the balance is insufficient, resume is blocked with a prompt to top up.
No data is stored in Edilitics infrastructure
Each run executes transiently. Source data is pulled into isolated memory, operations are applied, and the output is written to your destination database. Edilitics retains only the pipeline configuration - the operation sequence, column mappings, and schedule. No customer records are stored in Edilitics infrastructure at any point.
Viewing your compute balance
Your current compute balance and usage history are available in the Compute dashboard, accessible from the main account settings.
The run history for each pipeline shows the compute seconds consumed per run, so you can track which pipelines are the heaviest consumers.
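Identifying heavy consumers is a simple aggregation over those per-run records. A sketch, assuming hypothetical field names (`pipeline`, `compute_seconds`) for the run-history entries:

```python
from collections import defaultdict

def heaviest_consumers(runs: list[dict]) -> list[tuple[str, float]]:
    """Rank pipelines by total compute seconds consumed, heaviest first."""
    totals: defaultdict[str, float] = defaultdict(float)
    for run in runs:
        totals[run["pipeline"]] += run["compute_seconds"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```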
Plan compute limits
Each plan includes a monthly base compute allocation, a machine tier that determines processing speed, and a maximum duration per individual pipeline run.
| Plan | Machine | Base compute/month | Max run duration |
|---|---|---|---|
| Launch | 2 vCPU, 8 GB RAM | 3,600 s (60 min) | 60 min |
| Scale | 4 vCPU, 16 GB RAM | 7,200 s (120 min) | 120 min |
| Pinnacle | 8 vCPU, 32 GB RAM | 10,800 s (180 min) | 240 min |
If a pipeline run exceeds the maximum run duration for your plan, it terminates immediately and is marked Failed. This is the same behaviour as compute exhaustion - no data is written for that run.
The machine tier also determines row throughput: a Pinnacle run processes significantly more rows per compute second than a Launch run on the same data, because it has more CPU cores and memory available.
Addon compute packs
Each plan includes a monthly base compute allocation. When that is exhausted, you can top up with addon compute packs. Packs are one-time purchases and do not expire at the end of the billing period.
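One way to picture the resulting balance model: the base allocation resets each billing period, while pack seconds carry over indefinitely. The draw-down order (base first, then packs) is an assumption for illustration, not documented behaviour.

```python
from dataclasses import dataclass

@dataclass
class ComputeBalance:
    base_s: float       # remaining monthly base allocation
    pack_s: float = 0.0  # addon pack seconds; never expire

    def spend(self, seconds: float) -> None:
        # Assumption: the monthly base is drawn down before pack seconds.
        from_base = min(seconds, self.base_s)
        self.base_s -= from_base
        self.pack_s -= seconds - from_base

    def reset_period(self, monthly_base_s: float) -> None:
        self.base_s = monthly_base_s  # base resets; pack balance survives
```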
Five pack sizes are available, with volume discounts on larger packs:
| Pack | Compute hours | Discount |
|---|---|---|
| Boost | 10 h | - |
| Plus | 20 h | 5% off |
| Pro | 50 h | 12% off |
| Max | 100 h | 18% off |
| Ultra | 250 h | 25% off |
Row capacity estimates below assume an average row size of approximately 2 KB and 20-40 columns per table - consistent with the column limits per plan (Launch: 50 columns, Scale: 100 columns, Pinnacle: 200 columns). Actual capacity depends on your data's density, the number of operations in your pipeline, and whether operations expand or reduce the column count.
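Under those assumptions, the pack tables below imply a rough throughput per tier: dividing each pack's row capacity by its hours gives about 300k rows per compute hour on Launch, 600k on Scale, and 1.2M on Pinnacle. A back-of-envelope estimator:

```python
# Implied throughput per compute hour, derived from the pack tables
# (e.g. Launch Boost: ~3M rows / 10 h = 300k rows/h). Rough estimates only.
ROWS_PER_HOUR = {"Launch": 300_000, "Scale": 600_000, "Pinnacle": 1_200_000}

def estimated_hours(rows: int, plan: str) -> float:
    """Estimate compute hours needed to process `rows` on a given plan."""
    return rows / ROWS_PER_HOUR[plan]
```

Treat the output as a sizing guide, not a guarantee: denser rows, UDF-heavy pipelines, and column-expanding operations all reduce effective throughput.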
Launch plan - 2 vCPU, 8 GB RAM, up to 50 columns per saved table
| Pack | Hours | Row capacity (approx.) | Price (USD) |
|---|---|---|---|
| Boost | 10 h | ~3M rows | $18.99 |
| Plus | 20 h | ~6M rows | $37.99 |
| Pro | 50 h | ~15M rows | $87.99 |
| Max | 100 h | ~30M rows | $163.99 |
| Ultra | 250 h | ~75M rows | $372.99 |
Scale plan - 4 vCPU, 16 GB RAM, up to 100 columns per saved table
| Pack | Hours | Row capacity (approx.) | Price (USD) |
|---|---|---|---|
| Boost | 10 h | ~6M rows | $37.99 |
| Plus | 20 h | ~12M rows | $75.99 |
| Pro | 50 h | ~30M rows | $175.99 |
| Max | 100 h | ~60M rows | $327.99 |
| Ultra | 250 h | ~150M rows | $747.99 |
Pinnacle plan - 8 vCPU, 32 GB RAM, up to 200 columns per saved table
| Pack | Hours | Row capacity (approx.) | Price (USD) |
|---|---|---|---|
| Boost | 10 h | ~12M rows | $75.99 |
| Plus | 20 h | ~24M rows | $151.99 |
| Pro | 50 h | ~60M rows | $351.99 |
| Max | 100 h | ~120M rows | $655.99 |
| Ultra | 250 h | ~300M rows | $1,497.99 |
Row estimates reflect total data volume processed across all pipeline phases. Chained no-code operations are highly optimized via the Polars engine; however, Code Editor steps using custom Python logic (UDFs) may run slower, reducing total row throughput for the same compute hours.
Next steps
Run History
See compute consumed per run alongside status, phase logs, and DQ scores.
Scheduling
Configure schedules and understand how compute is validated before each scheduled run.
Transform overview
Back to the Transform module overview.
Need help? Email support@edilitics.com with your workspace, job ID, and context. We reply within one business day.