Transform
Build data transformation pipelines with 25 no-code operations and Python Polars. Get live DQ scoring on every step and run on schedules or custom crons.
The Transform module builds data transformation pipelines on top of connected integrations. Pick a source table, apply operations, write results to a destination, and schedule execution. Every step shows a live preview and a DQ score before anything is saved or run.
Pipelines support three modes - no-code operations, Python Polars scripts, and hybrid combinations of both - in a single ordered chain.
Transformation modes
| Mode | What it is | When to use it |
|---|---|---|
| No-Code Operations | 25 point-and-click operations: filter, join, group by, pivot, cast, clean, derive, and more | Most transformations - no code required |
| Code Editor | Write Python Polars scripts directly inside a pipeline step | Logic that no-code operations cannot express |
| Hybrid Pipelines | No-code and code steps in any order in the same pipeline | Complex pipelines that mix standard and custom logic |
How a transformation is structured
A transformation connects a source integration and destination integration through an ordered chain of operations:
Source integration → Base table → [Operation 1 → Operation 2 → ... → Operation N] → Destination integration → Destination table

- Operations execute in order. The output of each step is the input for the next.
- Every operation has a Save & Preview step, which runs against a 16,600-row sample, updates the data grid, and recalculates DQ scores before anything is committed.
- At least one operation is required to save a transformation.
- Source and destination can be the same integration or different ones.
- When source and destination are different database types, Edilitics automatically maps source column types to the types supported by the destination. This happens at preview time - the mapped schema is visible in the data grid from the first Save & Preview onwards.
Continuous Data Quality auditing
Unlike traditional ETL tools that profile data only at the beginning or end of a run, Edilitics performs Continuous Auditing. Every Save & Preview recalculates DQ scores across the full table and every column based on a 16,600-row sample of the preview data.
This allows you to track the quality impact of every transformation step as you build:
- Quality Delta Badges: Each operation card shows a ▲ +N or ▼ -N badge indicating the immediate impact on the dataset's health.
- Per-Column Breakdown: Click any column header in the preview table to see a detailed health profile: score, null count, distinct count, noncompliant count, and value ranges.
- Execution Persistence: These step-level scores are persisted in the Run History logs, allowing you to audit the quality impact of every live run.
See No-Code Operations - Data Quality Scoring for the full scoring model.
How to create a transformation
Pick a source integration
Open the Transform module and click New Transformation. Select the source integration - the connected database or file source the pipeline will read from.
Pick a base table
Select the table from the source to build the pipeline on. Edilitics loads a 16,600-row preview sample. The preview is encrypted, temporary, and automatically purged after 30 minutes of inactivity or when you save or exit.
Pick a destination integration
Select the destination integration - where the transformed data will be written. It can be the same as the source or a different connected integration.
When source and destination are different database types - for example PostgreSQL to BigQuery - Edilitics automatically translates source column types to the closest supported types in the destination. The mapped types are reflected in the preview grid immediately after the first Save & Preview. No manual column mapping required.
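Conceptually, the automatic type translation is a lookup from source type to the closest destination type. The mapping table below is a hypothetical illustration for the PostgreSQL-to-BigQuery example, not Edilitics' actual mapping:

```python
# Example mapping only -- not Edilitics' real translation table.
PG_TO_BIGQUERY = {
    "integer": "INT64",
    "bigint": "INT64",
    "numeric": "NUMERIC",
    "double precision": "FLOAT64",
    "text": "STRING",
    "varchar": "STRING",
    "boolean": "BOOL",
    "timestamp": "TIMESTAMP",
}

def map_schema(source_schema: dict[str, str]) -> dict[str, str]:
    # Fall back to STRING for source types with no direct equivalent.
    return {col: PG_TO_BIGQUERY.get(t.lower(), "STRING")
            for col, t in source_schema.items()}

mapped = map_schema({"id": "bigint", "note": "text", "price": "numeric"})
# mapped -> {"id": "INT64", "note": "STRING", "price": "NUMERIC"}
```

In Edilitics this translation happens automatically at preview time, so the mapped schema is what you see in the data grid.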
Add operations
Add at least one operation to the pipeline. Operations execute in the order they are added. For each operation:
- Configure the operation parameters
- Click Save & Preview to run it against the preview sample
- The data grid updates and a DQ delta badge appears on the operation card, showing the immediate impact on data quality.
Mix no-code operations and Code Editor steps in any order.
Set destination table name, title, description, and schedule
Before saving, fill in:
- Destination table name - the table that will be created or overwritten in the destination integration
- Title - name for the transformation, shown in the dashboard and logs
- Description - short note for team context
- Schedule - when the pipeline runs:
| Option | What it does |
|---|---|
| Once | Runs immediately on save. No recurring schedule. |
| Daily | Runs once per day at a specific time (HH:mm) of your choice. |
| Custom cron | Any valid 5-part cron expression. |
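A 5-part cron expression has the fields minute, hour, day-of-month, month, and day-of-week. The sketch below is a minimal structural check, not a full cron validator (real validation also checks each field's numeric range and name aliases):

```python
def is_valid_5_part_cron(expr: str) -> bool:
    """Minimal structural check for a 5-part cron expression.
    Illustration only -- does not validate field ranges or names like 'mon'."""
    fields = expr.split()
    if len(fields) != 5:
        return False
    allowed = set("0123456789*/,-")
    return all(set(f) <= allowed for f in fields)

is_valid_5_part_cron("0 6 * * 1-5")   # every weekday at 06:00 -> True
is_valid_5_part_cron("0 6 * *")       # only four fields -> False
```

For example, `0 6 * * 1-5` runs the pipeline at 06:00 every weekday, and `*/30 * * * *` runs it every 30 minutes.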
Save and run
Click Save. The pipeline is deployed. If scheduled as Once, it triggers immediately. For Daily or cron schedules, the first run executes at the next scheduled time.
Scheduling
Transformations support three schedule options: Once, Daily, and Custom cron. Schedule availability depends on your plan - recurring schedules (Daily, cron) require a plan that supports auto-update intervals for the Transform module.
A transformation can be paused and resumed at any time from the dashboard. Pausing stops future scheduled runs without deleting the pipeline.
Sharing and access
The owner of a transformation (the user who created it, or a workspace admin) can share it with other users in the same organisation by email. The Share option is not available to users on generic email domains.
| Action | Owner | Shared user |
|---|---|---|
| View operation sequence | ✓ | ✓ |
| View run history and logs | ✓ | ✓ |
| Trigger an immediate run | ✓ | ✓ |
| Pause and resume | ✓ | ✓ |
| Duplicate to own workspace | ✓ | - |
| Edit operation configuration | ✓ | - |
| Delete | ✓ | - |
| Share or revoke access | ✓ | - |
Workspace admins have the same permissions as the owner across all transformations in the organisation.
Notifications and Alerts
Edilitics keeps you and your team informed of pipeline status through event-driven alerts. Notifications are delivered in your account's configured timezone.
| Event | Notification Channel |
|---|---|
| Pipeline Success | In-app alert |
| Pipeline Failure | Email + in-app alert |
| Schedule Pause/Resume | In-app alert (Owner + All shared users) |
| Pipeline Shared | In-app alert |
For detailed logs and failure diagnostics, see Run History.
Next steps
No-Code Operations
25 point-and-click operations for filtering, joining, aggregating, cleaning, and reshaping data.
Code Editor
Write Python Polars scripts inside a pipeline step for logic no-code operations cannot express.
Hybrid Pipelines
Combine no-code operations and code steps in any order in a single pipeline.
Scheduling
Configure once, daily, or custom cron schedules. Pause, resume, and reschedule after creation.
Run History
Track run status, phase-level logs, DQ scores, and execution metrics for every pipeline run.
Compute and Billing
Understand how compute seconds are metered, how balances are validated before runs, and what happens when compute runs out.
Need help? Email support@edilitics.com with your workspace, job ID, and context. We reply within one business day.