No-Code Operations
25 operations for cleaning, shaping, enriching, and aggregating data without writing code. Each operation includes a live preview before saving.
The Transform module provides 25 no-code operations for building data pipelines. Each operation runs against a live preview sample so you see the output before saving. Operations can be chained in any order - the output of each step becomes the input for the next.
For transformations that require custom logic beyond what the 25 operations cover, the Code Editor lets you write Python Polars scripts inside the same pipeline.
Data Quality Scoring
Every operation in Edilitics is a measurable data quality event. After each Save & Preview, the backend recalculates DQ scores for the entire table and every column. The pipeline left panel shows the delta vs the previous step on each operation card. The preview table shows per-column scores alongside every column header and an overall table score above the grid.
The goal of a transformation pipeline is to improve data quality. The scoring system makes that visible at each step.
How scores are calculated
Each column receives a score from 0 to 100 based on three dimensions:
| Dimension | Weight | What it measures |
|---|---|---|
| Completeness | 50% | Share of non-null values: 1 - (null_count / total_rows) |
| Uniqueness | 25% | Share of distinct values: distinct_count / total_rows |
| Compliance | 25% | Share of values with no type or format violations |
The table score is a weighted average of all column scores. Columns named *_id, id, *_at, or *_date are weighted 3x - key and timestamp columns have a larger impact on the overall score. Columns with a _ prefix are weighted 0.5x.
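The exact backend implementation is not published, but the weights stated above can be sketched in plain Python to make the arithmetic concrete:

```python
def column_score(null_count, distinct_count, noncompliant_count, total_rows):
    """Score one column from the three dimensions, 0-100."""
    completeness = 1 - null_count / total_rows
    uniqueness = distinct_count / total_rows
    compliance = 1 - noncompliant_count / total_rows
    return 100 * (0.5 * completeness + 0.25 * uniqueness + 0.25 * compliance)

def column_weight(name):
    """3x for key/timestamp columns, 0.5x for _-prefixed columns, else 1x."""
    if name == "id" or name.endswith(("_id", "_at", "_date")):
        return 3.0
    if name.startswith("_"):
        return 0.5
    return 1.0

def table_score(column_scores):
    """Weighted average of column scores; column_scores maps name -> score."""
    total_weight = sum(column_weight(n) for n in column_scores)
    return sum(column_weight(n) * s for n, s in column_scores.items()) / total_weight
```

For example, a perfect `user_id` column (score 90) pulls the table score up three times harder than an ordinary column at 60: the table score is (3 × 90 + 1 × 60) / 4 = 82.5.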
Grade scale
| Score | Grade | Label |
|---|---|---|
| 90 - 100 | A | Good |
| 75 - 89 | B | Good |
| 60 - 74 | C | Fair |
| 45 - 59 | D | Fair |
| 0 - 44 | F | Poor |
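The grade scale is a straight threshold lookup; as a sketch:

```python
def grade(score):
    """Map a 0-100 DQ score to its letter grade and label per the scale above."""
    if score >= 90:
        return ("A", "Good")
    if score >= 75:
        return ("B", "Good")
    if score >= 60:
        return ("C", "Fair")
    if score >= 45:
        return ("D", "Fair")
    return ("F", "Poor")
```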
Delta display
Each completed operation card in the pipeline shows a delta badge:
- ▲ +N in green - the table DQ score improved by N points vs the previous step
- ▼ -N in red - the table DQ score dropped by N points vs the previous step
Use this to catch operations that inadvertently degrade quality - for example a Join that introduces a large number of nulls in unmatched rows, or a Cast that produces noncompliant values.
Column score popover
Hover over any column name in the preview table to see a per-column breakdown. The popover shows:
| Field | What it shows |
|---|---|
| DQ Score | Score from 0 to 100 with grade label (Good / Fair / Poor) |
| Nulls | Count of null values in the 16,600-row sample |
| Distinct | Count of unique values in the sample |
| Noncompliant | Count of values that violate type or format rules |
| Min | Minimum value in the sample |
| Max | Maximum value in the sample |
| Avg | Mean value (numeric columns only) |
| Top Values | Most frequent values in the sample |
The colour of the score badge in the popover matches the grade scale above - green for Good, purple for Fair (C), amber for Fair (D), red for Poor. A thin coloured bar under the column name in the header shows the score at a glance without opening the popover.
Operations that typically improve DQ
| Operation | Why DQ improves |
|---|---|
| Filter (remove nulls) | Completeness score rises as null rows are removed |
| Null Values Handling (fill) | Completeness rises as nulls are replaced |
| Drop Duplicate Rows | Uniqueness score rises |
| Cast Datatypes (correct cast) | Compliance score rises |
| Find & Replace (clean values) | Compliance score rises |
| String Extract (structured fields) | Completeness rises on new columns with clean values |
Operations that may reduce DQ
| Operation | Why DQ may drop |
|---|---|
| Left / Outer Join | Unmatched rows introduce nulls - completeness drops |
| Concat (schema mismatch) | Columns missing from one table are filled with nulls - completeness drops |
| Drop Columns | Removing high-scoring key columns lowers the weighted table average |
| Cast Datatypes (incompatible cast) | Incompatible values become null - completeness drops |
| Flatten (List explode) | Row multiplication can reduce uniqueness on key columns |
DQ scores are always calculated on a 16,600-row random sample - both during Save & Preview and during full pipeline execution.
Combine
Bring data from a second table into the current dataset.
| Operation | What it does |
|---|---|
| Joins | Merge two tables on a matching key column. Left, Right, Inner, and Outer joins. Cross-database supported. |
| Concat | Stack rows (vertical), align columns (horizontal), or merge matrices (diagonal) from a second table. Cross-database supported. |
Filter & Sort
Reduce and order rows.
| Operation | What it does |
|---|---|
| Filter | Keep or remove rows using conditions across one or more columns. Supports AND / OR logic. |
| Sampling | Extract a subset by percentage (Simple), fixed interval (Systematic), or per-group proportion (Stratified). |
| Drop Duplicate Rows | Remove exact or partial duplicates. Choose which columns to consider and which occurrence to keep. |
| Sort / Order By | Sort rows ascending or descending on one or more columns. Null handling configurable. |
Shape Columns
Add, remove, rename, or restructure columns.
| Operation | What it does |
|---|---|
| Drop / Rename Columns | Remove columns from the dataset or rename them individually. |
| Cast Datatypes | Convert a column to a different data type. Incompatible values become null. |
| Merge Columns | Concatenate two or more string columns into one with a configurable separator. |
| Split Columns | Split a string column into multiple columns on a delimiter or fixed-width position. |
Clean Values
Fix, standardise, or handle missing values within columns.
| Operation | What it does |
|---|---|
| Null Values Handling | Fill nulls with a literal, column mean/median/mode, forward fill, backward fill, or drop rows. |
| Find & Replace | Replace exact strings or regex patterns with a new value across a column. |
| Round Off Values | Round numeric columns to 0-5 decimal places. Modifies the column in place. |
| Text Case Conversion | Convert text to uppercase, lowercase, title case, or sentence case. |
| String Extract | Extract substrings into new columns using regex capture groups. |
Derive & Classify
Add new columns based on logic or computation.
| Operation | What it does |
|---|---|
| Conditional Column | Create a new column using IF-THEN-ELSE rules with AND / OR conditions across multiple columns. |
| Column Aggregations | Add a new column containing a per-row or dataset-level aggregate (sum, mean, min, max, count). |
| Bin / Discretize | Classify numeric values into labelled bins using Equal-Width, Quantile, or Custom boundary strategies. |
Date & Time
Compute and standardise temporal data.
| Operation | What it does |
|---|---|
| Datetime Delta | Compute the difference between two datetime columns in days, hours, minutes, or seconds. |
| Datetime Aggregations | Extract date parts (year, month, week, day, hour) or truncate datetime to a unit for grouping. |
| Manage Timezones | Convert datetime columns between timezones or localise UTC timestamps. |
Aggregate & Reshape
Summarise or restructure the dataset.
| Operation | What it does |
|---|---|
| Group By | Aggregate rows by one or more group columns using sum, mean, count, min, max, and more. |
| Pivot / Unpivot | Reshape between wide (pivot) and long (unpivot/melt) formats. |
| Window Functions | Add analytical columns - rank, lag, lead, rolling aggregates, cumulative totals - partitioned by a column. |
| Flatten | Expand nested Struct columns (unnest) or List columns (explode) into flat tabular rows. |
Next Steps
Code Editor
Write Python Polars scripts inside the pipeline for logic that no-code operations cannot express.
Joins
Merge two tables on a key column. Cross-database supported.
Filter
Keep or remove rows using multi-condition logic.
Group By
Aggregate rows by one or more columns.
Need help? Email support@edilitics.com with your workspace, job ID, and context. We reply within one business day.