AskEdi | Secure, Context‑Aware AI Analytics at Your Fingertips
AskEdi is Edilitics’ ad‑hoc and everyday analytics module — built for the moments when you need instant answers and the daily or weekly check‑ins that keep your business on track, all without writing a single query. Whether you’re in a board meeting, on a sales call, or making a high‑stakes operational decision, AskEdi delivers governed, context‑aware insights in seconds.
With a governance‑first design (the LLM never touches your source data), interactive visualizations, and per‑chat model selection, AskEdi bridges the gap between raw data and actionable intelligence — while keeping you in full control of what’s shared with AI models.
Why AskEdi Matters in Modern Data‑Driven Decisions
Traditional BI tools excel at predefined dashboards and scheduled reports — but struggle when you need a one‑off answer to an urgent question. Most teams face:
- **Slow turnaround for ad‑hoc requests.** Waiting for an analyst or writing SQL yourself can derail fast‑moving discussions.
- **Security concerns with AI tools.** Generic assistants lack data governance, risking exposure of PII and sensitive schema details.
- **Limited context in AI analysis.** Without schema and semantic context, models produce vague or incorrect results.
- **Lack of transparency.** Black‑box AI responses make it difficult to verify logic or trust the output.
How AskEdi Works
AskEdi enables secure, auditable, context‑aware conversations with your data:
- **Privacy & Context Modes.** Three simple modes control what metadata and data are shared with the LLM. No per‑field switches; modes cover everything.
- **Per‑chat Model Selection.** Pick your LLM provider per chat (Anthropic, DeepMind, or OpenAI), restricted to what your plan allows.
- **Query Transparency.** The Analysis view shows the AI‑generated query, the anonymized/processed query Edilitics executed, and the final query, so you can validate results independently.
- **Interactive Visualizations.** Automatic charts with tooltips, zoom, data view, and export (PNG/CSV) for easy sharing.
- **Governed Audit Logging.** Every chat, query, and mode choice is logged for user/admin review (see Audit Logs).
- **Performance Telemetry.** Each response shows LLM Provider Latency and AskEdi Latency (Edilitics’ post‑processing pipeline: safety check → query post‑processing → database execution → result verification → response).
- **Web + Mobile Web Access.** Access AskEdi seamlessly from desktop web or mobile web.
> **INFO:** Sample Data is fixed at 5 random rows (governance‑safe sampling) and only includes the columns you selected for the chat. Mode selection controls all sharing parameters.
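The sampling rule above can be sketched in a few lines. This is an illustrative approximation only, not Edilitics’ actual implementation; the `sample_for_chat` helper and the row structure are assumptions:

```python
import random

def sample_for_chat(rows, selected_columns, k=5, seed=None):
    """Governance-safe sampling sketch: at most k random rows,
    restricted to the columns chosen for the chat."""
    rng = random.Random(seed)
    picked = rng.sample(rows, min(k, len(rows)))
    # Project each sampled row down to the selected columns only.
    return [{c: row[c] for c in selected_columns} for row in picked]

# Example: only the selected columns survive, never more than 5 rows.
rows = [{"id": i, "email": f"u{i}@x.com", "amount": i * 10} for i in range(20)]
sample = sample_for_chat(rows, ["id", "amount"], seed=42)
```

Note that excluded columns (here, `email`) never appear in the sample, matching the rule that sampling only covers the columns you selected.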
Privacy & Context Modes
| Mode | What’s Shared with the LLM | Best For |
|---|---|---|
| Minimal | Anonymized table/column names; real data types; column descriptions only. No sample rows. No query outputs. | Maximum privacy; schema‑level reasoning without data exposure. |
| Balanced (Default) | Real (or anonymized) table/column names as configured by governance; column descriptions; sample data: 5 random rows; no full query outputs. | Everyday analytics and ad‑hoc analysis with more context while limiting exposure. |
| Full Context | Everything in Balanced plus real query output data (aggregates/tables) for deeper analysis. | When accuracy from actual results is mission‑critical and approved. |
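The table above can be summarized as a simple policy lookup. This is a conceptual sketch of the three modes, not the product’s API; the `MODE_POLICY` structure and field names are assumptions:

```python
# Illustrative mapping of Privacy & Context Modes to shared context.
# "per-governance" means real or anonymized names, as configured.
MODE_POLICY = {
    "minimal": {
        "schema_names": "anonymized",
        "data_types": True,
        "column_descriptions": True,
        "sample_rows": 0,
        "query_outputs": False,
    },
    "balanced": {
        "schema_names": "per-governance",
        "data_types": True,
        "column_descriptions": True,
        "sample_rows": 5,
        "query_outputs": False,
    },
    "full_context": {
        "schema_names": "per-governance",
        "data_types": True,
        "column_descriptions": True,
        "sample_rows": 5,
        "query_outputs": True,  # real aggregates/result tables
    },
}

def context_for_mode(mode):
    """Return the sharing policy for a chat's selected mode."""
    return MODE_POLICY[mode]
```

Each mode strictly widens what the previous one shares, which is why there is no need for per‑field switches.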
> **Important:** AskEdi provides AI‑assisted analytics. Outcomes depend on the quality of your data and metadata; garbage in, garbage out still applies. Use clean data, accurate column descriptions, and validate critical decisions via the Analysis view and your internal review process.
Creating a New AskEdi Chat
AskEdi’s flow gives you full control over privacy, context, and scope before you ask a question.
1) Select a Database or File
- Live sources via Integrations: MySQL, PostgreSQL, MongoDB, BigQuery, Redshift, Snowflake.
- Files: CSV, Excel, JSON, Parquet, Avro, Feather, Pickle, PDFs with tabular data.
- For clarity and governance, one table per chat is supported — this prevents joins, enforces row‑level policies, avoids cross‑source leakage, and ensures deterministic, auditable queries.
2) Choose a Table
- From your integration or file, pick the specific table to chat with.
- Preview table metadata to confirm you’ve selected the correct dataset.
3) Select Columns
- Choose the columns to include in the chat.
- AI Column Insights are mandatory for any table used in AskEdi. If a table lacks descriptions, configure them in Integrate before starting the chat.
- Plan caps: Launch up to 50 · Scale up to 100 · Pinnacle up to 200 columns per chat.
- Exclude any PII or sensitive fields.
- Column descriptions (from AI Column Insight) help the model understand semantics without exposing raw values.
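The column‑selection rules above (mandatory descriptions, plan‑based caps) can be sketched as a validation step. The plan names and limits come from this documentation; the helper itself is a hypothetical illustration:

```python
# Plan caps as documented: Launch 50, Scale 100, Pinnacle 200 columns per chat.
PLAN_COLUMN_CAPS = {"launch": 50, "scale": 100, "pinnacle": 200}

def validate_column_selection(columns, descriptions, plan):
    """Enforce the documented rules: every column needs a description
    (AI Column Insights) and the count must fit the plan's cap."""
    cap = PLAN_COLUMN_CAPS[plan]
    if len(columns) > cap:
        raise ValueError(f"{plan} plan allows at most {cap} columns per chat")
    missing = [c for c in columns if not descriptions.get(c)]
    if missing:
        raise ValueError(f"columns missing descriptions: {missing}")
    return True

ok = validate_column_selection(
    ["order_id", "amount"],
    {"order_id": "Unique order key", "amount": "Order total in USD"},
    "launch",
)
```

A table with any undescribed column fails validation, mirroring the requirement to configure AI Column Insights in Integrate before starting the chat.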
4) Pick Privacy & Context Mode
- Choose Minimal, Balanced, or Full Context. This replaces all legacy toggles (names, data types, descriptions, sample rows, query results).
- Balanced shares 5 random sample rows; Full Context additionally shares actual query outputs.
5) Choose LLM Provider (Per Chat)
- Select Anthropic, DeepMind, or OpenAI, based on what your plan enables.
6) Start Chat
- **Ask in plain language.** AskEdi will generate a query, execute it on Edilitics’ secure layer, and return results. The LLM never directly accesses your data source.
- **Real‑time execution & data residency.** Queries run in real time against your database; your data remains securely within your source systems and is never staged or extracted by Edilitics.
Analysis View
At any point, open Analysis to:
- Inspect the AI‑generated query.
- Compare with the anonymized/processed query that Edilitics executed (when schema was anonymized).
- View the final executed query (as applicable).
- Copy queries for independent verification.
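The relationship between the AI‑generated query and the anonymized/processed query shown in Analysis can be illustrated with a toy name mapping. This is a conceptual sketch, assuming a simple placeholder substitution; the real anonymization scheme is not documented here:

```python
import re

def anonymize_query(sql, name_map):
    """Replace real table/column names with placeholders (word-boundary match)."""
    for real, alias in name_map.items():
        sql = re.sub(rf"\b{re.escape(real)}\b", alias, sql)
    return sql

def deanonymize_query(sql, name_map):
    """Reverse the mapping to recover the executable query."""
    for real, alias in name_map.items():
        sql = re.sub(rf"\b{re.escape(alias)}\b", real, sql)
    return sql

# Hypothetical mapping: real schema names to anonymized placeholders.
name_map = {"customers": "t1", "email": "c1"}
ai_query = anonymize_query("SELECT email FROM customers", name_map)
executed = deanonymize_query(ai_query, name_map)
```

Comparing the two forms side by side is exactly what the Analysis view enables: the LLM reasons over placeholders while the executed query uses your real schema.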
Latency Telemetry
Each answer includes timing details:
- LLM Provider Latency — Time the chosen provider took to generate the answer.
- AskEdi Latency — Edilitics’ total post‑processing time: safety check → query post‑processing → database execution → result verification → response assembly.
This transparency helps teams tune mode selection, column scope, and provider choice for speed vs. depth.
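The two latency figures can be sketched as simple timing boundaries around the provider call and the post‑processing pipeline. The boundary placement here is an assumption for illustration, not a description of Edilitics’ internals:

```python
import time

def timed_response(call_provider, post_process):
    """Measure the two latencies reported with each answer."""
    t0 = time.perf_counter()
    draft = call_provider()        # LLM Provider Latency covers this call
    t1 = time.perf_counter()
    result = post_process(draft)   # AskEdi Latency: safety check, query
    t2 = time.perf_counter()       # post-processing, execution, verification,
    return {                       # and response assembly
        "result": result,
        "llm_provider_latency_s": t1 - t0,
        "askedi_latency_s": t2 - t1,
    }

# Stand-in callables for illustration only.
resp = timed_response(lambda: "select 1", lambda q: q.upper())
```

Separating the two numbers is what lets teams tell whether slowness comes from the chosen provider or from query execution on their own database.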
Security & Access Assurance
LLM Isolation & Query Execution
- The LLM never directly accesses your data source.
- Queries run through Edilitics’ secure execution layer for AskEdi with read‑only enforcement at runtime.
- The selected mode governs exactly what metadata, sample rows, or results are shared for reasoning.
Read‑Only Query Validation (LLM‑Generated)
- AskEdi automatically validates model‑generated SQL to be read‑only before execution.
- Non‑read operations (DDL/DML such as `INSERT`, `UPDATE`, `DELETE`, `DROP`, etc.) are blocked and surfaced with an explanation.
- For AskEdi, mutation attempts are blocked at the execution layer, even if your underlying connection allows writes for other modules (e.g., Replicate, Transform).
- Runtime guardrails (timeouts, sensible row limits) protect source systems.
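A minimal sketch of read‑only validation plus a row‑limit guardrail, assuming a simple keyword blocklist; a production validator would parse the statement rather than scan tokens, and these function names are illustrative:

```python
import re

# Statement keywords that indicate a non-read operation.
BLOCKED = {"INSERT", "UPDATE", "DELETE", "DROP", "ALTER", "CREATE",
           "TRUNCATE", "GRANT", "REVOKE", "MERGE"}

def assert_read_only(sql):
    """Reject SQL containing any mutating keyword (coarse sketch)."""
    tokens = set(re.findall(r"[A-Za-z_]+", sql.upper()))
    offending = tokens & BLOCKED
    if offending:
        raise PermissionError(f"non-read operation blocked: {sorted(offending)}")
    return sql

def guarded(sql, row_limit=10000):
    """Runtime guardrail sketch: enforce read-only, then cap rows returned."""
    sql = assert_read_only(sql.strip().rstrip(";"))
    if not re.search(r"\bLIMIT\b", sql, re.IGNORECASE):
        sql = f"{sql} LIMIT {row_limit}"
    return sql
```

The key property is that the check runs before execution and independently of the connection’s own privileges, which is why writes stay blocked in AskEdi even when the same credentials allow them elsewhere.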
Real‑Time Execution & Data Residency
- All queries execute live against your source; no staging, no extraction into Edilitics‑managed stores.
- Depending on mode, only approved metadata, sample rows, or results are shared with the LLM for reasoning.
Data Encryption & Storage
- All context, AI‑generated queries, and resulting data are encrypted in transit and at rest.
- Per‑tenant encryption keys ensure complete isolation.
- Secure storage ensures historical queries and results are fully auditable for compliance and governance.
Credentials & Source Security
- Managed in Integrate; credentials decrypt only at runtime and are never exposed to the LLM.
Audit Logging
- Every action is logged with timestamp, user ID, and parameters.
- User‑ and admin‑level audit logs provide full traceability for governance reviews and compliance reporting.
Governance by Design
- Single table per chat.
- Mandatory column selection (with plan‑based limits).
- Mode‑based context sharing that replaces individual toggles.
Sharing, Resuming & Export
- Share Chats (View‑Only): Invite teammates to view chats (no continuation by viewers). Viewers can download the full‑chat PDF.
- Resume Later: Re‑open any chat and click Continue to pick up where you left off — owner only.
- Export to PDF: Download a professional report of the entire conversation, visuals, and findings. Available to both chat owners and shared viewers.
Permissions at a Glance
| Capability | Owner | Shared Viewer |
|---|---|---|
| View chat | ✓ | ✓ |
| Download PDF | ✓ | ✓ |
| Continue chat | ✓ | - |
FAQ: Security & Governance
- **Why only one table per chat?** To protect governance and auditability: it prevents joins, cross‑source leakage, and unpredictable query logic.
- **Does the LLM see my data?** Only what the selected mode allows (e.g., 5 sample rows in Balanced); the LLM never touches your source directly.
- **Are queries guaranteed read‑only?** Yes. AskEdi validates SQL as read‑only and enforces this at runtime, even if your underlying connection supports write operations for other modules.
- **Is my data copied or staged?** No. Queries run in real time against your DB/warehouse; data is never staged or extracted by Edilitics.
- **Who can continue a chat?** Only the owner. Shared viewers have view and PDF download rights only.
Enterprise Support & Technical Assistance