# Why Holistics
What problem Holistics solves, why other BI tools struggle with the same problem, and what we do differently.
## The promise that keeps breaking
Every BI tool now ships an AI assistant. Some translate questions directly to SQL against the raw warehouse and guess at table joins, date columns, and what business metrics actually mean. The more sophisticated ones ground their AI in a semantic layer. Both run into the same wall on real analytical questions like period comparisons, cohort retention, and ratios across grains: either there's no semantic layer at all, or the semantic layer underneath can only carry first-order queries. AI inherits that limit. Analysts end up verifying every output. The queue doesn't shrink. It changes shape.
Self-service analytics has the same shape of failure. The first question works ("revenue by region last quarter"). The second one breaks ("revenue by region last quarter, compared to the same period last year, for customers active in both periods"). The user falls back to filing a ticket, or pulling the data into a spreadsheet, or, worst case, making a decision on a number that's subtly wrong.
Both failures share one root cause.
## The semantic ceiling
Most BI tools have a semantic layer: the place where metrics, dimensions, and relationships get defined so that everyone queries the same logic. The problem is that most semantic layers can only express first-order queries: pick a metric, slice it by a dimension, filter it, group it.
The moment a real analytical question shows up, the semantic layer can't carry it. Period-over-period comparisons, cohort retention, ratios across grains, nested aggregations: the logic leaks out into derived tables, spreadsheets, dashboard formulas, one-off SQL. We call this the semantic ceiling, and the leakage is semantic leakage.
Here's a concrete shape. Ask a typical BI tool: "for each country, give me median revenue per buyer." That's a sub-aggregation: sum revenue per buyer first, then take the median per country. SQL handles it with one CTE. Most semantic layers can't, because measures can't reference other measures. The standard workaround is a pre-built derived table that locks the inner dimension, so asking the same metric by marketing source requires another derived table. Multiply by every dimension and pattern, and the workaround grows linearly with question variety. See Nested aggregation: Looker vs Holistics for the full walkthrough with code, diagrams, and a video.
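The two-stage shape can be sketched in plain Python (the table and column names here are illustrative, not a real schema); it is the same shape the one-CTE SQL query takes: aggregate per buyer first, then aggregate those totals per country.

```python
from collections import defaultdict
from statistics import median

# Illustrative order rows: (country, buyer, revenue)
orders = [
    ("US", "a", 10), ("US", "a", 20), ("US", "b", 40),
    ("VN", "c", 5),  ("VN", "d", 15), ("VN", "e", 10),
]

# Stage 1 (the inner aggregation, the CTE in SQL):
# total revenue per buyer.
revenue_per_buyer = defaultdict(int)
for country, buyer, revenue in orders:
    revenue_per_buyer[(country, buyer)] += revenue

# Stage 2 (the outer aggregation): median of those
# per-buyer totals, grouped by country.
totals_by_country = defaultdict(list)
for (country, _buyer), total in revenue_per_buyer.items():
    totals_by_country[country].append(total)

result = {c: median(totals) for c, totals in totals_by_country.items()}
print(result)  # {'US': 35.0, 'VN': 10}
```

A semantic layer whose measures can reference other measures can express stage 1 as an input to stage 2; one that can't needs a pre-built derived table per inner dimension.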
This is why AI assistants on top of those tools fail in the same way. AI inherits the limits of the semantic layer it reasons from. If the semantic layer can't express "revenue per active customer in the comparable period last year," neither can the AI. It will either refuse, or guess, or quietly produce a confident-looking number that's wrong.
## What most semantic layers can't do
The semantic ceiling shows up in three structural gaps that most BI tools share:
- Not programmable. Most semantic layers are defined in YAML configs. YAML is schemaless (no type checking until runtime), ambiguous (different parsers handle values differently), and offers no abstractions. Reuse requires Jinja templating, which breaks at runtime and is painful to debug. You can't build modules, extend definitions, or use conditionals the way a real programming language allows.
- Not composable. Most semantic layers treat metrics as SQL strings tagged with metadata. A metric can't reference another metric. Period-over-period comparisons, cohort retention, and ratios across grains require pre-built derived tables (which lock dimensions) or leak into dashboard formulas and spreadsheets. The moment a user asks a variation, the semantic layer can't carry it.
- Not truly as-code. Most BI tools are UI-first with Git export as an afterthought. Changes happen in the UI and get synced to Git after the fact. Pull requests review the artifact, not the intent. Environments (dev/staging/prod) either don't exist or require manual copying. The "as-code" label gets applied without the durability or governance that engineering teams expect from version control.
These gaps compound: without programmability, you can't define reusable abstractions. Without composability, you can't express variations. Without true as-code, you can't govern what escapes. The semantic layer becomes a table of contents instead of a system of record.
## What Holistics does differently
Holistics raises the semantic ceiling with a uniquely expressive semantic layer, and keeps what's underneath governed with analytics-as-code infrastructure. The semantic layer is the differentiator; analytics-as-code is what keeps it durable.
### The differentiator: an expressive semantic layer
The semantic layer itself is written in AML, a typed modeling language with first-class abstractions, not YAML configs. Layered on top, AQL is a composable query language for metrics. The two operate at different levels of the stack: AML makes the semantic layer programmable; AQL makes the query/metric layer composable. Both are typed and IDE-supported, and both exist because YAML configs and SQL strings can't carry real analytics logic.
AML, typed modeling language. Most BI tools that call themselves "analytics-as-code" use YAML, but YAML is not a programming language. It's schemaless (no type checking until runtime), ambiguous (`enabled: yes` parses as a boolean in Python and a string in Node), and offers no abstractions, so reuse degrades into Jinja templating that breaks at runtime. AML (Analytics Modeling Language) is a typed language purpose-built for analytics. Models, dimensions, measures, datasets, and relationships are first-class language constructs, not generic key-value structures. Modules, extends, partials, constants, and functions keep analytics code DRY at scale. The full IDE experience (autocomplete, inline docs, go-to-definition, instant compile-time errors) comes with the language. See AML vs YAML for the structural argument.
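A toy Python analogy (not AML syntax) for the difference between schemaless configs and typed constructs: the dict version accepts a misspelled field silently and the error only surfaces when something later reads it, while the typed version fails the moment the definition is written.

```python
from dataclasses import dataclass

# Schemaless, YAML-style config: nothing validates keys or value types,
# so a typo like "agregation" is accepted and lies dormant until query time.
measure_config = {"name": "revenue", "agregation": "sum"}  # typo goes unnoticed

# Typed construct: the shape is part of the definition, so the same
# typo fails immediately, at construction time.
@dataclass
class Measure:
    name: str
    aggregation: str

try:
    Measure(name="revenue", agregation="sum")  # same typo
except TypeError as exc:
    print("caught at definition time:", exc)
```

The point of a typed modeling language is that this class of mistake never reaches a dashboard.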
AQL, composable metric language. In most BI tools, a metric is a SQL string tagged with metadata. AQL (Analytics Query Language) treats metrics as first-class composable objects instead. A metric like revenue isn't just a SUM(...) expression; it's an object that can be combined with time logic, level-of-detail modifiers, period comparisons, and other metrics, all without falling back to SQL. Cohort retention, period-over-period comparison, percent-of-total, running totals, nested aggregations: these stay inside the semantic layer instead of leaking out. AQL compiles deterministically to SQL, and the compiled output is inspectable, so engineers can verify exactly what runs against the warehouse. See AQL vs SQL for the structural argument.
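The idea of metrics as composable objects, rather than SQL strings, can be sketched in a few lines of Python (an illustrative analogy, not AQL syntax): each metric wraps a computation, and combining two metrics yields a new metric rather than a hand-written query.

```python
from dataclasses import dataclass
from typing import Callable

Row = dict  # a warehouse row, e.g. {"customer": "a", "revenue": 10}

@dataclass
class Metric:
    # A metric wraps a computation over a set of rows...
    fn: Callable[[list], float]

    def __call__(self, rows):
        return self.fn(rows)

    # ...and composes: dividing one metric by another yields a new metric.
    def __truediv__(self, other):
        return Metric(lambda rows: self.fn(rows) / other.fn(rows))

revenue = Metric(lambda rows: sum(r["revenue"] for r in rows))
active_customers = Metric(lambda rows: len({r["customer"] for r in rows}))

# "Revenue per active customer" is built from existing definitions,
# not redefined from scratch as a new SQL string.
revenue_per_customer = revenue / active_customers

rows = [{"customer": "a", "revenue": 10},
        {"customer": "a", "revenue": 20},
        {"customer": "b", "revenue": 30}]
print(revenue_per_customer(rows))  # 30.0
```

A string-based metric can't be divided, time-shifted, or re-grained without someone rewriting SQL; an object-based one can, which is why variations stay inside the layer.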
The consequence for AI: when Holistics AI receives a natural-language question, it generates AQL against an AML-defined semantic layer. AQL is database-agnostic, composable, and aware of your existing metric definitions; AML is the typed, governed substrate those definitions live in. The AI doesn't reinvent revenue from raw tables. It reuses the definitions you already wrote, in languages built to be reused. See Why Holistics AI is reliable for the full mechanism.
### The durability backbone: analytics-as-code
Every definition in Holistics is code, in a Git repository. Models, metrics, datasets, dashboards, relationships, permissions: all of it.
That means business logic gets:
- History. Every change has an author, a timestamp, and a diff.
- Review. Changes go through pull requests. Wrong answers don't get merged silently.
- Branches. Experiment in isolation; promote when ready.
- Environments. Develop in dev, test in staging, ship to prod through a real promotion workflow.
This is what makes the foundation durable rather than mutable. The semantic layer doesn't drift. As the stakes go up, the substrate the AI reasons from gets stronger over time, not weaker.
See the Analytics-as-Code overview for how this plays out in practice.
## What these foundations enable
Every BI tool advertises these outcomes. The foundations above are why ours hold up under real questions.
| Outcome | Why it actually works in Holistics |
|---|---|
| Trusted AI analytics | AI reasons from the same composable, governed definitions humans use, not raw schema |
| Governed self-service | The semantic layer carries cohort, period, and ratio questions natively. Variations stay inside the layer instead of leaking into spreadsheets |
| Embedded analytics | The same governed layer powers customer-facing AI and dashboards. One definition, many surfaces |
| Developer-friendly BI | Inspectable compiled SQL, IDE tooling, type checking, CI/CD: engineering practices applied to BI |
## How this compares
| | Typical BI tool | Holistics |
|---|---|---|
| Semantic layer definition | YAML configs (schemaless, Jinja workarounds) | AML (typed language with first-class abstractions) |
| Metric definition | SQL strings that can't combine | AQL (composable objects that can reference each other) |
| Programmability | Limited to YAML + Jinja templates | Full language: modules, extends, conditionals, IDE support |
| Composability | Metrics can't reference metrics | Metrics compose: period comparisons, level-of-detail, ratios stay inside |
| Follow-up questions | Leak into derived tables, spreadsheets | Stay inside the semantic layer |
| AI mechanism | Natural language → SQL against raw schema | Natural language → AQL against semantic layer → SQL |
| Governance model | UI-first; Git export as afterthought | Code-first; Git-native; PRs, branches, environments |
| Compiled SQL | Often opaque | Always inspectable |
| Embedding | Separate product surface or limited | Same governed layer powers internal + customer-facing |
## Where to go from here
- How Holistics works: the end-to-end architecture and workflow
- Holistics AI: how the AI is structurally different, and what it's good at
- AQL Overview: the language behind the expressive semantic layer
- Analytics-as-Code: how the foundation stays durable
- AML vs YAML and AQL vs SQL: why we built our own languages
- Coming from Looker?: the structural comparison and migration guide