Product Philosophy
This is the foundation that guides our vision, roadmap, and product design. As you evaluate Holistics, it may be useful to understand the philosophy and principles we subscribe to.
These principles aren't decorative. They're the reason Holistics is built around an expressive semantic layer and analytics-as-code infrastructure, and the reason Holistics AI is structurally different from text-to-SQL tools. Each principle below ladders up to that.
Death to silos
Barriers between teams and tools are the culprit behind many of the issues in modern data analytics. A key focus of Holistics' products and methodologies is breaking down barriers between teams, data processing, and storage tools, so that data flows freely inside an organization. We believe this flow is essential to an effective data-driven organization.
By committing to a codification-focused, API-first approach to product design, we ensure that no part of Holistics' tooling or process is locked in, and that each part can be easily integrated with a data team's existing processes and tools.
Declarative, not imperative
There are two schools of thought in data processing: declarative and imperative. The declarative approach emphasizes the end result and lets the tools handle the steps needed to reach it. The imperative approach, on the other hand, has end users specify the exact steps that produce that result.
We strongly prefer the former, because it lets users concern themselves only with what they want, not how to get it. How data processing is done depends on the technology of the day; what the end result represents is timeless. Abstracted away from technology-specific details, data processing becomes more robust and durable.
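To make the contrast concrete, here is a small illustrative sketch (plain Python, not Holistics code) computing the same total revenue two ways: imperatively, by spelling out every step, and declaratively, by stating the desired result and leaving the mechanics to the underlying machinery.

```python
# Illustrative only -- not Holistics syntax.

orders = [
    {"status": "paid", "amount": 120},
    {"status": "refunded", "amount": 80},
    {"status": "paid", "amount": 45},
]

# Imperative: the user specifies each step -- loop, filter, accumulate.
total_imperative = 0
for order in orders:
    if order["status"] == "paid":
        total_imperative += order["amount"]

# Declarative: the user states the end result they want. How it is
# computed (a loop today, a SQL engine or vectorized backend tomorrow)
# is left to the tooling.
total_declarative = sum(o["amount"] for o in orders if o["status"] == "paid")

print(total_imperative, total_declarative)  # both 165
```

The declarative version survives a change of engine unchanged, which is exactly the durability the paragraph above describes.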
Short feedback loop
Long feedback loops allow defects in data processing to propagate and cause untold damage in end results, eroding the trust placed in data teams. At Holistics, we believe all data processes should have short feedback loops, which enable early detection of issues and prevent defects from propagating downstream. This makes the process more robust and, ultimately, strengthens trust in the data team.
Reliability through codification
Tracking changes and dependencies is essential for keeping data processes robust and reproducible. Encoding the whole process in a textual language allows changes and dependencies to be stored in a powerful version control system that tracks what changed and who changed it.
As a result, data teams have peace of mind that business-critical data processes are safe and reproducible. Any change that causes a failure can easily be reverted to a previous, known-good state, and explicit dependencies make debugging failures straightforward. The reliability of the whole process improves as a consequence.
Automation through codification
Encoding the whole process in a textual language also provides the building blocks for further automation. Automation makes processes more reliable and repeatable, and saves the manual effort involved in data processing.
Experimentation through codification
Experimentation is key to getting more out of data analytics. Encoding data processing in a textual language, stored in a powerful version control system, empowers data team members to create forks or branches of the processing logic and run their own experiments without affecting the rest of the team. Less friction to experiment means more experiments that generate value for the organization.
Centralized definition, decentralized access
There has always been tension between keeping data processes robust and reliable and empowering every member of an organization with access to data. Centralizing definitions through codification, and letting all members build data products on top of that base, yields the best of both worlds. At Holistics, we strive toward this ideal in all of our products.
Thought-augmenting tools
At Holistics, we believe tools should not only help users achieve a certain outcome, but also elevate their cognitive abilities, assisting them on the path to mastery in their careers. We strive to do this by offering new perspectives and powerful new ways to approach existing workflows and processes.
AI amplifies the foundation
AI is only as trustworthy as the foundation it reasons from. Most AI analytics tools translate natural language directly to SQL against raw schema, and the output reflects that: plausible-looking answers that analysts still have to verify, with follow-up questions that fall apart on the second click.
The principles above (declarative definitions, codified business logic, short feedback loops, centralized definition with decentralized access) exist because they make the foundation strong enough that AI can reason from it correctly. When metrics are composable code objects (not SQL strings), AI can compose them. When business logic is versioned and reviewed (not mutable), AI's foundation stays consistent. When the semantic layer is expressive enough to handle real follow-up questions, AI doesn't fall off a cliff after the first answer.
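A minimal sketch of why composable metric objects matter. This is hypothetical code, not Holistics' actual semantic layer: it only illustrates that when a metric is a structured object rather than an opaque SQL string, new metrics can be derived from existing ones without duplicating or re-parsing logic.

```python
from dataclasses import dataclass

# Hypothetical sketch -- not Holistics' real semantic layer or syntax.
# A metric as a structured, composable object rather than a SQL string.

@dataclass(frozen=True)
class Metric:
    name: str
    expr: str  # an expression over model fields, not a full query

    def to_sql(self, table: str) -> str:
        # The tool (not the user, and not the AI) decides how the
        # definition is materialized as SQL.
        return f"SELECT {self.expr} AS {self.name} FROM {table}"

def ratio(name: str, numerator: Metric, denominator: Metric) -> Metric:
    # Composition: a new metric built from two existing definitions.
    return Metric(name, f"({numerator.expr}) / NULLIF(({denominator.expr}), 0)")

revenue = Metric("revenue", "SUM(amount)")
order_count = Metric("order_count", "COUNT(*)")
aov = ratio("avg_order_value", revenue, order_count)

print(aov.to_sql("orders"))
```

Because `aov` is derived from `revenue` and `order_count` by construction, anything reasoning over these objects, human or AI, composes definitions rather than guessing at strings.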
This is why we believe the right way to build trustworthy AI analytics is to first build the foundation that makes it possible. Holistics invests in the semantic layer and analytics-as-code infrastructure precisely because AI amplifies whatever foundation it stands on, and a weak foundation produces unreliable AI no matter how capable the model is.