We know your database contains your most sensitive data, which is why Holistics is designed to work directly with your database, and not store any of your database data.
Holistics is a fully cloud-based, one-stop business intelligence and extract-transform-load (ETL) software solution. It lets you move, merge, and query your marketing, operations, customer data, and more, straight from your reporting database using SQL.
Question: I work for a big enterprise. How difficult is it for me to plan and budget for what I need?
We understand, as we have worked with enterprises before. Contact us and we will be happy to work out a simplified Enterprise pricing plan based on your needs.
Yes. You can contact us here to discuss how that can be done.
Running a query on a Qubole data source, I encounter this error:
ERROR: extra data after last expected column
My data might have tab characters.
This is a limitation of the Qubole REST API, specifically the delimiters used in its view results API. Please update your query to replace all tab characters with spaces via the function replace(string, search, replace) (doc), e.g. replace(column_with_tab_character, '\<Tab on your keyboard goes here\>', ' ')
A detailed technical explanation is as follows:
1) Let's use this query as example data. The second column's data contains tabs.
select * from (
  values
    ('2017-01-01', 'Saigon Phu Quoc Resort & Spa 985', '1'),
    ('2017-01-02', 'No Happy Tower 1', '2')
) t (a, bug_with_tab, b)
2) The view results API has a parameter raw that switches the delimiter used in the response.
3) Currently, we use the API's default mode, i.e. raw = false. For the above query, we get this response from Qubole:
"2017-01-01\tSaigon Phu Quoc Resort & Spa\t\t\t985\t1\r\n2017-01-02\tNo\t\tHappy\t\tTower\t2\r\n"
Qubole uses the tab character (\t) to delimit columns and the '\r\n' sequence to delimit rows. We cannot distinguish whether a tab character in the response is a Qubole delimiter or actual data, which leads to an incorrect number of columns after parsing the response.
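The ambiguity can be sketched in a few lines of Python. This is an illustration of the parsing problem, not Holistics' actual parser; the response string is copied from step 3 above.

```python
# Qubole's default-mode response: columns delimited by \t, rows by \r\n.
response = ("2017-01-01\tSaigon Phu Quoc Resort & Spa\t\t\t985\t1\r\n"
            "2017-01-02\tNo\t\tHappy\t\tTower\t2\r\n")

# Naive parsing: split rows on \r\n, then columns on \t.
rows = [line.split("\t") for line in response.split("\r\n") if line]

# The query has 3 columns, but tabs inside the data inflate the count.
print([len(r) for r in rows])  # [6, 7] instead of [3, 3]
```

Because the in-data tabs are indistinguishable from delimiters, every extra tab adds a phantom column.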
4) We tried raw mode, which uses a different delimiter, and got this response:
"2017-01-01\u0001Saigon Phu Quoc Resort & Spa\t\t\t985\u00011\n2017-01-02\u0001Happy\tNo\t\tHappy\t\tTower\u00012\n\n"
Now, Qubole uses the ^A character (\u0001) to delimit columns and newline (\n) to delimit rows. The tab character is now safe, but the newline character is not. Newlines are common in descriptions and addresses, so using this mode would break other queries on such data.
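The raw-mode trade-off can be shown the same way. The strings below are illustrative data, not actual API responses:

```python
# Raw-mode delimiters: \u0001 between columns, \n between rows.
# A field containing tabs now parses correctly...
good = "2017-01-01\u0001value\twith\ttabs\u00011\n"
parsed = [line.split("\u0001") for line in good.split("\n") if line]
assert [len(r) for r in parsed] == [3]

# ...but a field containing a newline (common in addresses) does not:
bad = "2017-01-01\u0001line one\nline two\u00011\n"
rows = [line.split("\u0001") for line in bad.split("\n") if line]
print([len(r) for r in rows])  # [2, 2] instead of [3]
```

The embedded newline splits one logical row into two malformed ones, so raw mode merely moves the ambiguity from tabs to newlines.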
5) As we cannot work around this Qubole limitation on our side, we need your help to achieve a solution. Please replace the tab characters in your SQL query, or clean the data directly in the database.
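As a quick check that the replace() workaround removes the ambiguity, here is a sketch using SQLite via Python, whose replace(X, Y, Z) behaves like the function described above; the table and column names are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hotels (name TEXT)")
conn.execute("INSERT INTO hotels VALUES ('No\tHappy\tTower')")

# Replace every tab in the column with a single space (char(9) is tab).
cleaned = conn.execute(
    "SELECT replace(name, char(9), ' ') FROM hotels"
).fetchone()[0]
print(cleaned)  # No Happy Tower
```

With no tabs left in the data, the \t delimiter in the default-mode response is unambiguous again.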
(PostgreSQL) I get this error when running a report:
ERROR: canceling statement due to conflict with recovery DETAIL: User query might have needed to see row versions that must be removed.
This is likely because you're querying data against a PostgreSQL replica server
running in hot-standby mode. While your long-running query is executing, some of
the rows it needs are cleaned up (vacuumed) and deleted, which forces PostgreSQL
to cancel the running query.
The quick solution is to set this on the replica server:
hot_standby_feedback = on
For more information refer to this link on StackOverflow.
(MySQL) I have Chinese/Vietnamese characters, but when I query them into a report,
the characters are not displayed properly.
Make sure the default character set and collation in your database are set to UTF-8.
We recommend utf8 as the default character set and utf8_unicode_ci
as the default collation.
You can change them in
/etc/my.cnf on your database server.
# Change the MySQL database collation to UTF-8
ALTER DATABASE my_database DEFAULT COLLATE utf8_unicode_ci;

# Change the MySQL database character set to UTF-8
ALTER DATABASE my_database DEFAULT CHARACTER SET utf8;

# These settings should be in my.cnf
skip-character-set-client-handshake
collation_server=utf8_unicode_ci
collation_connection=utf8_unicode_ci
collation_database=utf8_unicode_ci
character_set_server=utf8
character_set_database=utf8
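The garbling itself is a plain encoding mismatch. This Python sketch reproduces it by decoding UTF-8 bytes with the wrong character set (illustrative only, not MySQL-specific code):

```python
# A Vietnamese string stored as UTF-8 bytes in the database.
original = "Xin chào"
utf8_bytes = original.encode("utf-8")

# A connection using a latin-1 charset misreads the bytes:
# each multi-byte character turns into mojibake.
garbled = utf8_bytes.decode("latin-1")
print(garbled != original)  # True

# A UTF-8 connection round-trips the text correctly.
assert utf8_bytes.decode("utf-8") == original
```

This is why both the server's character set and the connection handshake need to agree on UTF-8, as in the my.cnf settings above.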
When I try to execute a query on Presto, I encounter this error message: "Catalog must be specified when session catalog is not set".
Please make sure the database name (which for Presto means the catalog) is set properly in your data source configuration. Without a catalog set, Presto will not allow you to execute queries.