Switching production workloads to Firebolt typically involves updating configuration to point to Firebolt endpoints. If all validation is complete and data is already present, this process is straightforward.
Firebolt recommends using aggregating indexes where possible for regularly queried granularities (e.g., daily or weekly), and employing pre-joined or pre-aggregated tables to simplify and speed up dashboard queries. Ensure indexes align closely with filter criteria to optimize query performance across various granularities.
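For example, a minimal sketch of an aggregating index for daily rollups, assuming a hypothetical events table with tenant_id, event_date, and amount columns:

CREATE AGGREGATING INDEX events_daily_agg ON events (
  tenant_id,
  event_date,
  SUM(amount),
  COUNT(*)
);

Queries that group by tenant_id and event_date and compute SUM(amount) or COUNT(*) can then be answered from the index instead of scanning the raw rows.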
When a tenant comprises a large percentage of data (e.g., 20-25% of all data), avoid subqueries or joins that initially select large volumes of data and subsequently discard most rows. Instead, optimize queries and table structures to filter data as early and narrowly as possible, potentially using aggregated or pre-joined tables.
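As an illustration (the events and events_daily_by_tenant tables are hypothetical), serving the large tenant from a pre-aggregated table keeps the scanned row count close to what the query actually returns:

-- Aggregates the large tenant's raw events at query time
SELECT event_date, COUNT(*) AS event_count
FROM events
WHERE tenant_id = 42
GROUP BY event_date;

-- Reads a pre-aggregated table maintained at ingestion instead
SELECT event_date, event_count
FROM events_daily_by_tenant
WHERE tenant_id = 42;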
Firebolt supports both views and pre-joined tables. However, if most of the query execution time is spent on joins rather than aggregations, pre-joining tables (i.e., creating wider, denormalized tables during data ingestion) is often more performant. Views are effective for reusable SQL but may become slower with complex joins at scale. Aggregating indexes, which pre-materialize aggregation results for fast query responses, work best on single tables without cross-table joins.
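A minimal sketch of pre-joining during ingestion, assuming hypothetical events and customers tables: the join cost is paid once in a CTAS, and dashboards then query the single wide table, where an aggregating index can also be added:

CREATE TABLE events_wide AS
SELECT e.tenant_id,
       e.event_date,
       e.amount,
       c.customer_region
FROM events e
JOIN customers c ON c.customer_id = e.customer_id;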
Yes, primary indexes significantly impact query performance in Firebolt. Ensuring correct and optimized indexes is crucial, especially during migration. Indexes should be carefully reviewed and implemented based on query patterns and use cases.
You can add more users to your Firebolt account either through the web application or with SQL commands. First, create a login using the email address of your invitee as the login_id. Next, associate the login with a user and assign them the appropriate permissions. Your invitee will automatically receive an email invitation to join your account. For more information, visit our documentation.
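A sketch of that SQL flow is below; the email address, user name, and role are placeholders, and the exact options available may differ in your account:

CREATE LOGIN "invitee@example.com" WITH
  FIRST_NAME = 'Jane'
  LAST_NAME = 'Doe';

CREATE USER jane WITH LOGIN = "invitee@example.com";

GRANT ROLE analyst_role TO USER jane;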
Setting up Apache Superset with Firebolt involves:
- Installing Superset locally or on a server.
- Configuring the Firebolt connector with appropriate credentials and connection parameters.
- Testing queries in Superset to ensure Firebolt's indexing structure is leveraged efficiently.
- Optimizing queries for dashboard performance by using Firebolt's indexing features to minimize latency.
In this case, there were some challenges with reinstalling Superset, but Firebolt's team is available to assist with setup and troubleshooting.
Primary indexes should include the most frequently used filters, such as tenant_id and date/time columns if queries consistently filter data by tenant and date ranges. A well-chosen primary index ensures queries access only relevant data partitions, maintaining fast performance even as data volumes scale significantly.
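For example, a table definition sketch assuming queries filter by tenant and date (column names and types are illustrative):

CREATE TABLE events (
  tenant_id  BIGINT,
  event_date DATE,
  event_type TEXT,
  amount     DOUBLE PRECISION
) PRIMARY INDEX tenant_id, event_date;

Because the primary index orders data by tenant_id and then event_date, a query filtering on both columns reads only the matching ranges rather than the whole table.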
Query performance in high-cardinality joins is significantly impacted by data cardinality, joins that produce large intermediate row outputs, and data shuffles across nodes. Firebolt users should leverage the EXPLAIN ANALYZE functionality to identify expensive operations such as table scans, joins, and shuffles. Reducing data volume before joins through effective indexing, semi-joins, or aggregating indexes can mitigate these impacts.
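For example (table names are illustrative):

EXPLAIN (ANALYZE)
SELECT t.tenant_id, COUNT(*)
FROM fact_events f
JOIN dim_tenants t ON t.tenant_id = f.tenant_id
GROUP BY t.tenant_id;

The annotated plan reports per-operator row counts and timings, so look for scans, joins, or shuffles that produce or move far more rows than the final result.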
Yes, semi-joins (implemented via WHERE IN clauses) can be more performant than explicit joins, as Firebolt has built-in optimizations that leverage semi-joins for better data pruning. Using semi-joins helps reduce intermediate row counts earlier in query execution, especially beneficial for high-cardinality datasets.
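For example (hypothetical tables), when only existence in the other table matters:

-- Explicit inner join: can multiply rows if tenant_id repeats in flagged_tenants
SELECT COUNT(*)
FROM events e
JOIN flagged_tenants f ON f.tenant_id = e.tenant_id;

-- Semi-join via WHERE IN: events are pruned early and counted at most once
SELECT COUNT(*)
FROM events e
WHERE e.tenant_id IN (SELECT tenant_id FROM flagged_tenants);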
First, in your AWS account, configure the permission policy found in the help center article, https://docs.firebolt.io/Guides/loading-data/configuring-aws-role-to-access-amazon-s3.html#use-aws-iam-roles-to-access-amazon-s3. Keep your AWS Identity and Access Management (IAM) Console open, then start the process to upload data through the plus sign icon in the Develop space. After selecting an ingestion engine, you can choose 'IAM Role' as your authentication method and create an IAM role in the application. Copy the trust policy shown there and follow the rest of the instructions in the article to apply it to your AWS account. Note that you don't actually have to upload anything to create the IAM role.
In Firebolt's query profiling, CPU time refers to the actual processing time on CPU cores, while thread time represents the total wall-clock time across all threads and nodes. When thread time is significantly higher than CPU time, it typically indicates waits due to data loading from storage (like S3) or node concurrency constraints. This distinction helps diagnose bottlenecks related to IO-bound or compute-bound workloads.
For high concurrency, use multiple clusters within your engine. Clusters help handle more simultaneous queries by distributing the load. Keep in mind that cache is shared across nodes in a cluster, but not between clusters, so the right balance depends on your workload. You can also consider using auto-scaling to dynamically adjust resources based on demand.
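A rough sketch, assuming an engine named concurrency_engine; treat the exact parameter names (CLUSTERS, MIN_CLUSTERS, MAX_CLUSTERS) as assumptions to confirm against the current engine documentation for your account:

CREATE ENGINE concurrency_engine WITH NODES = 2 CLUSTERS = 2;

-- Or let the engine add and remove clusters with demand
ALTER ENGINE concurrency_engine SET MIN_CLUSTERS = 1 MAX_CLUSTERS = 3;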
Firebolt proactively maintains a status page at https://firebolt.statuspage.io/ where we keep you notified about any active incidents that may interrupt your access or services. From this page, you can also hit the 'Subscribe' button to stay informed by phone, RSS, email, or Slack.
You can label a query by setting the query_label system setting before running it:
cursor.execute("set query_label = '<label>';")
cursor.execute("your_query_here")
Here’s a full example using the Firebolt Python SDK:
from firebolt.db import connect
from firebolt.client.auth import ClientCredentials

# Service account credentials
id = '****'
secret = '****'

connection = connect(
    database="<db_name>",
    account_name="<account_name>",
    auth=ClientCredentials(id, secret)
)
cursor = connection.cursor()

# Start and select the engine and database to run against
cursor.execute("start engine <engine_name>")
cursor.execute("use engine <engine_name>")
cursor.execute("use database <database_name>")

# Label the query, then run it
cursor.execute("set query_label = '123';")
cursor.execute("select 1;")
print(cursor.fetchone())

connection.close()
If you created your database with upper-case letters in its name but without quotation marks, the saved name of your database will be all lowercase. Confirm the saved name of your database from the explorer, information_schema.catalogs, or SHOW CATALOGS. From other systems, such as an SDK, always use the 'official' saved name of your database. Within the application, you can still access your all-lowercase database name using upper-case letters without quotation marks, since unquoted identifiers are transformed to lowercase behind the scenes. If you want your object names to be case sensitive, always wrap the definitions in double quotes. Please note that definitions in information_schema are reconstructed and will not match exactly what was executed on creation, including the use of quotes.
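For example:

CREATE DATABASE MyAnalytics;      -- saved as myanalytics
CREATE DATABASE "MyAnalytics";    -- saved as MyAnalytics, case preserved
SHOW CATALOGS;                    -- confirm the saved names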
Firebolt is available as a connector directly from within Tableau. At this time, when you select the Firebolt connector from within Tableau, we will install a Firebolt V1 integration. To connect to V2, you will need to download the new connector and place it in the appropriate directories locally or on your server. You will also need a version of the JDBC driver compatible with Tableau. Full instructions can be found at https://docs.firebolt.io/Guides/integrations/tableau.html#integrate-with-tableau.
At this time, COPY FROM does not support manipulating data directly in your S3 bucket. However, starting with version 4.18, you can filter and alter data while reading it by using the READ table-valued functions, which support full glob patterns (https://en.wikipedia.org/wiki/Glob_(programming)), as sketched after this list:
- Insert into an existing table using INSERT INTO + READ_PARQUET or READ_CSV
- Create a new table with CREATE TABLE AS + READ_PARQUET or READ_CSV
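For example, a sketch with placeholder bucket paths and table names; credential options are omitted, so the files are assumed to be readable through your configured IAM role or a public bucket:

-- Filter and reshape while loading into an existing table
INSERT INTO events
SELECT tenant_id, event_date, amount
FROM READ_PARQUET('s3://my-bucket/events/2024/*/*.parquet')
WHERE tenant_id = 42;

-- Or create a new table directly from the files
CREATE TABLE events_2024 AS
SELECT *
FROM READ_CSV('s3://my-bucket/exports/2024-*.csv');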
Yes, we can view query history prior to the last engine restart. The support team can retrieve the query history for the customer if they provide the type of query (e.g., SELECT, INSERT), the approximate time it was executed, and which engine was used to execute it.