BigQuery vs Redshift

A detailed comparison

Compare BigQuery vs Redshift by the following set of categories:

Elasticity - separation of storage and compute
  BigQuery: Yes
  Amazon Redshift: RA3 and Spectrum only

Supported cloud infrastructure
  BigQuery: Google Cloud only
  Amazon Redshift: AWS only

Isolated tenancy - option for dedicated resources
  BigQuery: Multi-tenant; on-demand and reserved resources only
  Amazon Redshift: Isolated tenant; can use your own VPC

Compute - node types
  BigQuery: No choice over (fixed) slot size
  Amazon Redshift: Any size and type

Data - internal/external, writable storage
  BigQuery: External tables supported (4 concurrent queries by default)
  Amazon Redshift: Internal writable; external via Spectrum

Security - data
  BigQuery: Separate customer keys, column-level encryption, encryption at rest, AEAD individual value encryption, RBAC
  Amazon Redshift: Isolated/VPC tenancy for storage and compute, encryption at rest, RBAC

Security - network
  BigQuery: TLS, firewall (Google Cloud), VPN, whitelist/blacklist control as part of GCP
  Amazon Redshift: Firewall, SSL, PrivateLink, whitelist/blacklist control, isolated/VPC tenancy

BigQuery was one of the first decoupled storage and compute architectures, released before Snowflake. It is a unique piece of engineering and not a typical data warehouse, in part because it started as an on-demand serverless query engine. While its petabit network dramatically lowers data-access latency for any given compute step, the additional traffic from transferring and caching data in shared memory over the network after each slot finishes its job, instead of in a local cache, seems to eliminate any major advantage in actual benchmarks. If BigQuery does start to cache locally on slots, watch out Firebolt, you might have some closer competition.

Redshift has the oldest architecture but the most deployment options. It does not fully separate storage and compute. While it now has RA3 nodes, which let you scale compute and cache only the data you need locally, all compute still operates together; you cannot separate workloads. And while Redshift runs only on AWS, it offers the most options there, including the ability to deploy it in your own VPC.

BigQuery vs Redshift - Architecture

The biggest differences among cloud data warehouses are whether they separate storage and compute, how much they isolate data and compute, and which clouds they can run on.

Elasticity - (individual) query scalability
  BigQuery: Automatic allocation of each query to on-demand, reserved, or flex slots
  Amazon Redshift: Manual

Elasticity - user (concurrent query) scalability
  BigQuery: Limited to 100 concurrent users by default*
  Amazon Redshift: Autoscales up to 10 clusters; 15 queries per cluster; 50 queued queries total maximum

Write scalability - batch
  BigQuery: 1,500 load jobs/day (~1 per minute), 100,000 per project, 15TB per job, 6-hour max job time*
  Amazon Redshift: 1 master cluster

Write scalability - continuous
  BigQuery: 1GB/sec with no dedup, 100MB/sec with dedup, 100K rows per second per table, 100K-500K per project by default*
  Amazon Redshift: Limited (table-level locking)

Data scalability
  BigQuery: No real limit
  Amazon Redshift: Only with RA3: 128 RA3 nodes, 8PB of data

BigQuery on demand has several official limitations* that protect other on-demand customers from a rogue account or query. But you can easily get around most of them by switching to reserved slots and requesting higher limits. BigQuery is in production at very large scale at several companies. Even the limits on message-based ingestion are not an issue; BigQuery ingests into memory first and commits to storage later, which is a better architecture than that of Snowflake, Redshift, or Athena. Nevertheless, it is still more of a shared service than Snowflake or Redshift, which means it can theoretically hit shared limits.
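To put the default batch quotas above in perspective, here is a back-of-the-envelope sketch using only the figures quoted in this comparison (1,500 load jobs per day, 15TB per job); actual quotas vary by account and can be raised:

```python
# Rough feel for BigQuery's default batch-load quotas, using the
# article's figures. These are defaults, not hard ceilings.

DEFAULT_LOAD_JOBS_PER_DAY = 1_500
MAX_TB_PER_LOAD_JOB = 15

# The quota averages out to roughly one load job per minute.
jobs_per_minute = DEFAULT_LOAD_JOBS_PER_DAY / (24 * 60)

# Theoretical daily batch-ingest ceiling at the default quota.
max_batch_tb_per_day = DEFAULT_LOAD_JOBS_PER_DAY * MAX_TB_PER_LOAD_JOB

print(round(jobs_per_minute, 2))   # ~1.04 jobs/minute
print(max_batch_tb_per_day)        # 22500 TB/day
```

In other words, the per-day job count, not total volume, is the constraint most teams hit first.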

Redshift is limited in scale because even with RA3, it cannot distribute different workloads across clusters. While it can scale to up to 10 clusters automatically to support query concurrency, it can only handle a maximum of 50 queued queries across all clusters by default. In addition, because it locks at the table level, it is better suited for batch ingestion and limited in its write throughput.
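The concurrency ceiling described above follows directly from the numbers in the table; a quick sketch (figures are the article's defaults, and whether the 10 clusters include the main cluster depends on configuration):

```python
# Redshift Concurrency Scaling headroom from the figures above:
# up to 10 clusters, 15 queries per cluster, 50 queued queries by default.

MAX_CLUSTERS = 10
QUERIES_PER_CLUSTER = 15
MAX_QUEUED = 50

max_running = MAX_CLUSTERS * QUERIES_PER_CLUSTER   # concurrent queries
max_in_flight = max_running + MAX_QUEUED           # running + queued

print(max_running)     # 150
print(max_in_flight)   # 200
```

Beyond that point, additional queries are rejected rather than queued, which is why the queue limit matters for high-concurrency workloads.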

BigQuery vs Redshift - Scalability

There are three big differences among data warehouses and query engines that limit scalability: decoupled storage and compute, dedicated resources, and continuous ingestion.

Indexes
  BigQuery: None
  Amazon Redshift: None

Query optimization - performance
  BigQuery: Cost-based optimization
  Amazon Redshift: Limited cost-based optimization

Tuning
  BigQuery: Can only purchase reserved or flex slots
  Amazon Redshift: Choice of (limited) node types

Storage format
  BigQuery: Optimized (Capacitor) on Colossus
  Amazon Redshift: Native Redshift storage (not Spectrum)

Ingestion performance
  BigQuery: Writes 1 row at a time; limit of 100K messages/sec by default*
  Amazon Redshift: Batch-centric (table-level locking)

Ingestion latency
  BigQuery: Immediately visible during ingestion
  Amazon Redshift: Batch-centric (minute-level)

Partitioning
  BigQuery: Partitions, pruning
  Amazon Redshift: Distribution and sort keys

Caching
  BigQuery: Result cache (24 hours), shared memory
  Amazon Redshift: Result cache

Semi-structured data - native JSON functions within SQL
  BigQuery: Yes
  Amazon Redshift: Limited

Semi-structured data - native JSON storage type
  BigQuery: Can store as strings or STRUCT, but requires UDFs for compute
  Amazon Redshift: No

Semi-structured data - performance
  BigQuery: JSON strings slow; can store as STRUCT and use UDFs (JavaScript)
  Amazon Redshift: Slow (flattens JSON into table)

BigQuery has not demonstrated significantly better performance or price-performance compared to Snowflake or Redshift. While remote storage access is much faster using the Jupiter petabit network, the constant writing to and fetching from shared memory over the network for each stage of the query execution (in the DAG) seems to eliminate that advantage. So does the fact that BigQuery does not use indexing. It means slots still have to process all the data stored in larger segments without filtering down to smaller (sorted) ranges. However, BigQuery does have lower latency for message-based ingestion since it does in fact ingest one row at a time and make it immediately available for querying.
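Because there are no indexes, partition pruning is the main lever for cutting both scan time and on-demand cost. A sketch with a hypothetical table (the 0.1 TB/day size is an illustrative assumption; the $5/TB rate is from the pricing section of this article):

```python
# Why partition pruning matters without indexes: BigQuery on-demand
# bills per TB scanned, so filtering to one daily partition of a
# year-partitioned table cuts scanned bytes (and cost) ~365x.

PRICE_PER_TB = 5.0    # on-demand rate from the pricing section
table_tb = 36.5       # hypothetical table: 365 daily partitions x 0.1 TB
partitions = 365

full_scan_cost = table_tb * PRICE_PER_TB              # no pruning
one_day_cost = (table_tb / partitions) * PRICE_PER_TB # prune to one day

print(full_scan_cost)           # 182.5
print(round(one_day_cost, 2))   # 0.5
```

The same query costs cents instead of hundreds of dollars purely based on how much data the pruned scan touches.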

Redshift does provide a result cache for accelerating repetitive query workloads and also has more tuning options than some others. But it does not deliver much faster compute performance than other cloud data warehouses in benchmarks. While its storage access is more efficient, with smaller data block sizes being fetched over the network, it does not perform a lot of query optimization, and has no support for indexes. It also has less support for semi-structured data or low-latency ingestion at any reasonable scale.
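Redshift's sort keys do allow a weaker form of data skipping via per-block min/max metadata (zone maps), even without indexes. A toy sketch of the idea, with hypothetical block ranges:

```python
# Toy model of zone-map pruning on a sort key: each storage block
# records the min/max of the sort-key column, so a range predicate
# can skip whole blocks without any index. Data is illustrative.

blocks = [  # (min_sort_key, max_sort_key) per block, table sorted on the key
    (0, 99), (100, 199), (200, 299), (300, 399), (400, 499),
]

def blocks_to_scan(lo, hi):
    """Indexes of blocks whose [min, max] range overlaps [lo, hi]."""
    return [i for i, (bmin, bmax) in enumerate(blocks) if bmax >= lo and bmin <= hi]

print(blocks_to_scan(250, 320))   # only blocks 2 and 3 are read -> [2, 3]
```

This only helps when queries filter on the sort key, which is why sort-key choice is one of the few meaningful Redshift tuning decisions.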

BigQuery vs Redshift - Performance

Performance is the biggest challenge with most data warehouses today.
While decoupled storage and compute architectures improved scalability and simplified administration, for most data warehouses they introduced two bottlenecks: storage and compute. Most modern cloud data warehouses fetch entire partitions over the network instead of just fetching the specific data needed for each query. While many invest in caching, most do not invest heavily in query optimization. Most vendors also have not improved continuous ingestion or semi-structured data analytics performance, both of which are needed for operational and customer-facing use cases.

Reporting
  BigQuery: Yes
  Amazon Redshift: Yes

Dashboards
  BigQuery: Fixed view
  Amazon Redshift: Fixed view

Ad hoc
  BigQuery: Seconds-to-minutes first-time query performance
  Amazon Redshift: Seconds-to-minutes first-time query performance

Operational or customer-facing analytics (high concurrency, continuously updating/streaming data)
  BigQuery: Slower query performance; limited to 100K continuous writes per table and 100 concurrent users by default*
  Amazon Redshift: Slower query performance; limited continuous writes and concurrency, limited semi-structured data support

Data processing engine (exports or publishes data)
  BigQuery: 1GB max export file size, exports to Google Cloud only*
  Amazon Redshift: Unloads data as Parquet

Data science/ML
  BigQuery: BigQuery ML
  Amazon Redshift: Invoke ML (SageMaker) from SQL

BigQuery, like Snowflake, has broader support for use cases beyond reporting and dashboards. You can isolate workloads by assigning each workload to different reserved slots. Unlike Snowflake, Redshift, or Athena, BigQuery also supports low-latency streaming. But like these other three technologies, BigQuery lacks the performance to support interactive or ad hoc queries at scale. This eliminates BigQuery as a great option for many operational and customer-facing use cases, where users will tolerate at most a few seconds of wait, which translates to sub-second query times for the data warehouse.
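The jump from "a few seconds of wait" to "sub-second query times" comes from subtracting everything else in the request path. A rough budget, where every figure is an illustrative assumption rather than a measurement:

```python
# Illustrative latency budget for a customer-facing dashboard:
# once network, application, and rendering overheads are subtracted
# from a ~2s end-user budget, the warehouse query must be sub-second.

user_budget_ms = 2_000
network_rtt_ms = 300    # client <-> app <-> warehouse round trips (assumed)
app_overhead_ms = 400   # auth, query building, serialization (assumed)
render_ms = 400         # chart/dashboard rendering (assumed)

query_budget_ms = user_budget_ms - network_rtt_ms - app_overhead_ms - render_ms
print(query_budget_ms)  # 900 ms left for the warehouse query itself
```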

Redshift was originally designed to support traditional internal BI reporting and dashboard use cases for analysts. Without second-level performance, it cannot support interactive and ad hoc analytics well. It also has a default limit of 50 queued queries, which limits concurrency, and lacks support for continuous ingestion. All of these limitations make Redshift a poor fit for operational and customer-facing use cases.

BigQuery vs Redshift - Use cases

There are a host of different analytics use cases that can be supported by a data warehouse. Look at your legacy technologies and their workloads, as well as the new possible use cases, and figure out which ones you will need to support in the next few years. They include:

Reporting where relatively static reports are created by analysts against historical data, and used by executives, managers, and now increasingly by employees and customers

Dashboards created by analysts against historical or live data, and used by executives, managers, and increasingly by employees and customers via Web-based applications

Interactive and ad hoc analytics within dashboards or other tools for on-the-fly interactive analysis either by expert analysts, or increasingly by employees and customers via self-service

High performance analytics that require very large or complex queries with sub-second performance

Big data analytics using semi-structured or unstructured data and complex queries or functionality

Operational and customer-facing analytics built by development teams that deliver historical and live data and analytics to larger groups of employees and customers

Administration - deployment, management
  BigQuery: No administration or tuning
  Amazon Redshift: Easy to provision; harder to configure and manage

Choice - provision different cluster types on same data
  BigQuery: Yes
  Amazon Redshift: No

Choice - provision different number of nodes
  BigQuery: Up to 2,000 flex (on-demand) slots; purchase reserved or flex slots 100 at a time with no limits
  Amazon Redshift: Yes

Choice - provision different node types
  BigQuery: No
  Amazon Redshift: Yes (limited)

Pricing - compute
  BigQuery: On demand: $5/TB of data processed; flex slots: $4 per 100 slots per hour; reserved: $1,700/month per 100 slots
  Amazon Redshift: $0.25-$13 per node on demand; 40% less with upfront commitment

Pricing - storage
  BigQuery: $20/TB active storage, $10/TB inactive
  Amazon Redshift: Stored data only; RA3: $24 per TB per month; S3: standard AWS S3 costs

Pricing - transferred data
  BigQuery: Batch is free; streaming ingest $0.01 per 200MB ($50/TB); streaming reads $1.10/TB
  Amazon Redshift: Spectrum: $5 per TB scanned (10MB minimum per query)

BigQuery has three different pricing models: on demand, reserved, and flex pricing. If you need a data warehouse, you probably should not be using on demand unless each query scans relatively little data. You should be using reserved slots, with flex slots to absorb workload variations. When you do, your costs will not be far off from Snowflake or Redshift for regular data warehouse workloads. BigQuery also gives you the option to support infrequent analytics, more in line with Athena. In other words, it is the best of both more traditional worlds. Nevertheless, BigQuery's price-performance is in line with Snowflake and Redshift, which is up to 10x more expensive than Firebolt.
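The on-demand versus reserved decision reduces to simple arithmetic with the figures quoted above ($5/TB scanned on demand, $1,700/month per 100 reserved slots); a sketch:

```python
# Breakeven between BigQuery on-demand and reserved pricing,
# using the rates from this article's pricing table.

ON_DEMAND_PER_TB = 5.0
RESERVED_100_SLOTS_MONTH = 1_700.0

breakeven_tb = RESERVED_100_SLOTS_MONTH / ON_DEMAND_PER_TB
print(breakeven_tb)   # 340.0 -> scan >340 TB/month and reserved slots win

# Sanity check on the streaming rate: $0.01 per 200MB ~= $50 per TB
# (using decimal TB = 1,000,000 MB).
per_tb = (0.01 / 200) * 1_000_000
print(round(per_tb))  # 50
```

Whether 100 slots actually keep up with the workload that scans those 340 TB is a separate, workload-dependent question; the breakeven only bounds the cost side.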

Redshift, while it is arguably the most mature and feature-rich, is also the most like a traditional data warehouse in its limitations. This makes it the hardest to manage, and costly overall for traditional reporting and dashboards, and not as well suited for the newer use cases.

BigQuery vs Redshift - Cost

This is perhaps the strangest and yet the clearest comparison: cost. There are a lot of differences in the details, but at a high level the main differences should be clear.
