Snowflake vs Redshift

A detailed comparison

Compare Snowflake vs Redshift across the following categories: architecture, scalability, performance, use cases, and cost.

Snowflake vs Redshift - Architecture

The biggest differences among cloud data warehouses are whether they separate storage and compute, how much they isolate data and compute, and which clouds they can run on.

| Category | Snowflake | Amazon Redshift |
|---|---|---|
| Elasticity - separation of storage and compute | Yes | RA3 and Spectrum only |
| Supported cloud infrastructure | AWS, Azure, Google Cloud | AWS only |
| Isolated tenancy - option for dedicated resources | Multi-tenant; dedicated resources only with Virtual Private Snowflake (VPS) | Isolated tenancy; runs in your own VPC |
| Compute - node types | 1-128 nodes per warehouse; node types not disclosed | Choice of node counts and types |
| Data - internal/external, writable storage | External tables supported | Internal writable storage; external via Spectrum |
| Security - data | Separate customer keys (only VPS is isolated tenancy); encryption at rest; RBAC | Isolated/VPC tenancy for storage and compute; encryption at rest; RBAC |
| Security - network | Firewall, SSL, PrivateLink, whitelist/blacklist control; isolated for VPS only | Firewall, SSL, PrivateLink, whitelist/blacklist control; isolated/VPC tenancy |

Snowflake was one of the first cloud data warehouses with a decoupled storage and compute architecture, making it the first to offer nearly unlimited compute scale and workload isolation, as well as horizontal user scalability. It runs on AWS, Azure and GCP, and while compute and data are multi-tenant by default, it can also run as a dedicated Virtual Private Snowflake (VPS) deployment. But it only offers clusters of 1, 2, 4, ... up to 128 nodes, with no choice of node size; to get the biggest nodes you need to choose the biggest cluster.

Redshift has the oldest architecture of the two, but the most deployment options. It does not truly separate storage and compute: while RA3 nodes now let you scale compute and cache only the data you need locally, all compute still operates as a single cluster, so you cannot separate workloads. Redshift runs only on AWS, but there it offers the most isolation options, including the ability to deploy it in your own VPC.
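
As a rough illustration of the workload isolation Snowflake's separation of storage and compute allows, here is a minimal sketch, assuming the snowflake-connector-python package; the account, credentials, warehouse names, and the 'sales' table are all placeholders.

```python
# A minimal sketch of workload isolation on Snowflake, assuming the
# snowflake-connector-python package; the account, credentials, warehouse
# names, and the 'sales' table are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="my_password",
)
cur = conn.cursor()

# Two independently sized virtual warehouses over the same data. Note that
# only t-shirt sizes are available - there is no choice of node type.
cur.execute("""
    CREATE WAREHOUSE IF NOT EXISTS etl_wh
      WITH WAREHOUSE_SIZE = 'LARGE' AUTO_SUSPEND = 300 AUTO_RESUME = TRUE
""")
cur.execute("""
    CREATE WAREHOUSE IF NOT EXISTS bi_wh
      WITH WAREHOUSE_SIZE = 'SMALL' AUTO_SUSPEND = 60 AUTO_RESUME = TRUE
""")

# BI queries run on bi_wh, so they never compete with ETL for compute.
cur.execute("USE WAREHOUSE bi_wh")
cur.execute("SELECT COUNT(*) FROM sales")
print(cur.fetchone())
```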

Snowflake vs Redshift - Scalability

There are three big differences among data warehouses and query engines that determine scalability: decoupled storage and compute, dedicated resources, and continuous ingestion.

| Category | Snowflake | Amazon Redshift |
|---|---|---|
| Elasticity - (individual) query scalability | 1-click cluster resize; no choice of node size | Manual resize |
| Elasticity - user (concurrent query) scalability | Autoscale up to 10 warehouses; limited to 20 queued DML writes per table | Autoscale up to 10 clusters; 15 queries per cluster, 50 queued queries total (max) |
| Write scalability - batch | Strong | 1 master cluster |
| Write scalability - continuous | Limited to 20 queued DML writes per table; 1-minute or greater ingestion latency recommended | Limited (table-level locking) |
| Data scalability | No specified storage limit; 4XL warehouse (128 nodes) | Only with RA3: 128 RA3 nodes, 8 PB of data |

Snowflake delivers strong scalability with its decoupled storage and compute architecture. But it is inefficient at scaling for queries that require larger nodes, because the only way to get larger nodes is with larger clusters. It is also better suited to batch writes, because each write requires entire micro-partitions to be rewritten. Snowflake also transfers entire micro-partitions over the network, which creates a bottleneck at scale.

Redshift is limited in scale because even with RA3, it cannot distribute different workloads across clusters. While it can scale to up to 10 clusters automatically to support query concurrency, it can only handle a maximum of 50 queued queries across all clusters by default. In addition, because it locks at the table level, it is better suited for batch ingestion and limited in its write throughput.
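
To make the concurrency-scaling ceilings above concrete, the sketch below shows the corresponding configuration knobs. It assumes the snowflake-connector-python package; the credentials, warehouse name, and Redshift parameter group name are hypothetical.

```python
# A sketch of the concurrency-scaling knobs discussed above, assuming the
# snowflake-connector-python package; credentials and names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password"
)
cur = conn.cursor()

# A multi-cluster warehouse (an Enterprise edition feature) autoscales between
# 1 and 10 same-size clusters as concurrent queries start to queue - 10 is the
# ceiling referenced above.
cur.execute("""
    CREATE WAREHOUSE IF NOT EXISTS bi_wh
      WITH WAREHOUSE_SIZE = 'MEDIUM'
           MIN_CLUSTER_COUNT = 1
           MAX_CLUSTER_COUNT = 10
           SCALING_POLICY = 'STANDARD'
""")

# The rough Redshift counterpart is concurrency scaling, enabled per WLM queue
# and capped by the max_concurrency_scaling_clusters parameter (up to 10),
# e.g. via boto3:
#   import boto3
#   boto3.client("redshift").modify_cluster_parameter_group(
#       ParameterGroupName="my-params",
#       Parameters=[{"ParameterName": "max_concurrency_scaling_clusters",
#                    "ParameterValue": "10", "ApplyType": "dynamic"}],
#   )
```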

Snowflake vs Redshift - Performance

Performance is the biggest challenge with most data warehouses today. While decoupled storage and compute architectures improved scalability and simplified administration, for most data warehouses they introduced two bottlenecks: storage and compute. Most modern cloud data warehouses fetch entire partitions over the network instead of just the specific data needed for each query. While many invest in caching, most do not invest heavily in query optimization. Most vendors also have not improved continuous ingestion or semi-structured data analytics performance, both of which are needed for operational and customer-facing use cases.

| Category | Snowflake | Amazon Redshift |
|---|---|---|
| Indexes | None | None |
| Query optimization - performance | Cost-based optimization, vectorization | Limited cost-based optimization |
| Tuning | Choice of warehouse size only, not node types | Choice of (limited) node types |
| Storage format | Optimized micro-partition storage (S3), separate RAM | Native Redshift storage (not Spectrum) |
| Ingestion performance | Batch-centric (micro-partition-level locking; limit of 20 queued writes per table) | Batch-centric (table-level locking) |
| Ingestion latency | Batch writes preferred (1+ minute interval); requires rewrite of entire micro-partitions | Batch-centric (minute-level) |
| Partitioning | Micro-partitions / pruning, cluster keys | Distribution and sort keys |
| Caching | Result cache, materialized views | Result cache |
| Semi-structured data - native JSON functions within SQL | Yes | Limited |
| Semi-structured data - native JSON storage type | Limited VARIANT (single field) | No |
| Semi-structured data - performance | Slow (can require full load into RAM, full scan) | Slow (flattens JSON into a table) |

Snowflake has a modern storage and compute architecture, but one that is not completely optimized for performance. Its data access is inefficient: Snowflake has no indexing for fetching exactly the data a query needs. It only keeps track of value ranges within each micro-partition, which range in size from 50 MB to 150 MB uncompressed and can overlap. Whenever Snowflake does not have the data cached locally in the virtual warehouse, it has to fetch all of the micro-partitions that might contain the data, which can take seconds or longer. While Snowflake does some query plan optimization, it does not show up in query performance, which is 4-6,000x slower than Firebolt in customer benchmarks. The three biggest reasons are inefficient data access, a lack of indexing, and less query plan optimization. However, Snowflake does provide result set caching across virtual warehouses in addition to SSD caching within each virtual warehouse, which delivers solid performance for repetitive query workloads after the first query.

Redshift does provide a result cache for accelerating repetitive query workloads, and it has more tuning options than some others. But it does not deliver much faster compute performance than other cloud data warehouses in benchmarks. While its storage access is more efficient, with smaller data blocks fetched over the network, it does not perform a lot of query optimization and has no support for indexes. It also has limited support for semi-structured data and for low-latency ingestion at any reasonable scale.
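
Since neither engine supports indexes, data layout is the main pruning lever. The sketch below, assuming the snowflake-connector-python package, placeholder credentials, and a hypothetical 'events' table, shows the clustering-key tuning described above and how to inspect how effectively micro-partitions can be pruned.

```python
# A sketch of clustering-key tuning on Snowflake, assuming the
# snowflake-connector-python package; credentials and the 'events' table
# are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password"
)
cur = conn.cursor()

# Cluster the table on the column most queries filter by, so the min/max
# ranges tracked per micro-partition overlap less and pruning skips more data.
cur.execute("ALTER TABLE events CLUSTER BY (event_date)")

# Inspect clustering quality: high average depth/overlap means many
# micro-partitions still have to be fetched for a typical filtered query.
cur.execute("SELECT SYSTEM$CLUSTERING_INFORMATION('events', '(event_date)')")
print(cur.fetchone()[0])
```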

Snowflake vs Redshift - Use cases

There are a host of different analytics use cases that can be supported by a data warehouse. Look at your legacy technologies and their workloads, as well as the possible new use cases, and figure out which ones you will need to support in the next few years. They include:

Reporting, where relatively static reports are created by analysts against historical data, and used by executives, managers, and now increasingly by employees and customers

Dashboards, created by analysts against historical or live data, and used by executives, managers, and increasingly by employees and customers via Web-based applications

Interactive and ad hoc analytics within dashboards or other tools for on-the-fly interactive analysis, either by expert analysts or increasingly by employees and customers via self-service

High performance analytics that require very large or complex queries with sub-second performance

Big data analytics using semi-structured or unstructured data and complex queries or functionality

Operational and customer-facing analytics built by development teams that deliver historical and live data and analytics to larger groups of employees and customers

| Category | Snowflake | Amazon Redshift |
|---|---|---|
| Reporting | Yes | Yes |
| Dashboards | Fixed view | Fixed view |
| Ad hoc | Seconds-to-minutes first-time query performance | Seconds-to-minutes first-time query performance |
| Operational or customer-facing analytics (high concurrency, continuously updating / streaming data) | Limited continuous writes and concurrency; slow semi-structured data performance | Slower query performance; limited continuous writes and concurrency; limited semi-structured data support |
| Data processing engine (exports or publishes data) | Export query results or tables | Unload data as Parquet |
| Data science/ML | Spark, Arrow, Python connectors; integration with ML tools; export query results | Invoke ML (SageMaker) in SQL |

Snowflake has broader support for use cases beyond traditional reporting and dashboards. Its decoupled storage and compute architecture lets you isolate different workloads to meet SLAs, and it also supports high user concurrency. But Snowflake does not provide interactive or ad hoc query performance, because of inefficient data access combined with a lack of extensive indexing and query optimization. It also cannot support streaming or low-latency ingestion at intervals below one minute. All of these limitations exclude Snowflake from many operational use cases and most customer-facing applications that require second-level performance.

Redshift was originally designed to support traditional internal BI reporting and dashboard use cases for analysts. Without second-level performance, it cannot support most interactive and ad hoc analytics. It also has a default limit of 50 queued queries, which constrains concurrency, and it lacks support for continuous ingestion. All of these limitations make Redshift a poor fit for operational and customer-facing use cases.
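
If you are evaluating either warehouse for these operational and customer-facing use cases, a simple concurrency probe can surface the queuing and latency limits discussed above. This is a generic sketch: connect_fn, the SQL, and the concurrency level are placeholders for your own environment, and any DB-API connector works.

```python
# A generic concurrency probe for operational / customer-facing workloads:
# fire N copies of a dashboard-style query in parallel and report latency.
# connect_fn, the SQL, and the concurrency level are placeholders.
import time
from concurrent.futures import ThreadPoolExecutor

def timed_query(connect_fn, sql):
    conn = connect_fn()                      # one connection per worker
    cur = conn.cursor()
    start = time.perf_counter()
    cur.execute(sql)
    cur.fetchall()
    return time.perf_counter() - start

def probe(connect_fn, sql, concurrency=25):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(lambda _: timed_query(connect_fn, sql),
                                    range(concurrency)))
    median = latencies[len(latencies) // 2]
    p95 = latencies[int(0.95 * (len(latencies) - 1))]
    print(f"median={median:.2f}s  p95={p95:.2f}s")

# Hypothetical usage:
# probe(lambda: snowflake.connector.connect(...),
#       "SELECT customer_id, SUM(amount) FROM orders GROUP BY 1")
```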

Snowflake vs Redshift - Cost

This is perhaps the strangest and yet the clearest comparison: cost. There are a lot of differences in the details, but at a high level the main differences should be clear.

| Category | Snowflake | Amazon Redshift |
|---|---|---|
| Administration - deployment, management | Easy to deploy and resize; strong performance visibility, limited tuning | Easy to provision; harder to configure and manage |
| Choice - provision different cluster types on same data | Choice of fixed-size warehouses | No |
| Choice - provision different number of nodes | Yes | Yes |
| Choice - provision different node types | No | Yes (limited) |
| Pricing - compute | $2-$4+ per node per hour; fast analytics need a large cluster ($16-$32+ per hour or more) | $0.25-$13 per node per hour on demand; ~40% less with up-front (reserved) pricing |
| Pricing - storage | All storage: $40 per TB per month on demand, $23 per TB per month up front | Stored data only; RA3: $24 per TB per month; Spectrum: standard AWS S3 costs |
| Pricing - transferred data | None | Spectrum: $5 per TB scanned (10 MB minimum per query) |

Snowflake, as a more modern cloud data warehouse with decoupled storage and compute, is easier to manage for reporting and dashboards, and delivers strong user scalability. It also runs on clouds other than AWS. But like the others, Snowflake does not deliver sub-second performance for ad hoc, interactive analytics at any reasonable scale, nor does it support continuous ingestion well. It is also often very expensive to scale, especially for large data sets, complex queries, and semi-structured data.

Redshift, while arguably the most mature and feature-rich, is also the most like a traditional data warehouse in its limitations. This makes it the hardest to manage, costly overall for traditional reporting and dashboards, and not as well suited to the newer use cases.
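
As a back-of-the-envelope illustration of how these list prices add up, the sketch below compares monthly compute cost under purely assumed usage patterns; the node counts, per-node-hour rates, and active hours are illustrative assumptions, not quotes.

```python
# Back-of-the-envelope monthly compute cost using the list prices in the table
# above. Every number here (node counts, per-node-hour rates, active hours) is
# an illustrative assumption; real bills depend on edition, region, discounts
# and actual usage.
HOURS_PER_MONTH = 730

def monthly_compute(nodes, price_per_node_hour, hours_active):
    return nodes * price_per_node_hour * hours_active

# Snowflake: a Large warehouse (8 nodes) at ~$3 per node-hour, auto-suspended
# so it only runs ~8 hours per business day (~176 hours/month).
snowflake_cost = monthly_compute(nodes=8, price_per_node_hour=3.00,
                                 hours_active=176)

# Redshift: a 4-node ra3.4xlarge cluster at ~$3.26 per node-hour, running
# 24x7 because a provisioned cluster is typically always on.
redshift_cost = monthly_compute(nodes=4, price_per_node_hour=3.26,
                                hours_active=HOURS_PER_MONTH)

print(f"Snowflake ~ ${snowflake_cost:,.0f}/month, "
      f"Redshift ~ ${redshift_cost:,.0f}/month")
```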

