
BigQuery vs Athena (2024)

A detailed comparison

This comparison looks at BigQuery vs Athena across the following categories: architecture, scalability, performance, and use cases.

BigQuery vs Athena - Architecture

The biggest differences among cloud data warehouses are whether they separate storage and compute, how much they isolate data and compute, and which clouds they can run on.

BigQuery was one of the first decoupled storage and compute architectures. It is a unique piece of engineering and not a typical data warehouse, in part because it started as an on-demand serverless query engine. It runs as a multi-tenant service with shared resources, allocated as “slots”, where each slot represents a virtual CPU that executes SQL. BigQuery determines how many slots a query requires, without giving the user the ability to control the allocation. BigQuery can be priced on a $/TB-scanned basis or through slot reservations. A slot in BigQuery is logically equivalent to 0.5 vCPU and 0.5 GB of RAM, and there are multiple models for allocating slots.
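
Because on-demand pricing is driven by bytes scanned, a dry run can be used to estimate what a query will cost before it consumes any slots. Below is a minimal sketch using the google-cloud-bigquery Python client; the project, table, and $/TB rate are placeholder assumptions, so verify current BigQuery pricing for your region.

```python
# Minimal sketch: estimate on-demand BigQuery cost via a dry run.
# Assumes the google-cloud-bigquery library and default credentials;
# the project, table, and price per TB below are illustrative placeholders.
from google.cloud import bigquery

PRICE_PER_TB = 6.25  # assumed on-demand $/TB scanned; verify current pricing

client = bigquery.Client(project="my-project")  # hypothetical project ID

job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
query = """
    SELECT order_id, SUM(amount) AS total
    FROM `my-project.sales.orders`      -- hypothetical table
    WHERE order_date >= '2024-01-01'
    GROUP BY order_id
"""

# A dry run validates the query and reports the bytes it would process,
# without running it, consuming slots, or incurring cost.
job = client.query(query, job_config=job_config)
tb_scanned = job.total_bytes_processed / 1e12
print(f"Estimated scan: {tb_scanned:.4f} TB, roughly ${tb_scanned * PRICE_PER_TB:.2f}")
```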

Athena is serverless and built on a decoupled storage and compute architecture that queries data directly in S3, without the need to ingest or copy the data. It runs as a multi-tenant service with shared resources, and users have no control over the compute resources Athena allocates per query from the shared pool. For those requiring additional or dedicated resources, Athena offers reserved processing capacity in the form of Data Processing Units (DPUs), with each DPU providing 4 vCPU and 16 GB of RAM. DPU allocation ranges from 24 to 1,000 per region.
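
To illustrate the query-in-place model, here is a minimal sketch that submits a query over data already sitting in S3 using the standard boto3 Athena calls; the database, table, and bucket names are placeholder assumptions.

```python
# Minimal sketch: run an Athena query against data already in S3 using boto3.
# Database, table, and bucket names are illustrative placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

resp = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS cnt FROM web_logs GROUP BY status",
    QueryExecutionContext={"Database": "analytics_db"},                  # hypothetical
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},   # hypothetical
)
query_id = resp["QueryExecutionId"]

# Poll until the query finishes; Athena allocates compute from its shared pool.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(rows[:5])
```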

BigQuery vs Athena - Scalability

There are three big differences among data warehouses and query engines that limit scalability: decoupled storage and compute, dedicated resources, and continuous ingestion.

BigQuery scales very well to large data volumes and automatically assigns more compute resources behind the scenes when needed, in the form of “slots”. BigQuery works either in an on-demand pricing model, where slot assignment is entirely in the hands of BigQuery and the state of the shared resource pool, or in a flat-rate pricing model where slots are reserved in advance. Reserved slots give more control over compute resources, which makes scaling more predictable. Concurrency is limited to 100 users by default.
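
If you are weighing a slot reservation, one way to gauge how many slots you might need is to look at recent slot consumption in the jobs history. A rough sketch, assuming the google-cloud-bigquery client and jobs that ran in the US multi-region; the project ID and the 7-day window are arbitrary placeholders.

```python
# Rough sketch: estimate average slot usage from recent query history.
# Assumes jobs ran in the US multi-region; window and project are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project ID

sql = """
    SELECT
      -- total_slot_ms / elapsed ms approximates the average slots held per job
      SAFE_DIVIDE(total_slot_ms,
                  TIMESTAMP_DIFF(end_time, start_time, MILLISECOND)) AS avg_slots,
      total_bytes_processed
    FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
    WHERE job_type = 'QUERY'
      AND creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
    ORDER BY avg_slots DESC
    LIMIT 20
"""

for row in client.query(sql).result():
    print(row.avg_slots, row.total_bytes_processed)
```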

Athena is a shared multi-tenant service, with no guarantees on the amount or availability of the resources allocated to your queries. It can scale to large data volumes, but large scans can suffer from very long runtimes and frequent timeouts. Query concurrency is capped at 20 by default. If scalability is a top priority, Athena is probably not the best choice.

BigQuery vs Athena - Performance

Performance is the biggest challenge with most data warehouses today. While decoupled storage and compute architectures improved scalability and simplified administration, for most data warehouses they introduced two bottlenecks: storage and compute. Most modern cloud data warehouses fetch entire partitions over the network instead of just the specific data needed for each query. While many invest in caching, most do not invest heavily in query optimization. Most vendors have also not improved continuous ingestion or semi-structured data analytics performance, both of which are needed for operational and customer-facing use cases.

BigQuery lines up in benchmarks in the same ballpark as other cloud data warehouses, but it does come in consistently last on most queries. Beyond implementing according to best practices, there is little you can do to accelerate BigQuery performance, as it determines the amount of resources (slots) a query needs for you. BigQuery can be used together with BigQuery BI Engine for lower-latency analytics; however, BI Engine is limited in terms of scale because it runs in memory, with a maximum capacity of 100 GB.

Athena (and Presto) are designed to query data where it is, sacrificing storage and compute optimizations. This makes it very convenient for easy, immediate querying, but at the expense of performance, which typically puts Athena behind cloud data warehouses. Athena still does relatively well in performance benchmarks, especially when the external storage is managed by experts. While it supports partitions, there is no support for indexing, and because resources are pooled from a shared multi-tenant service, low latency and consistent performance are not Athena's sweet spot. A cloud data warehouse will be more performant than Athena in most cases.
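
Since partition pruning is the main lever Athena offers in place of indexes, a common pattern is to lay the S3 data out by a partition key and declare it in the table DDL. Below is a sketch of such a statement; the table, columns, and bucket are placeholder assumptions, and it would be submitted via start_query_execution as in the earlier sketch.

```python
# Sketch: a partitioned, columnar (Parquet) external table so Athena can prune
# partitions and scan less data. Names and paths are illustrative placeholders.
CREATE_PARTITIONED_TABLE = """
CREATE EXTERNAL TABLE IF NOT EXISTS analytics_db.web_logs (
    request_id STRING,
    status     INT,
    latency_ms DOUBLE
)
PARTITIONED BY (event_date STRING)   -- filters on event_date prune S3 prefixes
STORED AS PARQUET                    -- columnar format reduces bytes scanned
LOCATION 's3://my-data-lake/web_logs/'
"""

# After creating the table, new partitions still have to be registered,
# for example with `MSCK REPAIR TABLE analytics_db.web_logs`,
# `ALTER TABLE ... ADD PARTITION ...`, or partition projection.
```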

BigQuery vs Athena - Use cases

There are a host of different analytics use cases that can be supported by a data warehouse. Look at your legacy technologies and their workloads, as well as possible new use cases, and figure out which ones you will need to support in the next few years.

BigQuery is a mature, general-purpose data warehouse that lends itself well to internal BI and reporting. The fact that it is serverless in nature and tightly integrated with GCP makes it very convenient for ad hoc analytics and ML use cases on GCP. On the other hand, because BigQuery makes resource allocation decisions for you, it is not always the best fit for operational use cases and data apps where performance needs to be consistent and predictable.

Athena is a great choice for ad hoc analytics. You can keep the data where it is and start querying without worrying about hardware or much of anything else, since Athena is serverless and takes care of everything behind the scenes. However, it is not a great fit when you need consistent, fast query performance and/or high concurrency, which is why it is typically not the best choice for operational and customer-facing applications. It can also be used easily and flexibly for batch processing, which is often leveraged for ML use cases.
