AWS Aurora Serverless


Introduction to the AWS Aurora Database

Relational databases have been a cornerstone of IT infrastructure for a long time. The fundamentals of relational databases — ACID (Atomicity, Consistency, Isolation, Durability), transactions, SQL queries — have withstood the test of time. However, the requirements of users — and especially those working with AWS serverless — have evolved. Users now want to start with a small footprint and scale later as needed. They also want to isolate component failures and minimize their impact on the rest of the system. These are hard problems that require a paradigm shift away from old-school relational databases such as Oracle.

In addition, managing relational databases is not a trivial task. It is high-skill, labor-intensive work that requires the undivided attention of dedicated systems and database administrators. Several aspects of a typical relational database, such as scaling, fault tolerance, performance, and blast radius (the impact of a failure), have been persistent challenges for administrators.

And that’s where the AWS Aurora database comes in.

How Aurora Serverless Works

AWS Aurora comes in both serverless and non-serverless (provisioned) forms. Without the serverless approach, you need to select a DB instance size and create read replicas to increase read throughput. If workload requirements change, you can change the DB instance size and the number of read replicas. However, this approach works only for predictable workloads, because it requires manual capacity adjustments. Many workloads are intermittent and unpredictable: there can be periods of heavy load that last only a few minutes or hours, followed by long stretches of light activity, or even no activity at all.

With Aurora Serverless, you do not need to specify a DB instance class size. Instead, you create a database endpoint and set the minimum and maximum capacity. This database endpoint connects to a proxy fleet that routes the workload to a fleet of resources that is scaled automatically.

Aurora Serverless scales resources automatically within the capacity range you specify. The proxy fleet keeps connections continuous, so database clients do not need any code changes when new resources are added. Scaling is rapid because it draws on a pool of "warm" resources that are always ready to service requests. Storage and compute are separate layers, which allows the cluster to scale down to zero compute so that you pay only for storage.
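As a rough illustration, here is a minimal sketch of creating such a serverless cluster with boto3 and setting its capacity range; the cluster identifier, credentials, and region below are hypothetical placeholders, not values from this article.

```python
# Sketch: creating an Aurora Serverless v1 cluster with a capacity range.
# Identifiers, credentials, and region are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_cluster(
    DBClusterIdentifier="demo-serverless-cluster",   # hypothetical name
    Engine="aurora-mysql",
    EngineMode="serverless",
    MasterUsername="admin",
    MasterUserPassword="change-me-please",           # placeholder secret
    ScalingConfiguration={
        "MinCapacity": 2,              # smallest ACU the cluster can scale to
        "MaxCapacity": 64,             # largest ACU the cluster can scale to
        "AutoPause": True,             # allow scaling down to zero compute
        "SecondsUntilAutoPause": 300,  # pause after 5 minutes of inactivity
    },
)

# Applications connect to the cluster endpoint returned by
# describe_db_clusters(); the proxy fleet behind it routes traffic to
# whatever capacity is currently provisioned.
```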

Aurora Serverless Architecture


Aurora Serverless consists of two layers:

  • Storage Layer – This layer replicates data across multiple Availability Zones by default, and storage capacity automatically scales from 10 GiB to 64 TiB. It is the same storage layer used by a standard Aurora DB cluster.
  • Compute Layer – This layer is separated from storage. Here you configure the ACU (Aurora Capacity Unit), a combination of processing and memory capacity. It scales vertically from 2 ACU to 64 ACU, and because it is independent of storage, the whole compute layer can be paused. This layer handles scalability for unpredictable workloads.

Autoscaling for Aurora Serverless

Aurora Serverless autoscales resources within the minimum and maximum capacity you define. It can scale up to the maximum capacity and scale down to zero when there is no activity for a 5-minute period.
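For illustration, the sketch below uses boto3 to read a serverless cluster's current capacity and, optionally, to set it explicitly instead of waiting for autoscaling to react; the cluster identifier is a hypothetical placeholder.

```python
# Sketch: inspecting and explicitly setting the capacity of an Aurora
# Serverless v1 cluster (identifier is hypothetical).
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Current ACU allocation; 0 means the cluster is paused.
cluster = rds.describe_db_clusters(
    DBClusterIdentifier="demo-serverless-cluster"
)["DBClusters"][0]
print("Current capacity (ACU):", cluster.get("Capacity"))

# Force the cluster to a specific capacity rather than waiting for
# autoscaling to pick it up from load metrics.
rds.modify_current_db_cluster_capacity(
    DBClusterIdentifier="demo-serverless-cluster",
    Capacity=8,                      # target ACU value
    SecondsBeforeTimeout=300,        # how long to look for a scaling point
    TimeoutAction="ForceApplyCapacityChange",
)
```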


Scaling up is triggered by CPU utilization, the number of connections, and other performance metrics. After a scale-up, the cooldown period before a scale-down is 15 minutes; after a scale-down, the cooldown period before the next scale-down is 310 seconds. There is no cooldown period for scaling up: the cluster can scale up whenever necessary, even immediately after a scale-up or scale-down.

Autoscaling may be blocked for several reasons, such as long-running queries or transactions in progress, or temporary tables or table locks in use. Once those operations complete, scaling proceeds.

Automatic Pause and Resume

As described earlier, Aurora Serverless can scale down to zero capacity. You can choose to pause the cluster when there is no activity for a given amount of time; by default, this is 5 minutes. You can also disable the pausing feature entirely.

When the DB cluster is paused, no compute activity occurs and you are charged only for storage. When a connection is requested, the cluster resumes at the same ACU capacity it was running before the pause.
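As a hedged sketch, the auto-pause behaviour of an existing cluster can be adjusted through its scaling configuration; the identifier and timing values below are assumptions for illustration only.

```python
# Sketch: adjusting the auto-pause behaviour of an existing serverless
# cluster (identifier and values are hypothetical).
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.modify_db_cluster(
    DBClusterIdentifier="demo-serverless-cluster",
    ScalingConfiguration={
        "MinCapacity": 2,
        "MaxCapacity": 64,
        "AutoPause": True,               # set False to disable pausing
        "SecondsUntilAutoPause": 600,    # pause after 10 minutes of no activity
    },
)
```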


If the DB cluster remains paused for more than 7 days, it is backed up with a snapshot and restored from that snapshot when the next connection request arrives.

Aurora Serverless and Parameter Groups

Relational database engine configuration is managed by associating DB instances and Aurora clusters with parameter groups. There are two types of parameter groups:

DB parameter group – Acts as a container for DB engine configuration that is applied to one or more DB instances. These settings cover properties that can vary among the DB instances within an Aurora cluster, such as the sizes of memory buffers.

DB cluster parameter group – Acts as a container for DB engine configuration that is applied to every DB instance in an Aurora DB cluster. For example, a parameter such as innodb_file_per_table applies to every DB instance, so parameters that affect the physical storage layout belong in the cluster parameter group.

Because DB instances are not permanently associated with an Aurora Serverless cluster, Aurora Serverless relies only on DB cluster parameter groups rather than per-instance DB parameter groups. You can customize configuration in the DB cluster parameter group at both the instance level and the cluster level, and changes are applied immediately.

To apply a change to a DB cluster parameter group, Aurora Serverless starts a seamless scaling operation at the current capacity if the DB cluster is active and running. If the cluster is paused, Aurora resumes it first and then applies the change.
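The sketch below illustrates this flow with boto3: create a DB cluster parameter group, override a parameter, and attach the group to a serverless cluster. The group name, the parameter chosen, and the cluster identifier are hypothetical examples.

```python
# Sketch: creating a DB cluster parameter group and overriding one
# parameter for a serverless cluster (names are hypothetical).
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_cluster_parameter_group(
    DBClusterParameterGroupName="demo-serverless-params",
    DBParameterGroupFamily="aurora-mysql5.7",
    Description="Custom settings for the demo serverless cluster",
)

rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="demo-serverless-params",
    Parameters=[
        {
            "ParameterName": "character_set_server",
            "ParameterValue": "utf8mb4",
            "ApplyMethod": "immediate",   # serverless applies changes right away
        },
    ],
)

# Attach the group to the cluster; Aurora Serverless performs a seamless
# scale (or resumes a paused cluster) to pick up the change.
rds.modify_db_cluster(
    DBClusterIdentifier="demo-serverless-cluster",
    DBClusterParameterGroupName="demo-serverless-params",
)
```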

Aurora Serverless Pricing

With Aurora Serverless, you pay only for database storage, plus the database capacity (ACUs) and I/O your database consumes while it is active. Pricing varies by region.

For example, in us-east-1:

  • Capacity – $0.06 per ACU-hour
  • Database Storage – $0.10 per GiB-month
  • I/O requests – $0.20 per 1 million requests
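To get a rough sense of how these three dimensions add up, here is a back-of-the-envelope estimate using the us-east-1 rates above; the usage figures are made-up assumptions, not measurements.

```python
# Back-of-the-envelope monthly cost estimate using the us-east-1 rates
# above; the usage numbers are illustrative assumptions only.
ACU_HOUR_RATE = 0.06       # $ per ACU-hour
STORAGE_RATE = 0.10        # $ per GiB-month
IO_RATE = 0.20             # $ per 1 million requests

acu_hours = 4 * 8 * 22     # e.g. 4 ACU for 8 hours a day, 22 working days
storage_gib = 50           # average data stored for the month
io_requests = 30_000_000   # I/O requests for the month

total = (
    acu_hours * ACU_HOUR_RATE
    + storage_gib * STORAGE_RATE
    + (io_requests / 1_000_000) * IO_RATE
)
print(f"Estimated monthly cost: ${total:.2f}")   # 42.24 + 5.00 + 6.00 = 53.24
```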

Multi-AZ DB

Aurora Serverless supports a multi-AZ setup in which additional Availability Zones are used for failover.

The storage layer is distributed across multiple Availability Zones by default: data is replicated six times across three Availability Zones, with two copies in each AZ.

The compute layer consists of a single instance. If that instance fails, a new instance is spun up in another Availability Zone and becomes the primary DB instance, using the storage copies in that AZ. However, there is some delay in responding to requests until the new instance has fully started.
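Because of that startup window, client code typically retries failed connections rather than failing immediately. A minimal sketch of such retry logic, assuming PyMySQL as the driver and with a placeholder endpoint and credentials, might look like this:

```python
# Sketch: retrying a connection while a replacement instance starts up
# after a failover (endpoint, credentials, and timings are placeholders).
import time
import pymysql

def connect_with_retry(attempts=5, delay_seconds=5):
    for attempt in range(1, attempts + 1):
        try:
            return pymysql.connect(
                host="demo-serverless-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com",
                user="admin",
                password="change-me-please",
                database="appdb",
                connect_timeout=10,
            )
        except pymysql.err.OperationalError:
            if attempt == attempts:
                raise
            time.sleep(delay_seconds)   # wait for the new instance to come up

conn = connect_with_retry()
```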

Summary

In this article, you have seen how AWS Aurora Serverless works and how useful it can be for workloads where an application is used infrequently or has intermittent spikes in usage. It is also a good fit for development and test environments, where it makes efficient use of resources. As one of the first serverless relational databases on the market, it has attracted considerable attention from architects and developers.