Relational databases have been a cornerstone of IT infrastructure for a long time. The fundamentals of relational databases, such as relationships, ACID (Atomicity, Consistency, Isolation, Durability) guarantees, transactions, SQL queries, and stored procedures, have stood the test of time.
However, we need to accept that managing relational databases is not a trivial task. You will always need a database administrator to manage them. It is a high-skill, labor-intensive job that requires the undivided attention of dedicated system and database administrators. Several aspects of a typical relational database, such as scaling, fault tolerance, performance, and blast radius (the impact of a failure), have been persistent challenges for administrators.
On the other hand, the requirements of database users have evolved over time. Users now want to start with a small footprint and scale later as their needs grow. They want infrastructure that does not limit their scaling. They also want to isolate component failures and prevent them from affecting the whole system. These are hard problems that require a paradigm shift away from old-guard databases like Oracle.
And that need gave rise to the AWS Aurora database.
AWS Aurora can run in two modes: provisioned and serverless. With the provisioned approach, you select a DB instance size and create read replicas to increase read throughput. If the workload changes, you can change the DB instance size and the number of read replicas. However, this approach works only for predictable workloads, because it requires manual capacity adjustment. Some workloads are intermittent and unpredictable: periods of heavy load might last only a few minutes or hours, followed by long periods of light activity, or even no activity at all.
With Aurora Serverless, you do not need to specify a DB instance class size. Instead, you create a database endpoint and set a minimum and maximum capacity. This endpoint connects to a proxy fleet that routes the workload to a fleet of resources that are scaled automatically.
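As a rough sketch, setting that capacity range with the AWS SDK for Python (boto3) comes down to passing a ScalingConfiguration when creating the cluster. The cluster identifier, engine, and credential values below are illustrative assumptions, not values from this article; the boto3 call is shown commented out because it requires AWS credentials to run.

```python
# Capacity range for an Aurora Serverless (v1) cluster, expressed in ACUs.
# The 2-16 range here is an example, not a recommendation.
scaling_configuration = {
    "MinCapacity": 2,    # floor the cluster can scale down to
    "MaxCapacity": 16,   # ceiling the cluster can scale up to
}

# With boto3 installed and AWS credentials configured, the cluster would be
# created with EngineMode="serverless" (identifier and credentials assumed):
#
# import boto3
# rds = boto3.client("rds")
# rds.create_db_cluster(
#     DBClusterIdentifier="demo-serverless",   # hypothetical name
#     Engine="aurora-mysql",
#     EngineMode="serverless",
#     MasterUsername="admin",
#     MasterUserPassword="change-me",
#     ScalingConfiguration=scaling_configuration,
# )
print(scaling_configuration)
```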
Aurora Serverless scales resources automatically within the capacity range you specify. The proxy fleet keeps connections continuous, so database clients do not need any code changes when new resources are added. Scaling is rapid because it draws on a pool of “warm” resources that are always ready to service requests. Storage and compute are separate layers, which lets the cluster scale down to zero compute and pay only for storage.
Aurora Serverless consists of two layers:
Compute layer – the database resources, measured in ACUs, that process queries and scale automatically.
Storage layer – the distributed, replicated volume that holds the data and continues to be billed even when compute is paused.
Aurora Serverless autoscales resources within the defined capacity range (minimum and maximum). It can scale up to the maximum capacity and scale down to zero when there is no activity for a 5-minute period.
Scaling up is based on CPU utilization, the number of connections, and performance metrics. After a scale-up, the cooldown period before scaling down is 15 minutes. After a scale-down, the cooldown period before scaling down again is 310 seconds. There is no cooldown period for scaling up: it can scale up whenever necessary, even immediately after a scale-up or scale-down.
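The cooldown rules above can be modeled as a small state machine. This is a simplified simulation of the policy as described in this article, not Aurora's actual implementation; only the two timing constants come from the text.

```python
SCALE_DOWN_AFTER_UP = 15 * 60   # 15-minute cooldown after a scale-up
SCALE_DOWN_AFTER_DOWN = 310     # 310-second cooldown after a scale-down

class CooldownTracker:
    """Tracks whether a scale-down is currently allowed (toy model)."""

    def __init__(self):
        self.last_scale_up = None
        self.last_scale_down = None

    def record_scale_up(self, now):
        self.last_scale_up = now

    def record_scale_down(self, now):
        self.last_scale_down = now

    def can_scale_up(self, now):
        # No cooldown applies to scaling up.
        return True

    def can_scale_down(self, now):
        if self.last_scale_up is not None and now - self.last_scale_up < SCALE_DOWN_AFTER_UP:
            return False
        if self.last_scale_down is not None and now - self.last_scale_down < SCALE_DOWN_AFTER_DOWN:
            return False
        return True

tracker = CooldownTracker()
tracker.record_scale_up(now=0)
print(tracker.can_scale_down(now=60))    # False: inside the 15-minute window
print(tracker.can_scale_down(now=1000))  # True: 15 minutes have passed
```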
Autoscaling may be blocked for several reasons, for example when long-running queries or transactions are in progress or temporary table locks are in use. Once these tasks are completed, the scaling operation proceeds.
As described earlier, Aurora Serverless can scale down to zero capacity. You can choose to pause the cluster when there is no activity for a given period of time; by default, this is 5 minutes. You can also disable the pausing feature.
When the DB cluster is paused, no compute activity occurs and you are charged only for storage. When a connection is next requested, the cluster resumes at the same ACU capacity it was running before the pause.
If the DB cluster stays paused for more than 7 days, it is backed up with a snapshot and restored from that snapshot when a connection request arrives.
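The pause-and-resume behavior can be sketched as a toy model: track the last activity time, drop compute to zero after the idle threshold, and restore the previous capacity on the next connection. The 5-minute default comes from the text; everything else here is a simplified illustration, not the real service.

```python
AUTO_PAUSE_AFTER = 5 * 60  # default: pause after 5 minutes of inactivity

class ServerlessCluster:
    """Toy model of Aurora Serverless auto-pause/resume (not the real service)."""

    def __init__(self, capacity_acu):
        self.capacity_acu = capacity_acu
        self.paused = False
        self._saved_capacity = None
        self.last_activity = 0

    def tick(self, now):
        """Pause the cluster once it has been idle long enough."""
        if not self.paused and now - self.last_activity >= AUTO_PAUSE_AFTER:
            self._saved_capacity = self.capacity_acu
            self.capacity_acu = 0      # zero compute: storage charges only
            self.paused = True

    def connect(self, now):
        """A connection request resumes the cluster at its pre-pause capacity."""
        if self.paused:
            self.capacity_acu = self._saved_capacity
            self.paused = False
        self.last_activity = now

cluster = ServerlessCluster(capacity_acu=4)
cluster.tick(now=600)          # 10 minutes idle: cluster pauses
print(cluster.capacity_acu)    # 0
cluster.connect(now=700)       # resumes at the pre-pause capacity
print(cluster.capacity_acu)    # 4
```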
Relational database engine configuration is managed by associating DB instances and Aurora clusters with parameter groups. There are two types of parameter groups:
DB parameter group – It acts as a container for DB engine configuration that is applied to one or more DB instances. These settings apply to properties that can vary among the DB instances within an Aurora cluster, such as the sizes of memory buffers.
DB cluster parameter group – It acts as a container for DB engine configuration that is applied to every DB instance in an Aurora DB cluster. For example, a parameter such as innodb_file_per_table applies to every DB instance. Thus, parameters that affect the physical storage layout belong in the cluster parameter group.
Aurora Serverless uses only DB cluster parameter groups. Because DB instances are not permanently associated with an Aurora Serverless cluster, there is no per-instance DB parameter group; instead, you customize both instance-level and cluster-level configuration in the DB cluster parameter group. Changes are applied immediately.
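As a sketch, applying a setting through a DB cluster parameter group with boto3 might look like the following. The parameter group name and the parameter itself are illustrative assumptions; the boto3 call is commented out because it requires AWS credentials.

```python
# A parameter change targeted at a DB cluster parameter group.
# ApplyMethod "immediate" matches the apply-immediately behavior described above.
parameter_change = {
    "ParameterName": "max_connections",   # illustrative parameter
    "ParameterValue": "1000",             # illustrative value
    "ApplyMethod": "immediate",
}

# With boto3 and credentials in place (group name is an assumption):
#
# import boto3
# rds = boto3.client("rds")
# rds.modify_db_cluster_parameter_group(
#     DBClusterParameterGroupName="demo-serverless-params",
#     Parameters=[parameter_change],
# )
print(parameter_change)
```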
When you apply a change to a DB cluster parameter group, if the DB cluster is active and running, Aurora Serverless starts a seamless scale at the current capacity. If the cluster is paused, it is resumed first and then the change is applied.
With Aurora Serverless, you pay only for the database storage, the database capacity (ACUs), and the I/O your database consumes while it is active. Pricing varies by region.
For example, in us-east-1:
Capacity – $0.06 per ACU-hour
Database storage – $0.10 per GiB-month
I/O requests – $0.20 per 1 million requests
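Using the us-east-1 rates listed above, a rough monthly bill can be estimated as follows; the usage figures in the example are made up.

```python
ACU_HOUR = 0.06          # $ per ACU-hour (us-east-1, from the list above)
GIB_MONTH = 0.10         # $ per GiB-month of storage
PER_MILLION_IO = 0.20    # $ per 1 million I/O requests

def estimate_monthly_cost(acu_hours, storage_gib, io_requests):
    """Estimate a monthly Aurora Serverless bill from the rates above."""
    capacity = acu_hours * ACU_HOUR
    storage = storage_gib * GIB_MONTH
    io = (io_requests / 1_000_000) * PER_MILLION_IO
    return round(capacity + storage + io, 2)

# e.g. 100 ACU-hours, 10 GiB stored, 2 million I/O requests in a month:
print(estimate_monthly_cost(100, 10, 2_000_000))  # 7.4
```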
Aurora Serverless supports a multi-AZ setup in which additional Availability Zones are used for failover.
The storage layer is distributed across multiple Availability Zones by default. Data is replicated six times across three Availability Zones, with two copies in each AZ.
The compute layer consists of a single instance. If that instance fails, a new instance is spun up in another Availability Zone; it becomes the primary DB instance and uses the replicated data in that AZ. However, there is some latency in responding to requests until the new instance has fully started.
In this article, you have seen how AWS Aurora Serverless works and how useful it can be for workloads where an application is used infrequently or has intermittent spikes in usage. It is also a good fit for development and test environments, where it makes effective use of resources. As one of the first serverless relational databases on the market, it has grabbed the attention of architects and developers.