Serverless application architecture is a cornerstone of modern cloud applications. AWS Lambda lets developers concentrate on business logic and set aside the worry of managing server provisioning, OS patching, upgrades, and other infrastructure maintenance work.
However, designing serverless applications around AWS Lambda requires care, especially when working around AWS Lambda's limitations. AWS Lambda limits the amount of compute and storage resources that you can use to run and store functions. AWS has deliberately put several limits in place, some soft and some hard, to prevent misuse or abuse. These limits also act as guardrails that steer you toward the best practices of Lambda function design.
In this article, we will take a closer look at all the types of Lambda limits defined by AWS and understand their effect in different use cases. We'll also examine the workarounds and solutions available to overcome these limits for valid use cases.
AWS Lambda limitations come in two forms: Soft Limits and Hard Limits.
Soft limits come with default values. Lambda soft limits are defined per region and can be increased by submitting a request to AWS Support.
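Most soft limits are also exposed through the Service Quotas service, so an increase request can be scripted. Here is a minimal sketch using boto3; the quota code and the desired value are assumptions for illustration, so confirm the code with list_service_quotas before submitting a real request.

```python
# Minimal sketch: requesting a Lambda soft-limit increase via Service Quotas.
import boto3

quotas = boto3.client("service-quotas", region_name="us-east-1")

# List Lambda quotas to confirm the quota code and its current value.
for quota in quotas.list_service_quotas(ServiceCode="lambda")["Quotas"]:
    print(quota["QuotaCode"], quota["QuotaName"], quota["Value"])

# Submit an increase request; AWS reviews it like a support case.
response = quotas.request_service_quota_increase(
    ServiceCode="lambda",
    QuotaCode="L-B99A9384",   # assumed to be the "Concurrent executions" quota
    DesiredValue=2000.0,      # illustrative target value
)
print(response["RequestedQuota"]["Status"])
```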
In Lambda, scaling is achieved through concurrent execution of function instances. If the existing execution environments cannot handle all the requests arriving at a given time, the Lambda service spins up additional instances to handle the remaining requests. However, spinning up new instances without bound can drive up costs and be abused, so AWS put a default concurrency limit of 1,000 in place.
This limit is configured at the account level and shared across all the functions in the account. It protects against unintentional overuse at the account level, but a single function inside the account can still consume most of the concurrency and affect the execution of other functions. We'll discuss how to overcome that in the best practices section.
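For reference, here is a minimal sketch of how the account-level pool can be inspected and a slice of it reserved for one function with boto3; the function name and the reserved value are placeholders.

```python
# Minimal sketch: inspect account concurrency and reserve capacity for one function.
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

# Account-wide view: total concurrency and what is still unreserved.
settings = lambda_client.get_account_settings()
print(settings["AccountLimit"]["ConcurrentExecutions"])
print(settings["AccountLimit"]["UnreservedConcurrentExecutions"])

# Reserve 100 concurrent executions for a single function so it can neither
# starve the rest of the account nor be starved by it.
lambda_client.put_function_concurrency(
    FunctionName="order-processing",     # hypothetical function name
    ReservedConcurrentExecutions=100,
)
```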
When you deploy a function, the Lambda service uses storage to keep the function code along with its dependencies. The service keeps the code for every version, so when you update the function with a newer version, the new version's code is added to the storage as well.
AWS has kept this code storage limit at 75 GB, so ensure you follow the best practice of cleaning up old version code. 75 GB seems like a very high limit, but over the years it can be exhausted by frequent updates to the code.
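A cleanup job can be as simple as the sketch below, which keeps only the newest few numbered versions of a function; the function name and retention count are placeholders, and versions referenced by an alias should be skipped in real use.

```python
# Minimal sketch: delete old function versions to reclaim code-storage space.
import boto3

lambda_client = boto3.client("lambda")
FUNCTION_NAME = "order-processing"   # hypothetical function name
KEEP_NEWEST = 3                      # how many numbered versions to retain

paginator = lambda_client.get_paginator("list_versions_by_function")
versions = []
for page in paginator.paginate(FunctionName=FUNCTION_NAME):
    versions.extend(v["Version"] for v in page["Versions"] if v["Version"] != "$LATEST")

# Numbered versions sort as integers; delete everything except the newest few.
for version in sorted(versions, key=int)[:-KEEP_NEWEST]:
    lambda_client.delete_function(FunctionName=FUNCTION_NAME, Qualifier=version)
    print(f"Deleted version {version}")
```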
There are use cases where a Lambda function needs access to VPC resources such as an Amazon RDS (MySQL) database. In that case, you need to configure the VPC subnets (and therefore the Availability Zones) and security groups for the Lambda function. The function then connects to these VPC resources through an Elastic Network Interface (ENI).
In the past, each function instance needed its own ENI to connect to a VPC resource, so it was easy to hit the default threshold of 250 ENIs configured by AWS. With the newer Hyperplane-based VPC networking, however, far fewer ENIs are required for communication between a function and VPC resources, so this threshold is rarely hit in practice.
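Attaching a function to a VPC is a configuration change on the function itself; the sketch below shows one way to do it with boto3, where the function name, subnet IDs, and security group ID are all placeholders.

```python
# Minimal sketch: attach a Lambda function to a VPC so it can reach private
# resources such as an RDS database.
import boto3

lambda_client = boto3.client("lambda")

lambda_client.update_function_configuration(
    FunctionName="order-processing",                           # hypothetical
    VpcConfig={
        "SubnetIds": ["subnet-0abc1234", "subnet-0def5678"],   # placeholder subnets
        "SecurityGroupIds": ["sg-0123456789abcdef0"],          # placeholder security group
    },
)
```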
Hard limits are ones that cannot be increased, even if you make a request to AWS. These Lambda limits apply to function configuration, deployments, and execution. Let’s review a few of the important limits in detail.
AWS Lambda is meant for short functions that execute for short durations, so the AWS Lambda memory limit has been kept to a maximum of 3 GB. Allocation starts at 128 MB and can be increased in 64 MB increments.
This memory is usually sufficient for event-driven business functions and serves the purpose. However, CPU-intensive or computation-heavy workloads may hit timeout errors because they cannot complete their execution in time. There are several solutions available to overcome this, and we will talk about those in the best practices section.
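Because Lambda allocates CPU in proportion to memory, raising the memory setting is also the main lever for CPU-bound work. Here is a minimal sketch with boto3, using a placeholder function name and an illustrative memory value.

```python
# Minimal sketch: increase a function's memory (and proportionally its CPU share).
import boto3

lambda_client = boto3.client("lambda")

lambda_client.update_function_configuration(
    FunctionName="order-processing",   # hypothetical function name
    MemorySize=1024,                   # in MB, within the allowed range
)
```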
As discussed in the AWS Lambda memory limit section above, a function times out if it doesn't finish executing within the allotted time. The maximum allotted time is 15 minutes (900 seconds). This is a hard limit: a function has to complete its execution within this window or it throws a timeout error.
This limit is very generous for synchronous flows, which by nature are supposed to complete within a few seconds (3-6 seconds). For asynchronous flows, you need to be careful when designing the solution and ensure each function can complete its task within this period. If it cannot, the logic should be broken into smaller functions that each complete within the limit.
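One way to stay safely inside the limit is to make the handler timeout-aware: check the remaining execution time from the context object and hand off unfinished work instead of being cut off mid-batch. The sketch below assumes a hypothetical SQS queue and a hypothetical process_item helper.

```python
# Minimal sketch: a handler that checks remaining time and re-queues leftover work.
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue"  # placeholder

def process_item(item):
    ...  # hypothetical unit of work

def handler(event, context):
    items = event.get("items", [])
    for index, item in enumerate(items):
        # Keep a 30-second safety margin before the 15-minute hard limit.
        if context.get_remaining_time_in_millis() < 30_000:
            # Re-queue the unprocessed remainder for another invocation.
            sqs.send_message(
                QueueUrl=QUEUE_URL,
                MessageBody=json.dumps({"items": items[index:]}),
            )
            return {"status": "partial", "processed": index}
        process_item(item)
    return {"status": "complete", "processed": len(items)}
```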
AWS has kept the maximum payload size at 6 MB for synchronous flows, which means you cannot pass more than 6 MB of data in an event. So when designing the Lambda function, you need to ensure that consumers and downstream systems are not sending very heavy request and response payloads. If there's no way to avoid that, Lambda might not be the right solution for that use case.
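A common workaround, where it fits the use case, is to pass a reference instead of the data itself: the caller uploads the large payload to S3, and the event carries only the bucket and key. Here is a minimal sketch of the receiving function, with placeholder event fields.

```python
# Minimal sketch: fetch a large payload from S3 instead of receiving it in the event.
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Expect a small event such as {"bucket": "my-bucket", "key": "payloads/123.json"}
    obj = s3.get_object(Bucket=event["bucket"], Key=event["key"])
    payload = json.loads(obj["Body"].read())
    # ... process the (potentially much larger than 6 MB) payload here ...
    return {"records_processed": len(payload)}
```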
The AWS Lambda deployment package size limit is 50 MB when you upload the code directly to the Lambda service. However, if your deployment package is larger, you have the option to upload it to S3 and have Lambda pull it from there when the function is created or updated.
Another option is to use Lambda layers. The function package together with its layers can total up to 250 MB unzipped, and you can add up to five layers per function. However, if you are uploading this much code, there might be a real problem in your design that you should look into. A function is meant to contain short, focused logic; this much code may cause long cold starts and latency problems.
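Both options can be scripted; the sketch below deploys a package from S3 and publishes shared dependencies as a layer. The function name, bucket, keys, and runtime are placeholders.

```python
# Minimal sketch: deploy a large package from S3 and attach shared code as a layer.
import boto3

lambda_client = boto3.client("lambda")

# Update the function code from a zip that was already uploaded to S3.
lambda_client.update_function_code(
    FunctionName="order-processing",            # hypothetical function name
    S3Bucket="my-deployment-artifacts",         # placeholder bucket
    S3Key="builds/order-processing-1.4.2.zip",  # placeholder key
    Publish=True,                               # publish a new numbered version
)

# Publish shared dependencies as a layer and attach it to the function.
layer = lambda_client.publish_layer_version(
    LayerName="shared-python-deps",             # hypothetical layer name
    Content={"S3Bucket": "my-deployment-artifacts",
             "S3Key": "layers/shared-python-deps.zip"},
    CompatibleRuntimes=["python3.12"],
)
lambda_client.update_function_configuration(
    FunctionName="order-processing",
    Layers=[layer["LayerVersionArn"]],          # up to five layers per function
)
```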
We reviewed some of the most common Lambda limits. Now, let’s discuss workarounds, tips, and best practices for designing Lambda functions around these limits.
There are quite a few AWS Lambda limits, but there is some thought behind them. They are not meant to restrict your use of Lambda, but to protect you from unintentional behavior and things like DDoS attacks. You just need to make yourself aware of these limits and follow the best practices discussed above to get the most out of AWS Lambda.