
AWS Lambda Memory Size: Basics, Tutorial, and Best Practices

What Is AWS Lambda? 

AWS Lambda is a serverless compute service that lets users run code without provisioning or managing servers. With Lambda, developers can execute code in response to triggers such as changes in data or system state, making it well suited to event-driven architectures. 

By handling infrastructure management tasks like server and OS maintenance, Lambda allows developers to focus on writing code that delivers core functionality. This makes it an efficient option for microservices, data processing, and real-time file processing.

Lambda supports various programming languages, including Node.js, Python, Java, and Go. Users pay only for the compute time they consume—there are no charges when the code is not running. The service automatically scales to meet the number of incoming requests, and it can be integrated with other AWS services. 
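
For example, a minimal Python handler that reacts to an event trigger might look like the sketch below. The event shape shown is an illustrative S3 "ObjectCreated" notification; the actual payload depends on whichever trigger is configured.

```python
import json

def lambda_handler(event, context):
    # Lambda passes the triggering event as a dict, plus a context object
    # describing the invocation environment (memory limit, request ID, etc.).
    records = event.get("Records", [])
    for record in records:
        # Illustrative only: an S3 "ObjectCreated" notification carries the
        # bucket name and object key of the newly uploaded file.
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"Processing s3://{bucket}/{key}")

    return {
        "statusCode": 200,
        "body": json.dumps({"processed": len(records)}),
    }
```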

This is part of a series of articles about Lambda performance.

Understanding Lambda Memory and Computing Power 

Each Lambda function has a specified memory allocation, ranging from 128 MB to 10,240 MB. The memory setting directly influences the available CPU power, with higher memory allocations providing proportionally more CPU power.

Increasing the memory allocation can significantly boost the execution speed of a Lambda function, especially for compute-intensive tasks. However, setting the memory too high can lead to unnecessary costs, as Lambda pricing is based on the allocated memory and execution time.
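
To make the trade-off concrete, the back-of-the-envelope sketch below estimates the compute charge for a single invocation from the allocated memory and billed duration. The per-GB-second rate is illustrative only; actual pricing varies by region, CPU architecture, and pricing tier.

```python
# Rough cost sketch: Lambda compute charges scale with allocated memory (in GB)
# multiplied by billed duration (in seconds).
PRICE_PER_GB_SECOND = 0.0000166667  # illustrative rate; check current AWS pricing

def estimate_compute_cost(memory_mb: int, billed_duration_ms: int) -> float:
    gb_seconds = (memory_mb / 1024) * (billed_duration_ms / 1000)
    return gb_seconds * PRICE_PER_GB_SECOND

# Doubling memory doubles the per-second rate, but if the extra CPU halves the
# duration, the compute cost stays roughly the same while the function runs faster.
print(estimate_compute_cost(memory_mb=512, billed_duration_ms=1000))   # ~0.0000083
print(estimate_compute_cost(memory_mb=1024, billed_duration_ms=500))   # ~0.0000083
```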

To optimize a function’s performance, it’s essential to find the right balance. This requires understanding the function’s requirements and behavior. Monitoring performance metrics such as execution duration, memory usage, and CPU utilization can reveal the insights needed to make informed adjustments to memory settings.

Related content: Read our guide to AWS Lambda limits.

Tutorial: Configure Lambda Function Memory 

Here’s a walkthrough of how to configure the memory size for a Lambda function. The instructions are adapted from the official Lambda documentation.

Determine the Best Memory Setting for a Function

The default memory setting in Lambda is 128 MB, suitable for simple tasks such as routing events. However, more complex functions, especially those involving large libraries or integrations with services like Amazon S3 or Amazon EFS, benefit from increased memory. 

To identify the optimal memory setting for a Lambda function, consider using the open source Lambda Power Tuning tool. This tool runs the function with various memory configurations and measures performance metrics, helping determine the most efficient setting. It executes the function in the user’s AWS account, ensuring that performance data reflects real-world conditions. 
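
As a rough sketch, the tool is driven by a Step Functions state machine that is created when you deploy it into your account. The state machine ARN below is a placeholder, and the input fields follow the project's documented format but should be checked against the version you deploy:

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Placeholder ARN: use the state machine created when you deploy
# aws-lambda-power-tuning into your own account.
STATE_MACHINE_ARN = (
    "arn:aws:states:us-east-1:123456789012:stateMachine:powerTuningStateMachine"
)

execution = sfn.start_execution(
    stateMachineArn=STATE_MACHINE_ARN,
    input=json.dumps({
        "lambdaARN": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
        "powerValues": [128, 256, 512, 1024, 2048],  # memory settings to test
        "num": 10,           # invocations per memory setting
        "payload": {},       # test event passed to the function
        "strategy": "cost",  # optimize for cost (alternatives: "speed", "balanced")
    }),
)
print(execution["executionArn"])
```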

Additionally, integrating this tool into a CI/CD pipeline allows for continuous performance optimization as new functions are deployed.

Configure Function Memory

Memory settings for a Lambda function can be configured through the AWS Management Console. Follow these steps to update a function’s memory allocation (a programmatic alternative is sketched after the steps):

  1. Open the AWS Lambda console and navigate to the Functions page.
  2. Select the function to update.
  3. Click on the Configuration tab, then choose General configuration.
  4. Click Edit to modify the general configuration.
  5. Adjust the Memory setting to a value between 128 MB and 10,240 MB, depending on the function’s requirements.
  6. Click Save to apply the changes.

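The same change can also be made programmatically, which is useful in scripts or a CI/CD pipeline. Here is a minimal sketch using boto3; the function name is a placeholder:

```python
import boto3

lambda_client = boto3.client("lambda")

# Set the function's memory allocation in MB (any value from 128 to 10,240).
response = lambda_client.update_function_configuration(
    FunctionName="my-function",  # placeholder function name
    MemorySize=512,
)
print(response["MemorySize"], response["LastUpdateStatus"])
```

The equivalent AWS CLI command is aws lambda update-function-configuration --function-name my-function --memory-size 512.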

Best Practices for Optimizing Memory Allocation in AWS Lambda 

Here are some of the ways that developers can ensure functions have the optimal amount of memory allocated in AWS Lambda.

Use Memory Optimization Techniques

Implementing memory optimization libraries and techniques can improve the performance of Lambda functions:

  • Minimize dependencies: Reduce the number of libraries and dependencies the function uses. This reduces the package size and improves initialization time.
  • Optimize code: Write efficient code that uses memory judiciously. Avoid memory leaks and use efficient data structures.
  • Compression: Use data compression techniques where applicable to reduce memory usage (see the sketch after this list).
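
As a sketch of the last two points, the example below streams a gzip-compressed object from Amazon S3 line by line instead of reading the whole file into memory; the bucket and key are placeholders:

```python
import gzip
import boto3

s3 = boto3.client("s3")

def process_large_object(bucket: str, key: str) -> int:
    """Stream a gzip-compressed S3 object line by line.

    Only a small buffer is held in memory at any time, rather than the entire
    decompressed file, which keeps the function's peak memory usage low.
    """
    body = s3.get_object(Bucket=bucket, Key=key)["Body"]  # streaming response body
    count = 0
    with gzip.GzipFile(fileobj=body) as lines:
        for line in lines:  # iterate one line at a time
            count += 1      # replace with real per-line processing
    return count

# Placeholder bucket and key, for illustration only:
# process_large_object("my-bucket", "logs/2024-06-01.jsonl.gz")
```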

Regularly Monitor and Adjust Memory Allocation 

Continuous monitoring and adjustment of the Lambda function’s memory allocation are crucial for maintaining optimal performance:

  • Use CloudWatch: Set up CloudWatch metrics to monitor memory usage, execution time, and function invocations. Pay attention to memory usage patterns and adjust allocations accordingly (see the sketch after this list).
  • Automate adjustments: Integrate monitoring and adjustments into the CI/CD pipeline. This allows for automatic tuning of memory settings based on real-time performance data.
  • Regular reviews: Periodically review the performance of functions. As the application evolves, its memory requirements may change, requiring adjustments to the memory settings.
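
For example, the sketch below pulls the average and maximum invocation duration for a function over the last day; the function name is a placeholder, and peak memory usage itself can be read from the REPORT lines in the function’s CloudWatch Logs:

```python
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")

now = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Lambda",
    MetricName="Duration",
    Dimensions=[{"Name": "FunctionName", "Value": "my-function"}],  # placeholder
    StartTime=now - timedelta(days=1),
    EndTime=now,
    Period=3600,  # one datapoint per hour
    Statistics=["Average", "Maximum"],
    Unit="Milliseconds",
)

# Print hourly average and maximum duration, oldest first.
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"]), round(point["Maximum"]))
```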

Implement Effective Lambda Development and Design

Adhering to best practices in Lambda development and design can lead to more efficient memory usage:

  • Modular functions: Break down large functions into smaller, more modular ones. This not only improves readability and maintainability but also optimizes resource usage.
  • Cold start optimization: Use provisioned concurrency for functions that require low-latency responses, reducing the impact of cold starts (see the sketch after this list).
  • Efficient error handling: Implement adequate error handling to prevent memory leaks and ensure the functions run smoothly.
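
As a sketch of the cold start point, provisioned concurrency is configured on a published version or alias; the function name, alias, and concurrency value below are placeholders:

```python
import boto3

lambda_client = boto3.client("lambda")

# Keep five execution environments initialized for the "live" alias so that
# latency-sensitive requests are less likely to hit a cold start.
response = lambda_client.put_provisioned_concurrency_config(
    FunctionName="my-function",  # placeholder function name
    Qualifier="live",            # published version or alias
    ProvisionedConcurrentExecutions=5,
)
print(response["Status"])  # e.g. "IN_PROGRESS" while environments are warming up
```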

Related content: Read our guide to AWS Lambda concurrency.

AWS Lambda Observability, Debugging, and Performance Made Easy with Lumigo

Lumigo is a serverless monitoring platform that lets developers effortlessly find Lambda cold starts, understand their impact, and fix them.

Lumigo can help you:

  • Solve cold starts – easily obtain cold start-related metrics for your Lambda functions, including cold start %, average cold duration, and enabled provisioned concurrency. Generate real-time alerts on cold starts, so you’ll know instantly when a function is under-provisioned and can adjust provisioned concurrency.
  • Find and fix issues in seconds with visual debugging – Lumigo builds a virtual stack trace of all services participating in the transaction. Everything is displayed in a visual map that can be searched and filtered.
  • Automatic distributed tracing – with one click and no manual code changes, Lumigo visualizes your entire environment, including your Lambdas, other AWS services, and every API call and external SaaS service.
  • Identify and remove performance bottlenecks – see the end-to-end execution duration of each service, and which services run sequentially and in parallel. Lumigo automatically identifies your worst latency offenders, including AWS Lambda cold starts.
  • Serverless-specific smart alerts – using machine learning, Lumigo’s predictive analytics identifies and alerts on issues before they impact application performance or costs, including alerts about AWS Lambda cold starts.

Get a free account with Lumigo and resolve Lambda issues in seconds.