
AWS Lambda Runtimes: The Basics & Creating Your Custom Runtime

What Are AWS Lambda Runtimes? 

An AWS Lambda runtime is the language execution environment provided by AWS for running Lambda functions. Runtimes support different programming languages, allowing developers to write their functions in the language of their choice. Each runtime is responsible for executing the function code in response to events and managing interactions with infrastructure.

AWS continuously updates and adds support for new runtimes, catering to new development trends and language popularity. This allows developers to take advantage of the latest language features and security updates, improving the performance and security of Lambda functions.

Which Runtimes are Supported by AWS Lambda?

Here are some of the main runtimes available in AWS Lambda.

Node.js 20

Node.js 20 offers updated JavaScript features, improving asynchronous operations and performance. It’s based on the V8 engine, ensuring efficient execution of Lambda functions by optimizing the runtime environment for event-driven, non-blocking I/O models, which are common in the Node.js ecosystem.

This runtime supports the latest ECMAScript standards, providing new syntax and capabilities that simplify coding processes. Developers can use modern JavaScript features to write more concise and maintainable code, optimizing their Lambda function deployments.

Python 3.12

The Python 3.12 runtime in AWS Lambda introduces optimizations and new language features that enhance function execution. These include performance improvements through more efficient bytecode and support for newer Python syntax, enabling cleaner code development.

Lambda’s Python 3.12 support ensures automatic management of the execution environment, from handling dependencies to maintaining security patches. This simplifies development, letting developers focus on writing function logic without worrying about the underlying infrastructure.
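
For example, here is a minimal sketch of a Lambda handler that uses structural pattern matching (introduced in Python 3.10 and available in the python3.12 runtime) to route on the shape of the incoming event; the event fields shown are assumptions for the example:

import json

def handler(event, context):
    # Route on the event shape using structural pattern matching.
    match event:
        case {"httpMethod": method, "path": path}:
            body = f"{method} request for {path}"
        case {"Records": records}:
            body = f"processing {len(records)} records"
        case _:
            body = "unrecognized event"
    return {"statusCode": 200, "body": json.dumps(body)}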

Java 21

Java 21 in AWS Lambda supports the latest JDK features and improves cold start performance for Java-based Lambda functions. It allows Lambda to more efficiently manage Java threads and memory, resulting in faster startup times and lower latencies during function invocations.

This runtime also supports advanced Java features such as pattern matching and records, making Java code more concise and easier to read. These improvements help developers optimize their Lambda functions for better performance and reduced resource usage.

.NET 8

The .NET 8 runtime offers improved performance, security, and language features, suitable for developers working in the .NET ecosystem. This runtime provides improved cold start performance by optimizing the initialization process, reducing the latency for function invocations.

The .NET 8 runtime supports the latest C# language features, including improvements to pattern matching, records, and asynchronous programming. These features enable developers to write more efficient and readable code. The runtime also integrates with AWS tools and libraries, such as the AWS SDK for .NET, enabling interaction with other AWS services.

Ruby 3.3

The Ruby 3.3 runtime in AWS Lambda offers significant performance improvements and new language capabilities. These include faster execution speeds and reduced memory usage, contributing to lower costs and improved function responsiveness.

Ruby 3.3 supports the latest syntax and standard library updates, allowing developers to access modern Ruby features. This includes improved pattern matching, right-hand assignment, and refinements in the concurrency model with Ractors, which enable better parallel execution. AWS Lambda manages the Ruby environment, keeping dependencies up-to-date and applying security patches.

Tips from the expert:

Based on my experience, here are a few ways you can work more effectively with Lambda runtimes:

  • Use custom runtimes for specialized needs: When your application requires a language or environment not supported by default Lambda runtimes, consider building a custom runtime. This allows you to use any programming language or environment your application needs.
  • Use provisioned concurrency for critical functions: For functions that require consistent low latency, especially Java and .NET, use provisioned concurrency to keep instances warm, reducing cold start issues (see the sketch after this list).
  • Optimize memory allocation based on function profiling: Use AWS Lambda Power Tuning to benchmark and optimize memory settings for your functions. Higher memory settings can improve CPU allocation and reduce execution time, balancing cost and performance.
  • Enable AWS X-Ray for detailed tracing: Use AWS X-Ray to trace and debug your Lambda functions. It provides deep insights into function executions, helping to identify performance bottlenecks and issues within your application.

  • Secure your Lambda environment: Use AWS IAM roles to grant your Lambda functions the minimum required permissions. Enable VPC access for functions needing secure access to resources within a VPC, and ensure all environment variables are encrypted using AWS KMS.
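
As an example, provisioned concurrency can be configured programmatically with the AWS SDK. The sketch below uses boto3; the function name, alias, and concurrency value are placeholders:

import boto3

lambda_client = boto3.client("lambda")

# Keep a fixed number of execution environments warm for a published alias.
# "my-function", "prod", and the concurrency value are hypothetical.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="my-function",
    Qualifier="prod",
    ProvisionedConcurrentExecutions=5,
)

Provisioned concurrency applies to a published version or alias, so the Qualifier must reference one of those rather than $LATEST.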

Related content: Read our guide to lambda architecture

What Is the Runtime Deprecation Policy in Lambda? 

AWS Lambda’s runtime deprecation policy ensures that deprecated runtimes are phased out in a structured manner, giving developers enough time to migrate their functions. When AWS decides to deprecate a runtime, they announce the deprecation schedule well in advance, providing at least a year’s notice before the runtime is fully retired.

During this period, AWS continues to support the deprecated runtime, including security patches and critical updates. However, no new features or non-critical updates are provided. Developers are encouraged to migrate their functions to supported runtimes to benefit from the latest features, performance improvements, and security updates.

Once the deprecation period ends, the runtime is no longer available for new functions, and existing functions must be updated to a supported runtime to continue operating. AWS provides migration guides and tools to assist developers in transitioning their Lambda functions to newer runtimes.

Choosing and Managing Runtimes in AWS Lambda 

Here are some of the main considerations when choosing an AWS Lambda runtime.

Runtimes and Performance

Different runtimes have different characteristics in terms of startup latency, execution speed, and memory usage. Runtimes like Node.js and Python are known for their quick startup times and are often preferred for short-lived functions, such as event processing and API backends. Java and .NET runtimes might have higher cold start latency due to their initialization complexity, but they offer high performance for long-running tasks.

To optimize performance, developers should benchmark their Lambda functions across different runtimes and select the one that best suits their use case. Factors such as the nature of the workload, execution duration, and resource consumption should be considered. AWS provides tools like AWS Lambda Power Tuning, which helps in evaluating the performance and cost trade-offs for different memory configurations and runtimes.

Multiple Runtimes in Single Applications

In complex applications, it’s common to use multiple runtimes within the same project. AWS Lambda supports this approach, allowing different functions within the same application to use different runtimes. This enables developers to apply the strengths of each runtime for different tasks.

For example, an application might use Node.js for handling asynchronous API requests due to its non-blocking I/O model, while using Python for data processing tasks that benefit from its extensive libraries. Managing multiple runtimes requires careful orchestration of dependencies and configurations.

Managing AWS SDKs in Lambda Functions

The AWS SDKs enable Lambda functions to interact with other AWS services. Managing these SDKs is important for ensuring efficient and secure operations. Each runtime includes a version of the AWS SDK for its respective language, but developers can include specific versions of the SDK in their deployment packages if they require features not available in the default version.

When managing SDKs, it’s essential to monitor for updates and security patches. Tools like AWS CodeArtifact or dependency management systems (e.g., npm for Node.js, pip for Python) can simplify the process of keeping SDKs up-to-date. Using environment variables and IAM roles within Lambda functions ensures secure access to AWS services.
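
For illustration, a Python function can log which SDK version the runtime bundles and create clients once at module scope so warm invocations reuse them; the bucket name below is a hypothetical placeholder:

import boto3

# Log the bundled SDK version; if a newer feature is needed, package a
# pinned boto3 in the deployment artifact or a Lambda layer instead.
print(f"boto3 version: {boto3.__version__}")

# Create the client once at module scope so warm invocations reuse it.
s3 = boto3.client("s3")

def handler(event, context):
    # "my-bucket" is a hypothetical bucket name for this example.
    response = s3.list_objects_v2(Bucket="my-bucket")
    return [obj["Key"] for obj in response.get("Contents", [])]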

Tutorial: Building a Custom Runtime for AWS Lambda

This tutorial is adapted from the official Lambda documentation. Custom runtimes in AWS Lambda must complete specific initialization and processing tasks to function correctly. A custom runtime is responsible for setting up the function environment, reading the handler name, and managing invocation events through the Lambda runtime API. It passes event data to the function handler and posts the response back to Lambda.

Initialization 

Initialization tasks in AWS Lambda run only once per instance of the function to prepare the environment for handling invocations. To initialize the runtime (a minimal Python sketch of these steps follows the list):

  1. View the environment variables to gather information about the function and its environment. Key variables include:
  • _HANDLER: Indicates the handler’s location, typically in the file.method format.
  • LAMBDA_TASK_ROOT: The directory containing the function code.
  • AWS_LAMBDA_RUNTIME_API: The host and port for the runtime API.
  2. To initialize the function, load the handler file and execute any global or static code. Functions should create resources such as SDK clients and database connections once and reuse them for multiple invocations.
  3. If an error occurs during initialization, the runtime calls the “initialization error” API and exits. Initialization time contributes to billed execution time and timeout.
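
Put together, the initialization phase of a Python-based custom runtime might look roughly like the sketch below; the handler-loading logic is a simplified assumption, and the error report goes to the runtime API’s initialization error endpoint:

import importlib
import json
import os
import sys
import urllib.request

# Read the key environment variables described above.
handler_spec = os.environ["_HANDLER"]           # e.g. "app.handler" (file.method)
task_root = os.environ["LAMBDA_TASK_ROOT"]      # directory containing the function code
runtime_api = os.environ["AWS_LAMBDA_RUNTIME_API"]

try:
    # Load the handler module from the task root; module-level code
    # (SDK clients, database connections) runs once here and is reused.
    sys.path.insert(0, task_root)
    module_name, function_name = handler_spec.rsplit(".", 1)
    handler = getattr(importlib.import_module(module_name), function_name)
except Exception as err:
    # Report the failure to the "initialization error" endpoint and exit.
    payload = json.dumps({"errorMessage": str(err), "errorType": type(err).__name__}).encode()
    urllib.request.urlopen(urllib.request.Request(
        f"http://{runtime_api}/2018-06-01/runtime/init/error", data=payload, method="POST"))
    sys.exit(1)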

Task Processing 

During operation, the custom runtime uses the Lambda runtime interface to handle incoming events and manage errors. After initialization, the runtime processes events in a loop, which requires implementing the following steps (a Python sketch follows the list):

  1. Call the “next invocation” API to retrieve the next event, which includes event data and request details.
  2. Retrieve the X-Ray tracing header from the API response and set the _X_AMZN_TRACE_ID environment variable to connect trace data between services.
  3. Build an object containing context information from environment variables and headers in the API response.
  4. Pass the event and context object to the Lambda function handler.
  5. Use the “invocation response” API to post the handler’s response.
  6. If an error occurs, use the “invocation error” API to report it.
  7. For cleanup, release unused resources and perform additional tasks before retrieving the next event.
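
Continuing the sketch from the initialization section, the processing loop might look roughly like this in Python; the API paths and header names come from the Lambda runtime API, and the context object is a simplified assumption (a real runtime would expose a richer context):

import json
import os
import urllib.request

BASE = f"http://{os.environ['AWS_LAMBDA_RUNTIME_API']}/2018-06-01/runtime"

def process_events(handler):
    while True:
        # 1. Fetch the next event; the call blocks until an invocation arrives.
        with urllib.request.urlopen(f"{BASE}/invocation/next") as resp:
            event = json.loads(resp.read())
            request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
            # 2. Propagate the X-Ray trace header to connect trace data.
            trace_id = resp.headers.get("Lambda-Runtime-Trace-Id")
            if trace_id:
                os.environ["_X_AMZN_TRACE_ID"] = trace_id
            # 3. Build a simplified context object from the response headers.
            context = {
                "aws_request_id": request_id,
                "invoked_function_arn": resp.headers.get("Lambda-Runtime-Invoked-Function-Arn"),
                "deadline_ms": resp.headers.get("Lambda-Runtime-Deadline-Ms"),
            }
        try:
            # 4./5. Pass the event and context to the handler, then post the result.
            result = handler(event, context)
            url = f"{BASE}/invocation/{request_id}/response"
            body = json.dumps(result).encode()
        except Exception as err:
            # 6. Report handler failures through the "invocation error" endpoint.
            url = f"{BASE}/invocation/{request_id}/error"
            body = json.dumps({"errorMessage": str(err), "errorType": type(err).__name__}).encode()
        urllib.request.urlopen(urllib.request.Request(url, data=body, method="POST"))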

Setting Up the Entry Point

The entry point for a custom runtime is an executable file called bootstrap. This file can either be the runtime itself or invoke another file that initializes the runtime. If the bootstrap file is missing or not executable, Lambda returns a Runtime.InvalidEntrypoint error. Here is an example of a bootstrap file that uses a bundled Node.js version to run a JavaScript runtime:

#!/bin/sh
# Move to the directory that contains the function code.
cd "$LAMBDA_TASK_ROOT"
# Start the bundled Node.js binary with the runtime implementation.
./node-v11.1.0-linux-x64/bin/node runtime.js

Enabling Response Streaming

To implement a response streaming function in a custom runtime, use the response and error endpoints. These endpoints support chunked transfer encoding, allowing the Lambda function to stream partial responses to the client. Key requirements include:

  • Setting the Lambda-Runtime-Function-Response-Mode header to streaming.
  • Using the Transfer-Encoding header set to chunked.
  • Writing the response according to the HTTP/1.1 chunked transfer encoding specification.
  • Closing the connection after the response is successfully written.

To report midstream errors, the runtime can attach HTTP trailing headers with error information, ensuring the client receives error metadata. The Trailer header must be set at the beginning of the HTTP request to attach these headers.
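
A rough sketch of how a custom runtime could write a streamed response in Python with the standard library’s http.client; request_id is assumed to come from the “next invocation” response headers, and produce_chunks() is a hypothetical generator of byte strings:

import http.client
import os

def stream_response(request_id, produce_chunks):
    # request_id and produce_chunks() are assumptions for this example.
    conn = http.client.HTTPConnection(os.environ["AWS_LAMBDA_RUNTIME_API"])
    conn.putrequest("POST", f"/2018-06-01/runtime/invocation/{request_id}/response")
    conn.putheader("Lambda-Runtime-Function-Response-Mode", "streaming")
    conn.putheader("Transfer-Encoding", "chunked")
    conn.endheaders()

    # Write each partial result as an HTTP/1.1 chunk.
    for chunk in produce_chunks():
        conn.send(b"%x\r\n" % len(chunk) + chunk + b"\r\n")
    # A zero-length chunk terminates the stream.
    conn.send(b"0\r\n\r\n")

    conn.getresponse().read()
    conn.close()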

AWS Lambda Observability, Debugging, and Performance Made Easy with Lumigo

Lumigo is a serverless monitoring platform that lets developers effortlessly find Lambda cold starts, understand their impact, and fix them.

Lumigo can help you:

  • Solve cold starts easily – obtain cold start-related metrics for your Lambda functions, including cold start %, average cold duration, and enabled provisioned concurrency. Generate real-time alerts on cold starts, so you’ll know instantly when a function is under-provisioned and can adjust provisioned concurrency.
  • Find and fix issues in seconds with visual debugging – Lumigo builds a virtual stack trace of all services participating in the transaction. Everything is displayed in a visual map that can be searched and filtered.
  • Automatic distributed tracing – with one click and no manual code changes, Lumigo visualizes your entire environment, including your Lambdas, other AWS services, and every API call and external SaaS service.
  • Identify and remove performance bottlenecks – see the end-to-end execution duration of each service, and which services run sequentially and in parallel. Lumigo automatically identifies your worst latency offenders, including AWS Lambda cold starts.
  • Serverless-specific smart alerts – using machine learning, Lumigo’s predictive analytics identifies and alerts on issues before they impact application performance or costs, including alerts about AWS Lambda cold starts.

Get a free account with Lumigo and resolve Lambda issues in seconds.