Four Ways to Run Containers on AWS

AWS provides multiple ways to deploy containerized applications, from small, ready-made WordPress instances on Lightsail to managed Kubernetes clusters running hundreds of instances across multiple Availability Zones.

When deciding on the architecture of your application, you should consider building it serverless. Being free from (virtual) server management enables you to focus more on your unique business logic while reducing your operational costs and increasing your speed to market.

The most popular compute services to build serverless and containerized applications on AWS include:

  • AWS Lambda
  • AWS App Runner
  • Amazon Elastic Container Service (ECS)
  • Amazon Elastic Kubernetes Service (EKS)

Each service has different scalability characteristics and restrictions, and each fits a different set of use cases.

Which should you choose?

Let’s explore these options, starting with the most serverless (requiring minimal infrastructure management) and ending with the least.

AWS Lambda

AWS Lambda is a compute service that provides scalable computation through functions. In practice, this means you can run your application code, specified as a function with an input and an output, without provisioning or maintaining any servers.

Your Lambda function can execute any logic that your application requires, within a maximum runtime of 15 minutes per invocation and with configurable CPU, memory, and filesystem resources.

For each invocation, the Lambda service runs an instance of the function. After completion, that instance is kept idle for a while, waiting for further invocations. If the function is invoked while the previous invocation is still in progress, Lambda creates additional instances of the function to handle these requests concurrently.

Spinning up a new instance takes time, and this delay is known as a cold start.
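
To make the model concrete, here is a minimal Python handler sketch; the bucket name is hypothetical. Objects created at module scope (like the boto3 client below) are reused across warm invocations of the same instance, which is one common way to soften the cold-start cost.

```python
import json

import boto3

# Module-scope objects are created once per function instance and
# reused on warm invocations, reducing per-request latency.
s3 = boto3.client("s3")


def handler(event, context):
    # 'event' carries the invocation input; its shape depends on the
    # trigger (API Gateway, S3 notification, direct invoke, ...).
    bucket = event.get("bucket", "example-bucket")  # hypothetical bucket
    response = s3.list_objects_v2(Bucket=bucket)
    keys = [obj["Key"] for obj in response.get("Contents", [])]

    # The returned value is the function's output.
    return {
        "statusCode": 200,
        "body": json.dumps({"objectCount": len(keys)}),
    }
```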

AWS App Runner

AWS App Runner is a fully managed platform for running your containers. It is designed for request-response types of applications and is relatively easy to use. It requires no infrastructure management or orchestration from you, and no need to provision components like load balancers. You deploy your image, and App Runner gives you back the URL of its endpoint in response.

App Runner scales based on the number of requests it serves. You define how many concurrent requests you want each instance of the service to handle, and when more requests are dispatched than what the current instances can collectively handle, new instances are spun up.

When the workload shrinks, unneeded instances are shut down. Compared with Lambda, the most relevant difference is that a single App Runner instance can handle multiple concurrent requests, up to the limit you define during setup.
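
As a rough sketch of how that concurrency setting is expressed, the boto3 snippet below creates an auto scaling configuration and a service from AWS's public sample image; the names and thresholds are placeholders, not recommendations.

```python
import boto3

apprunner = boto3.client("apprunner")

# Each instance handles up to MaxConcurrency requests; App Runner adds
# instances (up to MaxSize) when that limit is collectively exceeded.
scaling = apprunner.create_auto_scaling_configuration(
    AutoScalingConfigurationName="demo-scaling",  # hypothetical name
    MaxConcurrency=80,
    MinSize=1,
    MaxSize=5,
)

service = apprunner.create_service(
    ServiceName="demo-service",  # hypothetical name
    SourceConfiguration={
        "ImageRepository": {
            "ImageIdentifier": "public.ecr.aws/aws-containers/hello-app-runner:latest",
            "ImageRepositoryType": "ECR_PUBLIC",
            "ImageConfiguration": {"Port": "8080"},
        },
    },
    AutoScalingConfigurationArn=scaling["AutoScalingConfiguration"][
        "AutoScalingConfigurationArn"
    ],
)

# App Runner returns the public endpoint URL of the deployed service.
print(service["Service"]["ServiceUrl"])
```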

Amazon ECS

Amazon ECS is a fully managed, Amazon-proprietary container orchestration service that helps you deploy, manage, and scale containerized applications. In a nutshell, it is easier to use than Kubernetes and gives you more flexibility than App Runner.

Through a concept called a “Service,” Amazon ECS can scale the tasks (i.e., the containers) it runs on your behalf based on different metrics, such as CPU and memory utilization or load balancer request counts. You can also scale on custom metrics published by your application itself, giving you full control over scaling behavior.
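
For illustration, service scaling like this is configured through the Application Auto Scaling API. A hedged boto3 sketch, assuming hypothetical cluster and service names and an illustrative 70% CPU target:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# The scalable target ties scaling to a specific ECS service's desired
# task count. Cluster and service names here are hypothetical.
resource_id = "service/demo-cluster/demo-service"

autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=10,
)

# Target tracking keeps average CPU near the target value, adding or
# removing tasks as the metric drifts.
autoscaling.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```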

Launch types

When using Amazon ECS, you have two options for running your workload, known as launch types:

  • EC2: You deploy and manage the EC2 instances that are underpinning your ECS clusters and tasks.
  • Fargate: Your ECS tasks are run on infrastructure managed entirely by AWS.

Fargate frees you from managing worker nodes and provides pay-as-you-go compute, billed per second based on the vCPU and memory your tasks request. This comes at the cost of some limitations, like not having access to all the hardware capabilities that are available with EC2.
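
Since the vCPU and memory you pay for on Fargate come from the task definition's size settings, a minimal boto3 sketch of registering a Fargate task definition may help; the family name and image are placeholders.

```python
import boto3

ecs = boto3.client("ecs")

# For Fargate, cpu/memory are set at the task level and must be one of
# the supported combinations (e.g., 0.25 vCPU with 512 MiB of memory).
ecs.register_task_definition(
    family="demo-task",  # hypothetical task family
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",  # required for Fargate tasks
    cpu="256",    # 0.25 vCPU
    memory="512", # 512 MiB
    containerDefinitions=[
        {
            "name": "app",
            "image": "public.ecr.aws/docker/library/nginx:latest",
            "portMappings": [{"containerPort": 80}],
            "essential": True,
        }
    ],
)
```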

With the EC2 launch type, you must set up, pay for, maintain, and update the EC2 virtual machines that will host your containers. This allows you to optimize price, e.g., by using reserved instances. However, it is your responsibility to provision the right amount of computing resources for your containers.

Amazon EKS

Amazon EKS is a managed Kubernetes service, provisioned and operated by AWS. Because it is Kubernetes, it comes with a very rich and ever-evolving ecosystem of tools, resources, and community support. Of the options we’re covering here, it is the most flexible for running containers, and also the most complex. As with ECS, your EKS workloads can run either on EC2 instances you control or on Fargate nodes managed for you by AWS.
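
To hint at what the Fargate option looks like on EKS: pods are matched to Fargate through a Fargate profile. A sketch using boto3, where the cluster name, role ARN, subnets, and namespace are all hypothetical:

```python
import boto3

eks = boto3.client("eks")

# Pods whose namespace (and optional labels) match a selector below are
# scheduled onto Fargate instead of EC2 worker nodes.
eks.create_fargate_profile(
    fargateProfileName="demo-profile",
    clusterName="demo-cluster",
    podExecutionRoleArn="arn:aws:iam::123456789012:role/demo-pod-execution-role",
    subnets=["subnet-0abc1234", "subnet-0def5678"],
    selectors=[{"namespace": "serverless-apps"}],
)
```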

Amazon EKS offers virtually all facilities of “vanilla” Kubernetes and is integrated with AWS for, among others:

  • Authentication and authorization via IAM Roles for Service Accounts. Security groups can also be assigned to individual pods rather than whole EC2 nodes, giving you finer control over traffic into and out of your application.
  • Ingress, meaning access to your Kubernetes workloads from outside the cluster, is provided via the AWS Application Load Balancer (ALB); similarly, Kubernetes Services can be backed by the AWS Network Load Balancer (NLB).

Amazon EKS is considered by many to be the best choice for running Kubernetes on AWS, with all the pros and cons that using Kubernetes to orchestrate your containers brings to the table.

Summary

|             | Lambda | App Runner | ECS Fargate | EKS Fargate |
|-------------|--------|------------|-------------|-------------|
| Use case    | Short-running tasks, cron jobs, application backends | Request/response applications | Various workloads | Various, usually larger workloads |
| Advantages  | Scaling to zero. Event-based integrations with most other AWS services | A simple way of running containers | Simplicity with flexibility. Out-of-the-box integration with native AWS services | Large community and ecosystem. Various features and tools |
| Limitations | Max 15-minute runtime. Cold starts | No fine-grained configuration options | | |
| Scalability | Invoked per request | Scaling based on the number of requests | Different scaling metrics available | Different scaling metrics available |
| Complexity  | 🔵 | 🔵🔵 | 🔵🔵 | 🔵🔵🔵 |
| Pricing     | Duration (ms) of execution and memory configured | vCPU and memory fee, per second | Amount of vCPU, memory, and storage per hour | Amount of vCPU, memory, and storage per hour |

Observability of Serverless Containers

Observability builds on three types of signals: metrics, traces, and logs. Together, they help you understand and monitor the health of your application.

In a distributed system, correlating data from different sources is key to making your systems observable. A request can travel from an Application Load Balancer or API Gateway to your serverless container, regardless of where it is hosted, and the service may then write data to S3 or DynamoDB. When something goes wrong, you need to be alerted, troubleshoot, and drill down into every service along that path.
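
One concrete, hedged example of such correlation: emitting structured logs that carry the request's trace ID, so log lines can later be joined with distributed traces. In a Python Lambda with active tracing enabled, the current X-Ray trace ID is exposed through the `_X_AMZN_TRACE_ID` environment variable; the field names below are illustrative.

```python
import json
import os


def log_event(message: str, **fields):
    # Attach the X-Ray trace ID (set by the Lambda runtime when active
    # tracing is on) so this log line can be correlated with the
    # request's distributed trace.
    record = {
        "message": message,
        "traceId": os.environ.get("_X_AMZN_TRACE_ID", "unknown"),
        **fields,
    }
    print(json.dumps(record))  # stdout is forwarded to CloudWatch Logs


def handler(event, context):
    log_event("order received", orderId=event.get("orderId"))  # hypothetical field
    # ... business logic: write to S3 / DynamoDB, etc.
    return {"statusCode": 200}
```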

To do this effectively, you’ll need the proper tools to visualize the request journey and correlate it with your logs. You can find more tips on the observability of serverless containers in Monitoring and Troubleshooting Containerized Applications with Lumigo.