Running AWS EKS on Fargate: A Practical Guide

What Is AWS EKS? 

Amazon Elastic Kubernetes Service (EKS) is a managed Kubernetes service provided by Amazon Web Services (AWS). It allows you to run, scale, and manage containerized applications in a secure and reliable environment. AWS EKS simplifies running Kubernetes, the open-source platform for automating the deployment, scaling, and management of containerized applications, in the cloud, without requiring you to maintain your own Kubernetes control plane and infrastructure.

With AWS EKS, you can leverage all the benefits of Kubernetes, including its robust ecosystem and community support. It integrates seamlessly with other AWS services such as Amazon RDS, Amazon S3, AWS IAM, and Amazon CloudWatch, to name a few. This makes it easier for developers and system administrators to build, deploy, and scale applications quickly and reliably.

What Is AWS Fargate? 

AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). With Fargate, you can focus on building and operating your applications without having to manage the underlying infrastructure.

Traditionally, running containers required a lot of operational overhead. You had to choose and provision the right amount of servers, patch them, and ensure their security and reliability. With Fargate, all these tasks are taken care of by AWS. This means you only need to worry about your applications and their performance. However, Fargate tends to be more expensive than the equivalent computing resources on Amazon’s Elastic Compute Cloud (EC2).

How EKS Runs on Fargate 

Running EKS on Fargate combines the power of managed Kubernetes with the simplicity of serverless computing. With EKS on Fargate, you can run your Kubernetes applications without having to manage the underlying EC2 instances.

When you deploy a Kubernetes pod in your EKS cluster, Fargate automatically allocates the necessary CPU and memory, runs the pod on its own Fargate-managed node, and handles all administrative tasks related to the underlying servers. This significantly simplifies your Kubernetes operations and reduces operational overhead.

Running EKS on Fargate can also reduce costs, because you pay only for the compute resources your pods actually request. When running EKS on EC2, you first provision a certain number of EC2 instances, and Kubernetes then schedules your pods onto those instances, which creates the possibility of over-provisioning or under-provisioning. With Fargate, compute is allocated per pod according to its Kubernetes resource requests and limits.
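
Because Fargate sizes each pod from the resources its containers declare, it helps to set explicit requests (and limits) on every container. The following is a minimal, illustrative pod spec; the name, image, and values are examples rather than recommendations:

apiVersion: v1
kind: Pod
metadata:
  name: sized-example          # illustrative name
spec:
  containers:
  - name: app
    image: nginx:latest
    resources:
      requests:                # Fargate uses the pod's declared requests to pick a vCPU/memory configuration
        cpu: 500m
        memory: 1Gi
      limits:
        cpu: "1"
        memory: 2Gi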

5 Key Considerations For Using Fargate on Amazon EKS 

While using Fargate on Amazon EKS provides many benefits, there are some considerations to keep in mind:

1. Each Pod That Runs on Fargate Has Its Own Isolation Boundary

When running pods on Fargate, each pod gets its own isolation boundary. This means that each pod runs in its own dedicated environment, isolated from other pods. This isolation ensures that your containers are secure and do not interfere with each other. However, it also means that you need to carefully manage the resources required by each pod to ensure optimal performance.

2. DaemonSets Are Not Supported on Fargate

A DaemonSet is a Kubernetes workload that ensures a copy of a pod runs on every node in the cluster. DaemonSets are not supported on Fargate, because Fargate abstracts away the underlying nodes and each pod runs in its own isolated environment. If you need DaemonSet-style behavior, you will need to run those pods on EC2 instances instead, or restructure the workload as sketched below.
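
A common workaround for per-node agents (for example, log or metrics collectors that would normally run as a DaemonSet) is to run the agent as a sidecar container in every pod that needs it. The sketch below is illustrative; the fluent-bit image is just a stand-in for whichever agent you would otherwise deploy as a DaemonSet:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-with-log-sidecar    # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-with-log-sidecar
  template:
    metadata:
      labels:
        app: app-with-log-sidecar
    spec:
      containers:
      - name: app
        image: nginx:latest               # your application container
      - name: log-agent
        image: fluent/fluent-bit:latest   # placeholder for a per-pod agent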

3. Privileged Containers Are Not Supported on Fargate

Privileged containers run with elevated privileges: they can access devices on the host and perform operations that ordinary containers cannot. Privileged containers are not supported on Fargate, because Fargate is designed to provide a secure, isolated environment for each pod. If you require privileged containers, you will need to use EC2 instances instead.
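
For reference, this is the kind of pod spec that cannot be scheduled onto Fargate; the example is illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: privileged-example       # illustrative name
spec:
  containers:
  - name: tool
    image: busybox:latest
    command: ["sleep", "3600"]
    securityContext:
      privileged: true           # privileged mode is rejected on Fargate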

4. GPUs Are Not Currently Available on Fargate

If your application requires GPU resources, you will not be able to use Fargate. Currently, Fargate does not support GPU instances. If you need to run GPU workloads, you will need to use EC2 instances instead.

5. You Cannot Mount Amazon EBS Volumes to Fargate Pods

Fargate does not support mounting Amazon Elastic Block Store (EBS) volumes to pods. This means that if your application requires persistent storage using EBS volumes, you will need to use EC2 instances instead. EC2 instances allow you to mount EBS volumes and provide persistent storage for your applications.
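
If your Fargate pods do need persistent storage, Amazon EFS is the supported option: Fargate pods can mount EFS file systems through the Amazon EFS CSI driver. The sketch below assumes the driver and an EFS-backed StorageClass named efs-sc are already configured in the cluster; the names are illustrative:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim                # illustrative name
spec:
  accessModes:
    - ReadWriteMany              # EFS supports shared read/write access across pods
  storageClassName: efs-sc       # assumed EFS-backed StorageClass
  resources:
    requests:
      storage: 5Gi               # required by the API; EFS capacity itself is elastic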

Tutorial: Getting Started with AWS Fargate Using Amazon EKS 

Before you start, ensure you have the following prerequisites:

  • AWS Account: You need an AWS account to set up and use AWS EKS and Fargate.
  • AWS CLI: Install the AWS Command Line Interface (CLI) on your local machine.
  • kubectl: Install kubectl, the Kubernetes command-line tool.
  • eksctl: Install eksctl, a command-line tool for creating and managing EKS clusters.
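
You can confirm the tools are installed by checking their versions from your terminal:

aws --version
kubectl version --client
eksctl version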

Step 1: Create an EKS Cluster

First, create an EKS cluster using eksctl. Open your terminal and run the following command:

eksctl create cluster --name my-cluster --region us-west-2 --fargate

This command creates a new EKS cluster named my-cluster in the us-west-2 region and configures it to use Fargate.

Step 2: Configure kubectl

After creating the cluster, configure kubectl to connect to your EKS cluster:

aws eks --region us-west-2 update-kubeconfig --name my-cluster

This command updates your kubeconfig file with the necessary configuration to connect to your EKS cluster.
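
To confirm that kubectl can reach the cluster, you can list the services in the default namespace:

kubectl get svc

This should return the built-in kubernetes ClusterIP service, which indicates that the kubeconfig entry is working.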

Step 3: Create a Fargate Profile

Create a Fargate profile to specify which pods should run on Fargate:

eksctl create fargateprofile \
    --cluster my-cluster \
    --name my-fargate-profile \
    --namespace default \
    --region us-west-2

This command creates a Fargate profile named my-fargate-profile for the default namespace in your cluster.
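
To confirm the profile was created, you can list the Fargate profiles for the cluster (shown here with eksctl; the AWS CLI command aws eks list-fargate-profiles works as well):

eksctl get fargateprofile --cluster my-cluster --region us-west-2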

Step 4: Deploy a Sample Application

Deploy a sample NGINX application to your EKS cluster. Create a deployment YAML file named nginx-deployment.yaml with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

Apply the deployment using kubectl:

kubectl apply -f nginx-deployment.yaml
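
Because Fargate provisions compute for each pod on demand, the pods can take a minute or two to start. You can wait for the rollout to complete with:

kubectl rollout status deployment/nginx-deployment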

Step 5: Verify the Deployment

Verify that the pods are running on Fargate:

kubectl get pods -o wide

Each pod should be in the Running state and scheduled on a node whose name begins with fargate-ip-, which indicates that it is running on Fargate rather than on an EC2 instance.

Step 6: Expose the NGINX Application

Expose the application using a Kubernetes service. Create a service YAML file named nginx-service.yaml with the following content:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: nginx

Apply the service using kubectl:

kubectl apply -f nginx-service.yaml
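
Provisioning the load balancer can take a few minutes. Once it is ready, you can retrieve its DNS name from the EXTERNAL-IP column and open it in a browser to see the NGINX welcome page:

kubectl get service nginx-service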

AWS EKS Monitoring with Lumigo

The distributed nature of containers (and microservices in general), whether they run on AWS EKS or another orchestrator, means that your applications typically require more than monitoring with metrics and logs. To keep an eye on the many different services these applications are composed of, distributed tracing is critical to keeping applications up and running smoothly.

Lumigo is a cloud-native observability platform that delivers automated distributed tracing, purpose-built for distributed applications, including those running on ECS and, soon, EKS.

Lumigo provides deep visibility into applications and infrastructure with all the relevant information on each component, enabling you to easily monitor and troubleshoot container applications.

  • Automatically correlate metrics, events, and traces, and see visualizations of end-to-end requests in one complete view
  • Drill down into application performance and monitor clusters as well as underlying services in real time
  • Set up customized alerts in notification platforms (e.g., Slack) and go from alert to root cause in just a few clicks

Learn more about Lumigo