Serverless Deployments: 5 Deployment Strategies and Best Practices

What Are Serverless Deployments? 

Serverless deployment refers to a cloud computing execution model in which the cloud provider dynamically allocates and manages server resources. The term ‘serverless’ can be misleading; it does not mean there are no servers involved. It simply means developers do not have to handle server management and maintenance.

In serverless deployments, the cloud provider takes care of the server infrastructure, leaving developers free to focus on writing the application code. Serverless deployments are event-driven, meaning the code only runs in response to specific triggers or events. This could be a change in a database, a request to an endpoint, a file upload, or any other definable event. This approach ensures efficient use of resources, as you only pay for actual compute time used.

We’ll explore how serverless deployment differs from a traditional software deployment model, key strategies for serverless deployments, and best practices to help make your deployments a success.

This is part of a series of articles about serverless monitoring.

How Does Serverless Software Deployment Work? 

In a serverless deployment, an event triggers the execution of a function. These events could be anything from a user clicking a button on a website to a record being updated in a database. The serverless platform listens for these events and executes the corresponding function when an event occurs.

The function, written by the developer, contains the logic needed to perform a task. It’s executed in a stateless container that’s created just in time to process the event. Once the function has completed its task, the container is discarded.

Serverless deployments also handle scaling automatically. If an event rate increases, the serverless platform will automatically create more instances of the function to meet demand.
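The flow above can be sketched as a small, provider-agnostic simulation in Python. The `handler(event, context)` signature mirrors AWS Lambda's convention, but the dispatcher and event shape here are illustrative, not any provider's real API:

```python
# Minimal simulation of event-driven serverless execution.
# Each invocation runs the handler in a fresh, stateless context.

def handler(event, context):
    """User-written function: contains only the business logic."""
    if event["type"] == "file_upload":
        return {"status": "processed", "file": event["key"]}
    return {"status": "ignored"}

def invoke(event):
    """The platform's job: create an ephemeral execution context,
    run the handler, then discard the context."""
    context = {"request_id": id(event)}  # per-invocation state only
    result = handler(event, context)
    # context is discarded here; nothing persists between invocations
    return result

print(invoke({"type": "file_upload", "key": "report.csv"}))
# -> {'status': 'processed', 'file': 'report.csv'}
```

Because nothing survives between invocations, any state the function needs must come in with the event or be fetched from an external store.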

Learn more in our detailed guide to serverless observability 

5 Serverless Deployment Strategies 

1. All-At-Once Deployment (Direct Deployment)

All-at-once deployment, also known as direct deployment, is the simplest serverless deployment strategy. In this strategy, the application’s new version is deployed to all instances simultaneously. If the deployment is successful, the new version of the application replaces the old one. If the deployment fails, you need to roll back the changes by deploying the previous version of the application.

The main advantage of all-at-once deployment is its simplicity and speed. However, this strategy does not provide a safety net in case of deployment failure. It also does not allow for testing the new version in a production environment before full deployment.
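The mechanics can be illustrated with a minimal, provider-agnostic sketch in Python (the registry and function names are hypothetical):

```python
# All-at-once deployment: one registry entry per function; deploying
# replaces the live code for every invocation immediately.

registry = {}  # function name -> (version, code)

def deploy(name, version, code):
    previous = registry.get(name)   # keep a handle for manual rollback
    registry[name] = (version, code)
    return previous

def invoke(name, event):
    _, code = registry[name]
    return code(event)

deploy("greet", "v1", lambda e: f"hello {e}")
old = deploy("greet", "v2", lambda e: f"hi {e}")  # all traffic moves to v2 at once

# Rollback is manual: redeploy the previous version yourself.
if old is not None:
    deploy("greet", *old)
```

Note the rollback step: nothing in the strategy itself preserves the old version, which is exactly why the lack of a safety net is its main drawback.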

How this works in AWS Lambda:

AWS Lambda makes all-at-once deployment easy. When you’re ready to deploy, you can upload your new function code to AWS Lambda. AWS will then replace the existing version of your function with the new one instantly. In case of failure, you need to upload the previous version of your function to roll back the changes.

How this works in Azure Functions:

For Azure Functions, you can use the Azure Portal, Azure CLI, or Azure PowerShell to upload your new function code. The existing function is then replaced with the new one. If something goes wrong, you need to re-deploy the old version of the function.

2. Blue-Green Deployment

Blue-green deployment is a strategy that minimizes downtime and risk by running two identical production environments, known as Blue and Green. At any time, only one of the environments is live. For example, if Blue is currently live, then Green is idle.

When you want to deploy a new version of your application, you deploy it to the idle environment (Green). Once the new version is tested and ready to go live, you switch the router so all incoming requests now go to Green instead of Blue. The old environment (Blue) is now idle and can be used for the next deployment.
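A minimal sketch of the router switch, assuming a simple in-memory model of the two environments (all names are illustrative):

```python
# Blue-green: two identical environments; the router points at exactly one.

environments = {"blue": "app-v1", "green": None}
live = "blue"  # the router's current target

def route(request):
    return f"{environments[live]} handled {request}"

def deploy_to_idle(new_version):
    """Deploy to whichever environment is not serving traffic."""
    idle = "green" if live == "blue" else "blue"
    environments[idle] = new_version
    return idle

def switch(idle):
    """Instant cutover: all new requests go to the fresh environment."""
    global live
    live = idle

idle = deploy_to_idle("app-v2")  # deploy and test while blue still serves users
switch(idle)                     # cutover; blue becomes the next idle slot
```

The cutover is a single pointer change, which is what makes rollback equally fast: switch back, and the old environment is still sitting there untouched.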

How this works in AWS Lambda:

AWS Lambda supports blue-green deployments through its alias feature. You can create two aliases (blue and green), each pointing to different versions of your function. When you’re ready to switch traffic to a new version, you just need to update the alias to point to the new version of your function.

How this works in Azure Functions:

Azure Functions supports blue-green deployments using deployment slots. You can create a “staging” slot to deploy and test your new function code. When you’re ready to switch traffic, you can swap the “staging” slot with the “production” slot.

3. Canary Deployment

Canary deployment is a strategy where you gradually roll out the new version of an application to a small subset of users before rolling it out to the entire infrastructure. The new version is deployed alongside the old version, with a small portion of user traffic directed to the new version.

This approach allows you to test the new version in a live production environment with minimal impact. If everything goes well, you can gradually increase the traffic to the new version until it handles all requests. If something goes wrong, you can quickly roll back the changes with minimal impact on users.
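The core routing decision is a weighted coin flip per request. A minimal sketch (the version names and the 10% starting weight are illustrative):

```python
import random

# Canary routing: send a small, adjustable fraction of traffic to the
# new version; everything else stays on the stable version.

canary_weight = 0.1  # start with 10% of requests on the new version

def route(request, rng=random.random):
    # rng is injectable so the decision can be tested deterministically
    return "v2-canary" if rng() < canary_weight else "v1-stable"

# To promote: raise canary_weight step by step (0.25, 0.5, 1.0) while
# watching error rates. To roll back: set it to 0 — no redeploy needed.
```

The appeal of this model is that both promotion and rollback are just changes to a number, not new deployments.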

How this works in AWS Lambda:

AWS Lambda, with the help of AWS CodeDeploy, can automate canary deployments. You can specify the percentage of traffic that you want to route to the new version of your function and gradually increase this percentage over time. If something goes wrong, you can quickly roll back to the previous version.

How this works in Azure Functions:

Azure Functions, with the help of Azure Traffic Manager, can handle canary deployments. You can configure Azure Traffic Manager to route a specific percentage of traffic to different versions of your function.

4. A/B Testing

A/B testing, also known as split testing, is a strategy where you deploy two or more versions of your application simultaneously to see which one performs better. With A/B testing, you can experiment with different features, designs, workflows, etc., and collect data on how users interact with each version.

The main advantage of A/B testing is that it allows you to make data-driven decisions based on actual user behavior. However, this strategy requires more effort to set up and manage compared to other strategies.
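A common implementation detail is sticky assignment: hash the user ID so each user consistently lands on the same variant across requests. A minimal sketch (the 50/50 split and the metric shape are illustrative):

```python
import hashlib

# A/B split with sticky assignment: hashing the user ID gives a
# deterministic bucket, so the same user always sees the same variant.

def assign_variant(user_id, split=0.5):
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = digest[0] / 255  # deterministic value in [0, 1]
    return "A" if bucket < split else "B"

metrics = {"A": [], "B": []}

def record(user_id, converted):
    """Log an outcome against whichever variant the user was assigned."""
    metrics[assign_variant(user_id)].append(converted)

record("user-1", True)
record("user-1", False)  # same user, so same variant both times
```

Stickiness matters because a user who flips between variants mid-session contaminates the data you are trying to collect.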

How this works in AWS Lambda:

AWS Lambda also supports A/B testing using its alias and version features. You can create multiple versions of your function, each with different code, and route a specific percentage of traffic to each version by configuring the alias.

How this works in Azure Functions:

Azure Functions supports A/B testing using deployment slots. You can create different slots for different versions of your function and use Azure Traffic Manager to route a specific percentage of traffic to each version.

5. Shadow Deployment

Shadow deployment is a strategy where you deploy the new version of your application alongside the old version, but without sending any user traffic to the new version. Instead, you “shadow” the user traffic from the old version to the new version for testing purposes.

Shadow deployment allows you to test the new version in a live production environment without affecting users. This strategy is particularly useful for testing the performance and reliability of the new version under real-world conditions.
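The key property is that mirrored traffic never influences the user-facing response. A minimal sketch (both version implementations are placeholders):

```python
# Shadow deployment: every request runs on both versions, but only the
# stable version's response is returned. The shadow result is logged
# for offline comparison.

shadow_log = []

def stable(request):
    return {"answer": request.upper()}

def candidate(request):
    return {"answer": request.upper(), "latency_hint": len(request)}

def handle(request):
    response = stable(request)        # the only user-facing path
    try:
        shadow = candidate(request)   # mirrored traffic, never surfaced
        shadow_log.append((request, shadow == response))
    except Exception:
        pass                          # shadow failures must not affect users
    return response

handle("ping")
```

Swallowing shadow-path exceptions looks sloppy but is the point of the pattern: the candidate version is allowed to fail loudly in your logs and silently for your users.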

How this works in AWS Lambda:

AWS Lambda doesn’t directly support shadow deployments. However, you can achieve this by using Amazon API Gateway and AWS Lambda together. You can configure the API Gateway to duplicate incoming requests and send them to both versions of your function.

How this works in Azure Functions:

For shadow deployments, Azure Functions requires a more complex setup with Azure API Management. By creating separate API policies for the new and old versions of your function, you can duplicate incoming requests to “shadow” traffic from the old version to the new one.

Serverless Deployment Best Practices 

Proper Handling of Secrets

When deploying serverless applications, it’s essential to handle secrets like API keys, database credentials, and other sensitive information securely. Storing secrets in plaintext in your code or configuration files can lead to serious security vulnerabilities.

The best practice is to use the secrets management service provided by your cloud provider, or a third-party secrets management solution. These solutions store secrets securely and provide them to your application at runtime. This approach ensures your secrets are not exposed in your code or configuration files.
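Whatever secrets manager you use, the pattern in application code is the same: resolve secrets from the runtime environment and fail fast when they are missing, rather than shipping a plaintext fallback. A minimal sketch (the variable names are illustrative):

```python
import os

# Read secrets injected at runtime (by the platform or a secrets
# manager) instead of hardcoding them in source or config files.

def get_db_credentials(env=os.environ):
    try:
        return {
            "user": env["DB_USER"],
            "password": env["DB_PASSWORD"],
        }
    except KeyError as missing:
        # Fail fast: a missing secret should abort startup, not fall
        # back to a default baked into the code.
        raise RuntimeError(f"secret not configured: {missing}") from None
```

With this shape, rotating a secret is a configuration change with no code deploy, and the repository never contains anything worth stealing.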

Limiting Permissive IAM Policies

IAM (Identity and Access Management) policies define what actions your serverless functions can perform on your cloud resources. Overly permissive policies can lead to unauthorized access and potential data breaches.

It’s best practice to follow the principle of least privilege (PoLP) when defining your IAM policies. This principle states that a function should have only the permissions necessary to perform its task and no more. Implementing this principle can significantly reduce the risk of unauthorized access to your resources.
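For example, an AWS IAM policy following least privilege grants a function only the specific actions it needs on one specific resource. The account ID, region, and table name below are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:PutItem"],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders"
    }
  ]
}
```

Contrast this with `"Action": "dynamodb:*"` on `"Resource": "*"`, which would let a compromised function read or delete every table in the account.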

Restriction of Deployment Time

In serverless deployments, functions are stateless, and their execution environments can be created and discarded at any time. When a function is invoked after being idle, or shortly after a new version is deployed, there is a startup delay known as a cold start.

To minimize the user-facing impact of cold starts, it’s recommended to restrict when deployments happen. Deploying a new version discards the old version’s warm execution environments, so scheduling deployments during off-peak hours ensures the resulting cold starts affect as few users as possible.
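As a simple guardrail, a deployment script can refuse to run outside an agreed off-peak window. A minimal sketch (the 02:00–05:00 UTC window is a hypothetical choice, not a recommendation for any particular workload):

```python
from datetime import time

# Hypothetical off-peak deployment window: 02:00-05:00 UTC.
WINDOW_START, WINDOW_END = time(2, 0), time(5, 0)

def in_deploy_window(now):
    """Return True if a deployment is allowed at the given time of day."""
    return WINDOW_START <= now <= WINDOW_END
```

A CI/CD pipeline would call this check before the deploy step and either block or queue the release until the window opens.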

Function Naming and Descriptions

Finally, it’s important to follow best practices for function naming and descriptions. Function names should be unique, descriptive, and follow a consistent naming convention. This makes it easier to identify and manage your functions.

Similarly, function descriptions should clearly describe the function’s purpose and behavior. This can be especially helpful when troubleshooting issues or when onboarding new team members.
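A naming convention is easiest to keep when it is enforced, for example by a validation step in CI. A minimal sketch that checks names against a hypothetical `<service>-<environment>-<purpose>` scheme:

```python
import re

# Hypothetical naming scheme: <service>-<environment>-<purpose>,
# e.g. "orders-prod-process-payment".
NAME_PATTERN = re.compile(r"^[a-z0-9]+-(dev|staging|prod)-[a-z0-9-]+$")

def is_valid_name(name):
    """Return True if the function name follows the convention."""
    return bool(NAME_PATTERN.fullmatch(name))
```

Rejecting nonconforming names at review or deploy time is far cheaper than renaming functions (and updating every trigger that references them) later.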

Monitoring Serverless Deployments with Lumigo

Lumigo is a troubleshooting platform, purpose-built for serverless applications. Developers using AWS Lambda and other serverless services can use Lumigo to monitor, trace, and troubleshoot issues fast. Deployed with zero code changes and automated in one click, Lumigo stitches together every interaction between every API and managed service into end-to-end stack traces. These traces, served alongside HTTP request payload data, give developers complete visibility into their serverless environments. Using Lumigo, developers get:

  • End-to-end virtual stack traces across every micro and managed service that makes up a serverless application, in context
  • API visibility that makes all the data passed between services available and accessible, making it possible to perform root cause analysis without digging through logs 
  • Distributed tracing that is deployed with no code and automated in one click 
  • Unified platform to explore and query across microservices, see a real-time view of applications, and optimize performance

Get started with a free trial of Lumigo for your serverless applications
