Kubernetes Exit Code 1: Causes and Troubleshooting

What Is the Exit Code 1 Error? 

Exit Code 1 is a process exit status that indicates an application terminated with an error. In Kubernetes, it typically means that a container in a pod failed to run correctly and exited with an error. This generic exit code signifies an unspecified error within the application or the container’s environment; it doesn’t point to a specific issue, making it a common but challenging problem to diagnose in Kubernetes environments.

Diagnosing and resolving Exit Code 1 errors requires a thorough check of the container’s environment, application logs, and Kubernetes pod states. Because the exit code itself does not pinpoint the cause, you must work through the potential culprits, such as configuration errors, missing dependencies, and resource constraints, to identify the root cause and resolve the issue.

This is part of a series of articles about Kubernetes troubleshooting.

Common Scenarios Leading to Exit Code 1 in Kubernetes 

An Exit Code 1 error may occur in the following situations.

Container Configuration Issues

Mistakes like wrong image names, incorrect or missing environment variables, or improperly configured volumes can prevent a container from starting, leading to this error.

To prevent these issues, it’s essential to thoroughly verify container configurations before deployment. Tools such as kube-linter can automate the detection of misconfigurations, ensuring containers are correctly set up to run in the Kubernetes environment.

Failed Health Checks

Kubernetes uses liveness and readiness probes to check the health of containers. If a liveness probe fails repeatedly, Kubernetes restarts the container, which can surface as Exit Code 1; a failing readiness probe instead removes the pod from service endpoints. Failures can stem from genuine application errors or from misconfigured probes, such as incorrect paths, ports, or timings.

Ensuring health checks are accurately configured and correspond to the application’s requirements is crucial. Proper configuration helps Kubernetes manage container health effectively, reducing instances of unexpected terminations.
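
For example, a probe that points to the wrong path or fires before the application has started will fail even when the application is healthy. Below is a minimal sketch of a probe pair with explicit timings; the /healthz and /ready paths and port 8080 are assumptions, so match them to the endpoints your application actually serves:

livenessProbe:
  httpGet:
    path: /healthz         # must match a real endpoint in the app
    port: 8080
  initialDelaySeconds: 15  # give the app time to boot before the first check
  periodSeconds: 10
  timeoutSeconds: 3
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  periodSeconds: 5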

Dependency Issues Inside Containers

Missing or incompatible dependencies inside a container can also trigger Exit Code 1. Applications might require specific library versions or external services to be available at runtime.

To address this, ensure all dependencies are correctly specified in the container’s Dockerfile or setup scripts. Testing containers in an environment similar to production can help identify and resolve dependency issues before deployment.

Resource Limit Constraints

Setting overly restrictive resource limits in Kubernetes can cause containers to fail. If a container exceeds its memory limit, Kubernetes terminates it forcefully (an OOM kill); if it exhausts its CPU limit, it is throttled rather than killed, which can slow the application enough to fail health checks and exit with an error.

Regular monitoring and adjusting resource allocations based on usage trends can prevent such issues. Kubernetes’ Horizontal Pod Autoscaler (HPA) can automatically scale the number of replicas based on demand, while the Vertical Pod Autoscaler (VPA) can adjust per-container resource requests, maintaining optimal performance and stability.
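
As a sketch, requests and limits are set per container in the pod spec; the values below are illustrative and should be derived from observed usage:

resources:
  requests:
    cpu: 250m
    memory: 256Mi
  limits:
    cpu: 500m        # usage above this is throttled, not killed
    memory: 512Mi    # usage above this triggers an OOM kill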

Improper Signal Handling

Applications that do not handle termination signals correctly may exit with an error when Kubernetes shuts down a container. Kubernetes sends SIGTERM to request a graceful shutdown and, if the container is still running when the grace period expires, follows up with SIGKILL. Applications that ignore or mishandle SIGTERM can exit uncleanly with a non-zero code.

Applications should be designed to listen for and correctly handle termination signals, implementing graceful shutdown procedures so that containers exit cleanly. Keep in mind that the process running as PID 1 in a container ignores signals for which it has not installed handlers, and a shell-form ENTRYPOINT may not forward SIGTERM to the application, so prefer the exec form.
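
On the Kubernetes side, the pod spec can extend the shutdown grace period and run a pre-stop hook before SIGTERM is delivered; a minimal sketch (the image name and the sleep-based hook are illustrative):

spec:
  terminationGracePeriodSeconds: 60   # default is 30 seconds
  containers:
    - name: app
      image: my-app:1.0               # hypothetical image
      lifecycle:
        preStop:
          exec:
            command: ["sh", "-c", "sleep 10"]  # let in-flight requests drain before SIGTERM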

Diagnosing and Troubleshooting Exit Code 1 Error in Kubernetes 

Let’s look at some of the ways to investigate and resolve an Exit Code 1 error.

Check Container Logs

The first step in troubleshooting Exit Code 1 is to examine the logs of the failed container. Logs can provide insights into the error that caused the container to terminate, offering clues for further investigation.

Running kubectl logs <pod-name> helps identify specific issues within the application or container environment. If the container has already crashed and restarted, kubectl logs <pod-name> --previous retrieves the logs of the previous instance. Analyzing logs for error messages or stack traces is essential in diagnosing the root cause of the failure.

Troubleshooting example

Suppose you run the kubectl logs command and identify the following log message for a failed container:

Error: Cannot find module 'express'
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:636:15)
    at Function.Module._load (internal/modules/cjs/loader.js:562:25)
    at Module.require (internal/modules/cjs/loader.js:692:17)
    at require (internal/modules/cjs/helpers.js:25:18)

This error indicates that the Node.js application within the container failed because it could not find the express module, a common framework for building web applications in Node.js. This usually means either that dependencies were not installed during the image build (for example, a missing npm install step in the Dockerfile) or that the node_modules directory is not available at runtime (for example, hidden by a volume mounted over the application directory).

Verifying Container and Application Configurations

Incorrect configurations are another common cause of Exit Code 1. Verify the container’s environment variables, mounted volumes, and other configurations match the application’s requirements.

Cross-checking application settings within the container against expected values is crucial. Ensure configurations are consistent across environments to prevent issues that may not appear in development or testing phases.

Troubleshooting example

Consider a scenario where your application requires an environment variable named DATABASE_URL to connect to a database. The container’s configuration might be checked using a Kubernetes manifest snippet like this:

env:
  - name: DATABASE_URL
    value: "https://example.com/db"

If the application logs or errors point toward a database connection issue, verify the DATABASE_URL environment variable. A missing or incorrect value will prevent the application from connecting to the database, which can lead to an Exit Code 1 error.
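
In practice, a value like DATABASE_URL is better sourced from a Secret than hardcoded in the manifest; a sketch (the db-credentials Secret and its url key are hypothetical names):

env:
  - name: DATABASE_URL
    valueFrom:
      secretKeyRef:
        name: db-credentials   # hypothetical Secret holding the connection string
        key: url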

Check Container Resources

Insufficient or misallocated resources can cause containers to exit unexpectedly. Use kubectl describe pod <pod-name> to review the pod’s resource allocations and utilization.

If resource limits are too low, consider increasing them based on the application’s needs. Monitoring tools can help track resource usage over time, guiding appropriate adjustments to prevent similar issues.

Troubleshooting example

Suppose you run kubectl describe pod and receive an output that includes the following:

Containers:
  my-container:
    ...
    Limits:
      cpu: 500m
      memory: 256Mi
    Requests:
      cpu: 250m
      memory: 128Mi
    ...
    Last State: Terminated
      Reason: OOMKilled
      Exit Code: 1

This output indicates that the container my-container was killed because it exceeded its memory limit (OOMKilled stands for Out-Of-Memory Killed). Note that OOM kills are more commonly reported as Exit Code 137 (128 plus SIGKILL’s signal number 9); the exact code depends on how the process terminates.

In this case, the root cause of the Exit Code 1 error is insufficient memory allocation. To resolve this issue, you would need to adjust the memory limits upwards, based on the application’s needs and previous usage patterns observed through monitoring tools.
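
A fix might look like the following; the new values are illustrative, so base them on the usage your monitoring tools actually show:

resources:
  requests:
    cpu: 250m
    memory: 256Mi   # raised from 128Mi
  limits:
    cpu: 500m
    memory: 512Mi   # raised from 256Mi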

Best Practices to Deal with Exit Code 1 Error 

Utilize Liveness Probes

Liveness probes help Kubernetes determine if a container is running as expected. Properly configured liveness probes can prevent issues leading to Exit Code 1 by restarting containers that become unresponsive due to internal errors.

Configuring liveness probes with appropriate initial delay and timeout values is crucial. This ensures the application has enough time to start up before Kubernetes performs health checks.
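
For slow-starting applications, Kubernetes also provides a startup probe that holds off liveness checks until the application has booted; a sketch (the endpoint and thresholds are illustrative):

startupProbe:
  httpGet:
    path: /healthz        # hypothetical health endpoint
    port: 8080
  failureThreshold: 30    # allow up to 30 * 10s = 5 minutes to start
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 10
  timeoutSeconds: 3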

Use Init Containers

Init containers run before the application containers and can perform setup tasks such as configuration checks or dependency installations. They can help ensure the application environment is correctly prepared before the main containers start, reducing the chances of Exit Code 1 errors.

Designing init containers to check and set up necessary conditions can prevent runtime errors in application containers. They add a layer of pre-validation, ensuring the environment meets all requirements.
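
A common pattern is an init container that blocks until a dependency is reachable; a sketch (the postgres hostname and busybox image are assumptions):

initContainers:
  - name: wait-for-db
    image: busybox:1.36
    command:
      - sh
      - -c
      - until nc -z postgres 5432; do echo waiting for db; sleep 2; done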

Check Container Entrypoint

The container’s entrypoint script is crucial for initializing the application environment. Errors in the entrypoint script, such as incorrect path references or permissions, can lead to Exit Code 1.

Review and test the entrypoint script thoroughly to ensure it executes as expected. It should correctly set up the environment and handle any pre-start tasks required for the containerized application.
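
If the entrypoint is suspect, you can temporarily override it from the pod spec to keep the container alive and inspect it interactively; a sketch (the image name is hypothetical):

containers:
  - name: app
    image: my-app:1.0
    command: ["sleep", "3600"]   # bypass the entrypoint for debugging

With the container running, kubectl exec -it <pod-name> -- sh lets you run the entrypoint script by hand and observe where it fails.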

Don’t Use Fixed Paths

Hardcoded paths in configuration or application code can cause issues when running in Kubernetes due to its dynamic nature. Avoiding fixed paths and instead using environment variables or Kubernetes ConfigMaps and Secrets is advisable.

Dynamic path configurations allow more flexibility and adaptability, reducing potential errors related to file or directory access. Implementing this practice helps ensure the application’s portability and reduces configuration-related errors.
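
A sketch of passing a path through an environment variable backed by a ConfigMap-mounted volume instead of hardcoding it (app-config and /etc/app are hypothetical names):

spec:
  containers:
    - name: app
      image: my-app:1.0            # hypothetical image
      env:
        - name: CONFIG_PATH        # the app reads its config location from here
          value: /etc/app/config.yaml
      volumeMounts:
        - name: config
          mountPath: /etc/app
  volumes:
    - name: config
      configMap:
        name: app-config           # hypothetical ConfigMap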

Kubernetes Troubleshooting with Lumigo

Lumigo is a troubleshooting platform purpose-built for microservice-based applications. For anyone using Kubernetes to orchestrate containerized applications, Lumigo can be used to monitor, trace, and troubleshoot issues fast. Deployed with zero code changes and automated in one click, Lumigo stitches together every interaction between micro and managed services into end-to-end stack traces. These traces, served alongside request payload data, enable complete visibility into container environments. With Lumigo, you get:

  • End-to-end virtual stack traces across every micro and managed service that makes up an application, in context
  • API visibility that makes all the data passed between services available and accessible, making it possible to perform root cause analysis without digging through logs
  • Distributed tracing that is deployed with no code and automated in one click
  • A unified platform to explore and query across microservices, see a real-time view of applications and optimize performance

To try Lumigo for Kubernetes, check out our Kubernetes operator on GitHub.
