3 Kubernetes Logging Methods with Examples

What Is Kubernetes Logging? 

Kubernetes logging captures, stores, and manages log files generated by the applications and system components running within Kubernetes clusters. These logs provide insights into application performance, system behavior, and debug information, making them useful for monitoring cluster health and troubleshooting issues.

Kubernetes does not offer a native storage solution for log data; instead, it relies on external systems for log aggregation and analysis. This requires an efficient logging strategy to ensure critical information is accessible and interpretable when needed. Thus, it is essential to understand the various types of logs and the methods used for their collection.

This is part of a series of articles about Kubernetes monitoring.

Why Is Kubernetes Logging Important? 

Kubernetes logging is crucial for maintaining applications’ operational health. By offering real-time insights into application behavior and system performance, logging enables developers and operators to swiftly identify and resolve problems, minimizing downtime and improving user experience.

Additionally, logging supports compliance with security and privacy regulations by enabling the tracking of access to applications and data. This makes it possible to audit system activity, an essential requirement for many industries.

4 Types of Kubernetes Logs 

Logs in Kubernetes can be classified according to the components they cover.

1. Kubernetes Container Logs

Container logging in Kubernetes focuses on capturing stdout and stderr streams from containers. These logs are vital for understanding application behavior as they contain messages from the applications running in the containers. Kubernetes automatically collects these logs and allows them to be accessed via the kubectl logs command.

However, the lifespan of these logs is tied to the container’s lifespan. When a container crashes or is deleted, its logs are also lost. This transient nature of container logs requires integrating an external logging solution for log persistence and centralized analysis.
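Assuming a pod named example-pod with a container my-app (placeholder names), the following kubectl invocations illustrate both the convenience and the limits of container logs:

```shell
# Stream logs from a running container
kubectl logs -f example-pod -c my-app

# Retrieve logs from the previous instance of a crashed/restarted container
kubectl logs --previous example-pod -c my-app

# Once the pod itself is deleted, neither command can recover the logs
```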

2. Kubernetes Node Logs

Node logging refers to logs generated by the Kubernetes nodes, including system daemons like kubelet, the container runtime, and the operating system. These logs are integral for diagnosing system-level issues and understanding the behavior of the Kubernetes infrastructure components.

Given their critical role in cluster diagnostics, node logs are often managed through centralized logging solutions. These solutions aggregate logs from all nodes, simplifying analysis and troubleshooting tasks across multiple cluster nodes.
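On nodes where the kubelet and container runtime run as systemd services, their logs can be inspected directly with journalctl. The unit names below are an assumption (a systemd-based node running containerd); the runtime unit name varies by distribution:

```shell
# Kubelet logs for the last hour
journalctl -u kubelet --since "1 hour ago"

# Container runtime logs (unit name depends on the runtime in use)
journalctl -u containerd --since "1 hour ago"
```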

3. Kubernetes Cluster Logs

Kubernetes cluster logs provide operational data from components of a Kubernetes cluster, including the master and worker nodes, as well as any subsystems and services that operate within the cluster infrastructure. These logs are crucial for understanding the cluster’s state and activities, providing insights into the orchestration layer’s decisions, the scheduling and management of containers, and the interactions between different components.

Cluster logs include logs from the Kubernetes scheduler, which decides which pods to run and where to run them; the controller manager, which handles background tasks; and the API server, which serves as the front end to the cluster’s shared state. These logs are key to diagnosing problems that affect the scheduling and operation of pods and issues with the Kubernetes API.

4. Kubernetes Events

Kubernetes events record notable changes in the cluster, such as a pod’s lifecycle events or a deployment’s state transitions. These events provide a high-level overview of cluster operations and are useful for auditing and historical analysis.

Events complement traditional log files by offering context around the state changes within the cluster. They can be monitored in real-time using the Kubernetes API or kubectl, providing a quick way to detect and respond to operational issues.
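For example, events can be listed and watched with kubectl; the pod name below is a placeholder:

```shell
# List events in the current namespace, oldest first
kubectl get events --sort-by=.metadata.creationTimestamp

# Watch events cluster-wide in real time
kubectl get events --all-namespaces --watch

# Events related to a specific pod also appear in its describe output
kubectl describe pod example-pod
```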

Kubernetes Logging Methods with Examples 

There are several ways to collect logs in Kubernetes.

Using Stdout and Stderr

The simplest Kubernetes logging method involves applications writing their logs to stdout and stderr. Kubernetes automatically collects and stores these logs, making them accessible via the kubectl logs command. This method benefits from simplicity and ease of use, requiring no changes to Kubernetes configurations or deployments.

Though straightforward, relying solely on stdout and stderr has limitations. Logs are tied to the lifecycle of their containers. They cannot be centrally analyzed or persisted without an external aggregation mechanism, which is critical for long-term log management strategies.

Here is a basic example of a Kubernetes pod definition that uses stdout and stderr for logging. In this configuration, the application running in the my-app container writes its logs to stdout and stderr:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: my-app
    image: nginx
    args: [/bin/sh, -c, 'while true; do echo $(date); sleep 1; done']

Kubernetes captures these logs automatically, and they can be accessed using the command: 

kubectl logs example-pod -c my-app

Using Logging Agents

Logging agents are deployed within a Kubernetes cluster to collect, forward, and aggregate logs. They run as daemon sets or sidecar containers, ensuring logs from all nodes and containers are centralized for analysis. This method simplifies log management by abstracting the complexities of log collection, enabling easy integration with external logging services.

Logging agents vary in complexity and functionality, with some offering advanced features like log parsing, enrichment, and filtering. Choosing the right agent depends on specific logging requirements, such as the need for real-time processing or compatibility with specific external log analyzers.

The manifest below deploys Fluentd as a logging agent across all nodes in a Kubernetes cluster using a DaemonSet. Fluentd collects, transforms, and ships logs to an Elasticsearch backend. The deployment is configured to limit resource usage, ensuring that logging does not adversely affect node performance.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-logging
  namespace: kube-system
  labels:
    name: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-logging
  template:
    metadata:
      labels:
        name: fluentd-logging
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "elasticsearch-logging"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
      terminationGracePeriodSeconds: 30

Using Application-Level Logging Configuration

Application-level logging configuration involves customizing an application’s logging mechanisms to suit specific logging strategies. This can include formatting logs in a consistent structure, tagging logs with metadata, and configuring log rotation and retention policies.

By configuring logging at the application level, developers can ensure that logs are more informative and easier to analyze. This approach requires more initial setup and ongoing management but offers greater flexibility and control over log content and handling.

Here is an example of a pod configuration that enables application-level logging. This configuration assumes the application emits logs in a structured JSON format at a specific log level (e.g., INFO). Environment variables are used to configure the log format and level:

apiVersion: v1
kind: Pod
metadata:
  name: custom-logger-app
spec:
  containers:
  - name: my-custom-logger
    image: my-custom-logger-image
    env:
    - name: LOG_LEVEL
      value: "INFO"
    - name: LOG_FORMAT
      value: "json"

Note: The above YAML code assumes that an image called my-custom-logger-image exists.

Kubernetes Logging Best Practices 

Here are some best practices to ensure effective log collection and management in Kubernetes.

Keep Log Formats Consistent

Consistent log formats simplify log analysis by ensuring that log entries across services and components are standardized. This uniformity facilitates automated parsing and analysis, enabling quicker identification of trends and anomalies within log data.

A common logging format like JSON can enhance log readability and interoperability with logging tools and services. Establishing organization-wide logging standards is, therefore, a crucial step in streamlining Kubernetes log management processes.
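As an illustration, a structured JSON log entry might look like the following; the field names are an assumption, chosen to show the kind of metadata that makes automated parsing and correlation easy:

```json
{
  "timestamp": "2024-05-01T12:34:56Z",
  "level": "INFO",
  "service": "checkout",
  "pod": "checkout-7d9f8b6c4-x2lkq",
  "message": "order created",
  "order_id": "12345"
}
```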

Control Access to Logs with RBAC

Role-Based Access Control (RBAC) in Kubernetes enables fine-grained control over who can access logs. By defining roles and permissions for log data, organizations can safeguard sensitive information and comply with data protection regulations.

Configuring RBAC aids in maintaining log security, particularly in multi-tenant environments where users must be restricted to viewing only the logs relevant to their applications. This ensures that log data does not become a liability and reduces the risk of compliance violations.
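As a minimal sketch, the Role below grants read-only access to pod logs in a single namespace via the pods/log subresource, and the RoleBinding assigns it to one user; the namespace and user names are placeholders:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: log-reader
  namespace: team-a            # placeholder namespace
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]   # pods/log is the log subresource
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: log-reader-binding
  namespace: team-a
subjects:
- kind: User
  name: jane                   # placeholder user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: log-reader
  apiGroup: rbac.authorization.k8s.io
```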

Use Sidecar Containers with Logging Agents

Logs can be captured, processed, and forwarded independently of the application’s primary functionality by deploying a sidecar container alongside the application container. This separation of concerns enhances logging flexibility and allows for more sophisticated processing and routing of log data.

Choosing the right sidecar container depends on the application’s specific needs and the broader logging architecture. However, this pattern inherently supports scalability and adaptability, making it well-suited to dynamic Kubernetes environments.
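A minimal sketch of the pattern: the application container writes to a log file on a shared emptyDir volume, and a sidecar streams that file to its own stdout, where a node-level agent can pick it up. The images and paths are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  containers:
  - name: app
    image: busybox
    command: ["/bin/sh", "-c", "while true; do date >> /var/log/app/app.log; sleep 5; done"]
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: log-sidecar
    image: busybox
    # Tail the shared log file to stdout so kubectl logs / agents can read it
    command: ["/bin/sh", "-c", "tail -n+1 -F /var/log/app/app.log"]
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  volumes:
  - name: app-logs
    emptyDir: {}
```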

Set Resource Limits on Log Collection Daemons

Resource limits on log collection daemons prevent them from consuming excessive resources and potentially impacting the performance of the host node. Kubernetes allows for configuring memory and CPU limits on daemon sets, ensuring logging operations remain within acceptable parameters.

Implementing resource quotas for logging services is essential for maintaining cluster stability, especially in environments where log volumes vary significantly. It balances the need for comprehensive logging with efficient resource utilization.
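In a DaemonSet's container spec, this is expressed with requests and limits; the values below are illustrative, not a recommendation:

```yaml
# Fragment of a logging DaemonSet's container spec
resources:
  requests:
    cpu: 100m       # guaranteed baseline
    memory: 200Mi
  limits:
    cpu: 250m       # hard ceiling to protect the node
    memory: 200Mi   # exceeding this gets the container OOM-killed
```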

Related content: Read our guide to Kubernetes monitoring best practices

Kubernetes Monitoring and Troubleshooting with Lumigo

Lumigo is a troubleshooting platform that is purpose-built for microservice-based applications. Developers using Kubernetes to orchestrate their containerized applications can use Lumigo to quickly monitor, trace, and troubleshoot issues. Deployed with zero code changes and automated in one click, Lumigo stitches every interaction between micro and managed services into end-to-end stack traces. These traces, served alongside request payload data, give developers complete visibility into their container environments. Using Lumigo, developers get:

  • End-to-end virtual stack traces across every micro and managed service that makes up an application, in context

  • API visibility that makes all the data passed between services available and accessible, making it possible to perform root cause analysis without digging through logs 
  • Distributed tracing that is deployed with no code and automated in one click 
  • A unified platform to explore and query across microservices, see a real-time view of applications, and optimize performance

To learn more about Lumigo for Kubernetes, check out our Kubernetes operator on GitHub.
