Container Logging: The Basics & 5 Useful Logging Techniques

What Is Container Logging? 

Containers are widely used in software development and deployment. However, the ephemeral nature of containers makes monitoring and observability more complex. Keeping track of containers is essential to monitor and manage containerized applications.

Maintaining and managing logs is a key part of an effective container management strategy. Logging provides visibility into containers, enabling troubleshooting and improving performance.

This is part of a series of articles about container monitoring.

The Importance of Logging Containerized Applications 

Logging is one of the most important considerations when developing containerized applications. Log management allows teams to debug and resolve issues faster, making it easier to catch errors, find the bugs that caused them, and prevent recurrences of production issues.

As the software stack evolves from hardware-centric infrastructure to containerized microservices, many things have changed, but logging remains a critical concern. The classic metrics of availability, latency, and failures per second are still important, but they are not enough to get the full picture of a containerized environment.

Using Docker or other container engines in production requires detection of issues, root cause investigation, and extensive data to help teams troubleshoot issues quickly. Container-compatible log analysis tools can collect log messages from across a containerized environment to generate a complete picture of events in your application.

Related content: Read our guide to containerized applications (coming soon)

Container Logging Challenges

IT admins can spin up and destroy containers faster than VMs, allowing for rapid scaling. Therefore, many containers are short-lived, often lasting only a few minutes or hours.

The transient nature of containers makes them well-suited for agile software development. However, containers also bring new virtualization management challenges, especially when it comes to logs and log management.

Ideally, containers are stateless entities that do not store persistent data. This makes collecting and storing logs difficult—for example, stopping a container usually destroys both the container and any associated log data, unless the data is exported to a storage resource.

Container logging must be viewed from multiple perspectives. There are three different environments, or levels, involved in running containerized applications: the container engine (such as the Docker daemon), the shared host operating system, and the applications running inside containers. Proper logging in a containerized environment requires identifying and correlating log events across all of these levels so that IT and development teams can trace the root cause of problems.

Container security adds complexity to logging. Operating system vulnerabilities affect all containers that share a common kernel. To improve container security, some organizations run containers inside VMs. However, this complicates logs, because administrators must record not only applications, engines, and host operating systems, but also accompanying VM and hypervisor activity.

Container Logging With OpenTelemetry 

OpenTelemetry is an open source observability framework commonly used to collect telemetry from containerized environments. It standardizes observability so that engineers do not need to re-instrument code or install proprietary agents, even if they change their runtime environment. For example, if an organization starts using a different container runtime, they don’t need to re-instrument their code to obtain metrics.

With OpenTelemetry, container logs can be correlated with other observability data. For example, events occurring in a container can be correlated with events occurring at the same time, or within the same execution context, such as events from applications running within the container or on the same host.

Another important capability of OpenTelemetry is correlating metrics by Resource context. All OpenTelemetry traces and metrics carry information about the Resource they originated from, and this can help correlate container metrics with those from functionally related components in the environment.
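
As a minimal sketch of how Resource context is attached in practice (assuming the opentelemetry-sdk Python package; the service name and container ID below are illustrative placeholders), the following snippet tags all emitted telemetry with the container it came from:

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Resource attributes describe where the telemetry originated.
# The values here are illustrative placeholders.
resource = Resource.create({
    "service.name": "checkout-service",
    "container.id": "4f5b2a9c1d3e",
})

# Every span produced by this provider carries the Resource context,
# which is what lets a backend correlate it with container-level logs.
provider = TracerProvider(resource=resource)
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("handle-request"):
    pass  # application work; the span inherits the Resource attributes
```

In a real deployment the container ID would typically be filled in automatically by a resource detector or the OpenTelemetry Collector rather than hard-coded.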

5 Container Logging Techniques

Here are some of the most common techniques for collecting logs from containerized environments.

1. Logging via Docker Logging Drivers

Docker provides logging drivers built into the container engine that act as a log management system. The driver reads container output (data written to the stdout and stderr streams), formats the log, and saves it to a file on the host or forwards it to a defined endpoint. This method improves performance, because the container no longer needs to write and read log files internally.

The type of driver determines the format and destination of the log. By default, Docker uses the json-file driver, which writes logs in JSON format to the host’s file system. Note that the default driver ships logs unparsed, and with drivers that forward logs over a TCP connection, the container can be killed or fail to start if the TCP endpoint cannot be reached.

Other built-in drivers can be used to forward the collected records to a log service, log shipper, or centralized logging service. Docker also lets you create custom logging drivers and add them as plugins to the container.
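
For illustration, here is a minimal sketch using the Docker SDK for Python (the image name and rotation options are placeholders); it starts a container with the default json-file driver plus log-rotation options, then reads the output back through the driver:

```python
import docker

client = docker.from_env()

# Start a container with the json-file logging driver and rotation options.
# The image and option values are illustrative; any driver supported by the
# Docker daemon can be selected the same way.
container = client.containers.run(
    "nginx:alpine",
    detach=True,
    log_config=docker.types.LogConfig(
        type=docker.types.LogConfig.types.JSON,
        config={"max-size": "10m", "max-file": "3"},
    ),
)

# The driver captures stdout/stderr, so the logs can be read back via the API.
print(container.logs(tail=5).decode())

container.stop()
container.remove()
```

The same configuration can be applied with the docker CLI or set globally for the daemon; the SDK is used here only to keep the example self-contained.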

2. Direct vs. Non-blocking Log Delivery

The delivery mode determines how messages are passed from the container to the logging driver, and whether the application waits for them to be delivered. Docker supports two delivery modes:

  • Direct delivery—direct (blocking) delivery is the default mode: each message is sent straight to the driver, and the application waits until it is delivered. With this option, all output is logged immediately. However, if the logging driver gets busy, it can block the application and affect its performance.
  • Non-blocking delivery—this mode uses an intermediate ring buffer inside the container, which stores log messages until the logging driver is able to process them. When the driver is busy, messages accumulate in memory and are delivered once the driver is ready for more. This has no effect on application performance, but can result in late delivery of critical log data. In some cases, log data can even be lost, if the container generates more data than the buffer can hold, or if the container and its in-memory buffer are destroyed unexpectedly. Solutions like FireLens for Amazon ECS allow buffering of unprocessed log data to a file system before forwarding it to the agent (see the configuration sketch after this list).
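
As a sketch of how the delivery mode is configured (again via the Docker SDK for Python; the buffer size and image name are placeholder values), the snippet below enables non-blocking delivery with an explicit ring-buffer size:

```python
import docker

client = docker.from_env()

# "mode" selects non-blocking delivery and "max-buffer-size" sizes the
# in-memory ring buffer; if the buffer fills, messages are dropped rather
# than blocking the application. Values here are illustrative.
container = client.containers.run(
    "nginx:alpine",
    detach=True,
    log_config=docker.types.LogConfig(
        type=docker.types.LogConfig.types.JSON,
        config={"mode": "non-blocking", "max-buffer-size": "4m"},
    ),
)
```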

3. Logging via the Application

Applications inside containers can also use their own logging frameworks to process logs. For example, a Java application can use Log4j2 to bypass Docker and the operating system, formatting application logs and sending them directly to a remote central location.
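
The example above uses Log4j2 for Java; as an analogous sketch in Python, an application can use the standard logging library to ship its logs directly to a remote syslog endpoint (the hostname and port below are placeholders), bypassing the container’s stdout/stderr streams entirely:

```python
import logging
import logging.handlers

# Send application logs straight to a remote syslog collector instead of
# writing to stdout/stderr or the container filesystem. The address is a
# placeholder for your central logging endpoint.
handler = logging.handlers.SysLogHandler(address=("logs.example.com", 514))
handler.setFormatter(logging.Formatter("%(name)s %(levelname)s %(message)s"))

logger = logging.getLogger("orders")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("order %s processed", "12345")
```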

This approach gives developers maximum control over logging events. However, there are two drawbacks:

  • Application-based logging places an additional burden on the application process and can affect performance. 
  • If the logging framework saves data to the container itself, all logs stored in the container’s file system are lost when the container is shut down.

4. Persisting Logs Using Data Volumes

As mentioned above, one way to achieve persistence for container data is to use data volumes.

This approach maps a directory inside the container to a directory on the host. Because the host directory outlives the container, it can store long-term or commonly shared data regardless of what happens to the container. You can then make copies, perform backups, and access the logs from other containers.
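
As a minimal sketch using the Docker SDK for Python (the image name and paths are illustrative), the container’s log directory is bound to a directory on the host so the log files outlive the container:

```python
import docker

client = docker.from_env()

# Bind-mount a host directory onto the container's log path so that log
# files persist after the container stops or is removed. Both paths and the
# image name are placeholders.
container = client.containers.run(
    "my-app:latest",
    detach=True,
    volumes={"/var/log/my-app": {"bind": "/app/logs", "mode": "rw"}},
)
```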

You can also share volumes between multiple containers. However, data volumes make it difficult to move containers to different hosts. Container orchestrators like Kubernetes can help manage persistent data volumes in a scalable manner.

5. Using a Dedicated Logging Container

Another option is to collect and manage Docker logs using a dedicated host-independent log container. This container collects log files from your Docker environment, monitors and inspects the logs, and sends them to a central location.

Log containers are self-contained units, so they can easily be moved between different environments. This also makes it easy to scale your logging infrastructure by adding more logging containers. However, you need to carefully configure both your applications and the logging container to ensure all logs are captured.
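
For illustration, the sketch below (Docker SDK for Python; the Fluent Bit image and mounted paths follow common practice but should be adapted to your environment) runs a separate logging container that reads the host’s Docker log files and forwards them according to its own configuration file:

```python
import docker

client = docker.from_env()

# Run a dedicated log-collector container that mounts the host's Docker log
# directory read-only and ships the records to the destination defined in
# its configuration file. Image, paths, and config location are illustrative.
log_shipper = client.containers.run(
    "fluent/fluent-bit:latest",
    detach=True,
    volumes={
        "/var/lib/docker/containers": {
            "bind": "/var/lib/docker/containers",
            "mode": "ro",
        },
        "/path/to/fluent-bit.conf": {
            "bind": "/fluent-bit/etc/fluent-bit.conf",
            "mode": "ro",
        },
    },
)
```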

Container Observability with Lumigo

Lumigo is a cloud native observability tool, purpose-built to navigate the complexities of microservices. Through agentless automated tracing, Lumigo stitches together asynchronous requests across the many distributed components that make up a cloud native app. From ECS to third-party APIs, Lumigo visualizes requests in one complete view and monitors every service that a request passes through. Leveraging the end-to-end observability that Lumigo provides, as well as the many features that make debugging container apps easy, developers have everything they need to find and fix errors and issues fast.

With Lumigo users can:

  • See the end-to-end path of a container request and full system map of applications
  • Monitor and debug third-party APIs and managed services (e.g., Amazon DynamoDB, Twilio, Stripe)
  • Integrate alerts with notification platforms like Slack and go from alert to root cause analysis in just a few clicks
  • Explore application performance to understand system behavior and optimize performance and costs 

Get started with a free trial of Lumigo for your microservice applications 
