Containers are widely used in software development and deployment. However, the ephemeral nature of containers makes monitoring and observability more complex. Keeping track of containers is essential to monitor and manage containerized applications.
Maintaining and managing logs is a key part of an effective container management strategy. Logging provides visibility into containers, enabling troubleshooting and improving performance.
This is part of a series of articles about container monitoring.
Logging is one of the most important considerations when developing containerized applications. Log management allows teams to debug and resolve issues faster, making it easier to catch errors, find the bugs that caused them, and prevent recurrences of production issues.
As the software stack evolves from hardware-centric infrastructure to containerized microservices, many things have changed, but logging remains a critical concern. The classic metrics of availability, latency, and failures per second are still important, but they are not enough to get the full picture of a containerized environment.
Using Docker or other container engines in production requires detection of issues, root cause investigation, and extensive data to help teams troubleshoot issues quickly. Container-compatible log analysis tools can collect log messages from across a containerized environment to generate a complete picture of events in your application.
Related content: Read our guide to containerized applications (coming soon)
IT admins can spin up and destroy containers faster than VMs, allowing for rapid scaling. Therefore, many containers are short-lived, often lasting only a few minutes or hours.
The transient nature of containers makes them well-suited for agile software development. However, containers also bring new virtualization management challenges, especially when it comes to logs and log management.
Ideally, containers are stateless entities that do not store persistent data. This makes collecting and storing logs difficult—for example, stopping a container usually destroys both the container and any associated log data, unless the data is exported to a storage resource.
Container logging can be viewed from multiple perspectives, because logs are produced at three different levels of a containerized environment: the container engine (such as the Docker daemon), the shared host operating system, and the applications running inside containers. Proper logging in a containerized environment requires identifying and correlating log events across all of these levels so that IT and development teams can trace the root cause of problems.
Container security adds complexity to logging. Operating system vulnerabilities affect all containers that share a common kernel. To improve container security, some organizations run containers inside VMs. However, this complicates logs, because administrators must record not only applications, engines, and host operating systems, but also accompanying VM and hypervisor activity.
OpenTelemetry is an open source observability framework commonly used to collect telemetry from containerized environments. It standardizes observability without requiring engineers to re-instrument code or install proprietary agents, even when they change their runtime environment. For example, if an organization switches to a different container runtime, it does not need to re-instrument its code to obtain metrics.
With OpenTelemetry, container logs can be correlated with other observability data. For example, events occurring in a container can be correlated with events occurring at the same time, or within the same execution context, such as events from applications running within the container or on the same host.
Another important capability of OpenTelemetry is correlation by Resource context. All OpenTelemetry traces and metrics carry information about the Resource they originated from, which helps correlate container metrics with those of functionally related components in the environment.
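As an illustration, an OpenTelemetry Collector pipeline can tail Docker's JSON log files and attach Resource attributes before exporting. The sketch below is a minimal example, not from the original article; the file paths and the exporter endpoint are assumptions:

```yaml
receivers:
  filelog:
    include:
      - /var/lib/docker/containers/*/*-json.log
    operators:
      - type: json_parser          # Docker's json-file driver writes one JSON object per line
        timestamp:
          parse_from: attributes.time
          layout: '%Y-%m-%dT%H:%M:%S.%LZ'

processors:
  resourcedetection:
    detectors: [env, system]       # populate Resource attributes such as host.name

exporters:
  otlp:
    endpoint: collector.example.com:4317   # hypothetical backend endpoint

service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [resourcedetection]
      exporters: [otlp]
```

The `resourcedetection` processor is what attaches the Resource context mentioned above, so logs from a container can later be joined with traces and metrics from the same host.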
Here are some of the most common techniques for collecting logs from containerized environments.
Docker provides logging drivers built into the container engine that act as a log management system. The driver reads a container's output (data written to the stdout and stderr streams), formats it, and saves it to a file on the host or to a defined endpoint. This improves performance, because the container no longer needs to write and read log files internally.
The type of driver determines the format and location of the log. By default, Docker uses the json-file driver, which writes logs in JSON format. Note that with the default Docker log driver, you can only ship unparsed logs, and with drivers that deliver logs over TCP, a container cannot start if the remote server is unreachable.
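To make the json-file format concrete, here is a short Python sketch (not from the article) that parses one line as the driver writes it. Each line in the container's log file is a JSON object with `log`, `stream`, and `time` fields; the sample line below is fabricated for illustration:

```python
import json

# One line from /var/lib/docker/containers/<id>/<id>-json.log
# (the json-file driver writes one JSON object per log line)
raw = '{"log":"GET /health 200\\n","stream":"stdout","time":"2024-01-15T10:30:00.123456789Z"}'

entry = json.loads(raw)
message = entry["log"].rstrip("\n")  # the original stdout line, trailing newline removed
print(entry["stream"], entry["time"], message)
```

This is also why the default driver ships "unparsed" logs: the `log` field holds the raw application output, and any structure inside it must be parsed downstream.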
Other built-in drivers can be used to forward the collected records to a log service, log shipper, or centralized logging service. Docker also lets you create custom logging drivers and install them as plugins.
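For illustration, a driver can be selected per container at run time or set as the daemon-wide default. The sketch below uses real Docker flags, but the syslog endpoint is a placeholder, not a real service:

```shell
# Per-container: send this container's stdout/stderr to a syslog endpoint
docker run --log-driver syslog \
  --log-opt syslog-address=tcp://logs.example.com:514 \
  nginx

# Daemon-wide default: set in /etc/docker/daemon.json, then restart dockerd
# {
#   "log-driver": "json-file",
#   "log-opts": { "max-size": "10m", "max-file": "3" }
# }

# Inspect which driver a running container uses
docker inspect --format '{{.HostConfig.LogConfig.Type}}' <container>
```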
The delivery mode determines how messages are delivered from the container to the logging driver and how delivery is prioritized. Docker has two delivery modes:
- Blocking (the default): the application waits until each message has been handed to the driver, guaranteeing delivery but potentially adding latency if the driver is slow.
- Non-blocking: messages are stored in an in-memory ring buffer and delivered asynchronously; if the buffer fills up, log messages are dropped rather than blocking the application.
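Docker exposes the delivery mode as a log option. For example, non-blocking delivery with a bounded buffer can be enabled per container; the buffer size below is an arbitrary example:

```shell
# Non-blocking mode buffers messages in memory instead of making the
# application wait; if the buffer fills, messages are dropped rather
# than blocking the application.
docker run --log-driver json-file \
  --log-opt mode=non-blocking \
  --log-opt max-buffer-size=4m \
  my-app
```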
Applications inside containers can also use their own logging frameworks to process logs. For example, a Java application can use Log4j2 to format and send application logs directly to a remote central location, bypassing Docker and the operating system.
This approach gives developers maximum control over logging events. However, there are two drawbacks: the logging framework must be configured and maintained separately for each application, and because logs bypass the container engine, the application process itself carries the load of delivering them reliably.
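The article's example is Java with Log4j2; the same pattern can be sketched in Python with the standard logging module (this is an illustrative sketch, not the article's implementation). The application formats each record as JSON on stdout so any downstream shipper can parse it:

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object per line."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())

logger = logging.getLogger("orders")   # hypothetical application logger
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order created")           # emits one JSON line on stdout
```

Writing structured logs to stdout keeps the application in control of the format while still letting the container engine or a log shipper handle delivery.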
As mentioned above, one way to achieve persistence for container data is to use data volumes.
This approach creates a directory inside the container and maps it to a directory on the host. Whatever happens to the container, this directory persists long-term or commonly shared data. You can then make copies, perform backups, and access the logs from other containers.
You can also share volumes between multiple containers. However, data volumes make it difficult to move containers to different hosts. Container orchestrators like Kubernetes can help manage persistent data volumes in a scalable manner.
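As a sketch, a named volume can hold log files so they outlive the container. The volume, image, and path names below are illustrative:

```shell
# Create a named volume and mount it at the application's log directory
docker volume create app-logs
docker run -v app-logs:/var/log/myapp my-app

# The same volume can be mounted read-only into another container,
# for example to back up or ship the logs
docker run --rm -v app-logs:/logs:ro alpine tar czf - /logs > logs-backup.tar.gz
```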
Another option is to collect and manage Docker logs using a dedicated host-independent log container. This container collects log files from your Docker environment, monitors and inspects the logs, and sends them to a central location.
Log containers are self-contained units, so they can be easily moved between different environments. This also makes it easy to scale your logging infrastructure by adding more logging containers. However, you need to carefully configure both your applications and the logging container to ensure all logs are captured.
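One common shape for this is a sidecar arrangement in which the application and a log-shipping container share a volume. The Compose sketch below is illustrative; the image names, paths, and config file are assumptions:

```yaml
services:
  app:
    image: my-app:latest
    volumes:
      - app-logs:/var/log/myapp          # application writes log files here

  log-shipper:
    image: fluent/fluent-bit:latest      # reads the shared volume, ships logs out
    volumes:
      - app-logs:/var/log/myapp:ro
      - ./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf:ro

volumes:
  app-logs:
```

Because the shipper is just another container, scaling the logging layer means running more of these sidecars alongside the applications they serve.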
Lumigo is a cloud native observability tool, purpose-built to navigate the complexities of microservices. Through agentless, automated tracing, Lumigo stitches together asynchronous requests across the many distributed components that make up a cloud native app. From ECS to third-party APIs, Lumigo visualizes requests in one complete view and monitors every service a request passes through. Leveraging the end-to-end observability Lumigo provides, along with the many features that make debugging container apps easy, developers have everything they need to find and fix issues fast.
Get started with a free trial of Lumigo for your microservice applications.