Kubernetes Monitoring: Guide to Metrics, Tools, and Best Practices

What Is Kubernetes Monitoring? 

Kubernetes monitoring involves collecting, analyzing, and visualizing data about the health and performance of a Kubernetes cluster, its nodes, and its containerized workloads. This data is used to ensure that the cluster and its applications are running smoothly, to identify and troubleshoot issues, and to optimize security, performance, and resource utilization. This is part of an extensive series of guides about performance testing.

Why Is Kubernetes Monitoring Important? 

Kubernetes monitoring is important for several reasons:

  • Ensure the health of the cluster: Monitoring the health of a Kubernetes cluster, including its nodes, pods, and services, is essential to ensure that applications are running as intended.
  • Troubleshoot issues quickly: Monitoring Kubernetes clusters and workloads helps operators quickly identify and troubleshoot issues as they arise, which can help minimize downtime and ensure a good user experience.
  • Optimize resource utilization: Kubernetes monitoring processes can provide valuable insights into the resource usage of the cluster and its workloads. These insights can help optimize resource allocation and improve efficiency.
  • Enhance security: Monitoring can help detect security issues and identify vulnerabilities in clusters and workloads. This information can help prevent attacks and improve the overall security posture. 
  • Facilitate capacity planning: By monitoring Kubernetes clusters and workloads, operators can gain insights into usage patterns and plan for future capacity needs, which can help avoid performance issues and ensure that the system can handle increasing loads.

Kubernetes Monitoring vs. Kubernetes Observability vs. Kubernetes Debugging 

Kubernetes monitoring, Kubernetes observability, and Kubernetes debugging are three distinct but related concepts in the context of Kubernetes.

Kubernetes monitoring refers to the process of tracking the health and performance of a Kubernetes cluster and its components. The goal of monitoring is to ensure that the cluster is running smoothly and efficiently, and to identify and resolve any issues that may arise. This is typically done using tools such as Prometheus and Grafana, which collect and analyze metrics from the cluster and its components.

Kubernetes observability refers to the ability to understand the behavior and state of a Kubernetes cluster and its components. It includes monitoring, but goes beyond it by providing deeper visibility into the inner workings of the cluster. This is typically done using tools such as OpenTelemetry, Jaeger, and Zipkin, which provide distributed tracing and logging capabilities.

Kubernetes debugging refers to the process of troubleshooting issues in a Kubernetes cluster and its components. This can include analyzing logs and metrics, tracing requests, and using tools such as kubectl and kubeadm to diagnose and resolve issues. Debugging is typically done when issues arise and requires a deeper understanding of the cluster’s behavior and state.

Key Kubernetes Metrics to Monitor 

Monitoring metrics at the cluster, node, deployment, and pod levels can provide important insights into the health and performance of the system, allowing operators to identify and troubleshoot issues quickly and optimize resource utilization.

Kubernetes Cluster & Node Metrics

Kubernetes cluster metrics provide an overview of the health and performance of the cluster as a whole. Some important cluster-level metrics to monitor include:

  • Cluster CPU and memory usage
  • Number of nodes in the cluster
  • Number of pods in the cluster
  • Number of containers running in the cluster
  • API server latency and error rate
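
Most of these cluster-level numbers can be pulled directly with kubectl. A minimal sketch, assuming kubectl is configured against the cluster and the metrics-server add-on is installed (required for `kubectl top`):

```shell
# Count nodes and pods across all namespaces
kubectl get nodes --no-headers | wc -l
kubectl get pods --all-namespaces --no-headers | wc -l

# Aggregate CPU and memory usage per node (requires metrics-server)
kubectl top nodes

# Raw API server metrics, including request counts and latency histograms
kubectl get --raw /metrics | grep -E 'apiserver_request_(total|duration)' | head
```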

Kubernetes node metrics provide insights into the health and performance of individual nodes in the cluster. These metrics help you identify any bottlenecks or issues with specific nodes in the cluster. Some of the key node metrics include:

  • CPU and memory usage of each node
  • Network throughput and latency
  • Disk usage and I/O operations
  • Number of pods scheduled on each node
  • Node uptime and availability
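
Node-level details are available through kubectl as well. A sketch; "my-node" is a placeholder name (list real names with `kubectl get nodes`):

```shell
# Detailed view of a single node: capacity, allocatable resources,
# conditions (MemoryPressure, DiskPressure, Ready), and scheduled pods
kubectl describe node my-node

# Count how many pods are scheduled on each node
# (in `-o wide` output, the NODE name is column 8)
kubectl get pods --all-namespaces -o wide --no-headers | awk '{print $8}' | sort | uniq -c
```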

Kubernetes Deployment and Pod Metrics

In addition to monitoring cluster and node metrics, it’s also important to keep an eye on Kubernetes deployment and pod metrics. These metrics can provide insights into the health and performance of your Kubernetes applications and help you identify any issues that may arise. Here is an overview of some key metrics:

Deployment metrics

These metrics are specific to Kubernetes deployments, which are used to manage the rollout and scaling of containerized applications. Some key deployment metrics to monitor include:

  • Number of available replicas: Indicates the number of replicas (i.e., instances) of the application that are currently available and running.
  • Number of desired replicas: Indicates the number of replicas that should be running based on the deployment’s configuration.
  • Deployment status: Indicates whether the deployment is currently rolling out, rolling back, or has successfully completed its rollout.
  • Deployment progress: Indicates the progress of the deployment rollout, as a percentage of the desired replicas that are running.
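
These deployment metrics map directly onto kubectl output. A sketch; "my-app" and "my-namespace" are placeholder names:

```shell
# Desired vs. ready/available replicas for every deployment in a namespace
kubectl get deployments -n my-namespace

# Watch rollout progress for a specific deployment until it completes or fails
kubectl rollout status deployment/my-app -n my-namespace

# Pull the raw replica counts as fields
kubectl get deployment my-app -n my-namespace \
  -o jsonpath='{.spec.replicas}{" desired, "}{.status.availableReplicas}{" available"}{"\n"}'
```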

Pod metrics

Pods are the smallest deployable units in Kubernetes, and they contain one or more containers. Monitoring pod metrics can help you identify any issues with individual containers or applications. Some key pod metrics to monitor include:

  • CPU and memory usage: High usage can indicate that an application is under-resourced or that there are resource constraints on the node.
  • Container restarts: Indicates how many times a container has been restarted. Frequent restarts can indicate issues with the container’s configuration or resource usage.
  • Pod status: Indicates whether a pod is running, pending, or has terminated. Pods in the pending state may indicate that there are resource constraints on the node or that the cluster is under-resourced.
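
All three pod metrics can be checked from the command line. A sketch, assuming metrics-server is installed and "my-namespace" is a placeholder:

```shell
# Pod status and restart counts (see the STATUS and RESTARTS columns)
kubectl get pods -n my-namespace

# Per-pod CPU and memory usage (requires metrics-server)
kubectl top pods -n my-namespace

# List pods stuck in Pending, often a sign of resource constraints
kubectl get pods --all-namespaces --field-selector=status.phase=Pending
```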

Kubernetes Monitoring Challenges

Monitoring a Kubernetes cluster can be challenging due to the distributed nature of the platform and the dynamic environment in which it operates. Here are some of the key challenges that organizations may face when monitoring a Kubernetes cluster:

  • Complexity: Kubernetes is a complex platform with many moving parts, making it difficult to monitor all of the different components and understand how they are connected. There are multiple layers of abstraction in Kubernetes, such as pods, services, and nodes, which can make it challenging to troubleshoot issues when they arise.
  • Scalability: Kubernetes is designed to be highly scalable, which means that monitoring tools must also be able to scale with the platform. This can be a challenge for organizations that are not used to dealing with such large-scale environments.
  • Real-time monitoring: Kubernetes clusters are dynamic and constantly changing, which means that real-time monitoring is crucial for detecting and resolving issues quickly. Traditional monitoring tools may not be able to keep up with the pace of change in a Kubernetes environment.
  • Metrics overload: Kubernetes generates a large number of metrics, which can be overwhelming for monitoring tools and operators. It can be difficult to identify which metrics are most important and relevant for a specific use case.
  • Security: Monitoring a Kubernetes cluster requires access to sensitive information, such as API tokens and configuration files. Ensuring the security of these assets can be a challenge, especially in multi-tenant environments.

To overcome these challenges, organizations can use monitoring tools that are specifically designed for Kubernetes environments. These tools should be able to handle the complexity and scale of the platform, provide real-time monitoring, and offer customizable dashboards to help operators focus on the most important metrics. Additionally, implementing proper security measures and access controls is crucial to ensure the security of the Kubernetes environment.

Best Kubernetes Monitoring Tools 

Kubernetes Dashboard

The Kubernetes dashboard is a web-based graphical user interface (GUI) that allows you to manage, monitor, and troubleshoot Kubernetes clusters. It provides a convenient way to view and manage Kubernetes resources, such as deployments, services, and pods, without having to use the command-line interface (CLI).

The Kubernetes Dashboard is not deployed by default; it is installed as a cluster add-on and accessed through the Kubernetes API server (for example, via kubectl proxy). Once installed, you can access the dashboard from a web browser, allowing you to view detailed information about your Kubernetes cluster and perform various tasks, such as scaling deployments, creating new resources, and managing the configuration of your applications.

Some of the key features of the Kubernetes Dashboard include:

  • Cluster overview: The dashboard provides an overview of your Kubernetes cluster, showing the number of nodes, pods, and services, as well as the current usage of CPU and memory.
  • Resource management: You can manage Kubernetes resources, such as deployments, services, and pods, from the dashboard. You can create, edit, and delete resources, and view detailed information about each resource.
  • Application monitoring: The dashboard enables you to monitor the status and performance of applications running on Kubernetes. You can view logs and metrics, troubleshoot issues, and configure alerts.
  • Customizable views: The dashboard provides customizable views, allowing you to create and save your own dashboards with the metrics and information that are most important to you.
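
Deploying and reaching the Dashboard takes two commands. A sketch using the project's official manifest; v2.7.0 is an example tag, so check the project's releases for the current version:

```shell
# Deploy the Dashboard from its official manifest
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

# Start a local proxy to the API server, then browse to:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
kubectl proxy
```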


Prometheus

Prometheus is an open-source monitoring system and time series database that is widely used to monitor containerized applications and infrastructure. It was originally developed at SoundCloud and later donated to the Cloud Native Computing Foundation (CNCF).

Prometheus is designed to collect and store time-series data, allowing you to monitor and analyze performance metrics, such as CPU and memory usage, request latency, and network throughput. 

Some key features of Prometheus include:

  • Flexible data model: Prometheus uses a flexible data model that allows you to represent complex, multi-dimensional data in a simple and consistent way. This makes it easy to query and analyze your metrics.
  • Powerful query language: Prometheus has a powerful query language that allows you to filter and aggregate metrics, as well as perform calculations and create derived metrics.
  • Scalability: Prometheus is designed to be highly scalable and can handle millions of metrics from thousands of servers. It uses a pull-based model, where the Prometheus server periodically scrapes metrics from HTTP endpoints exposed by applications and exporters, making it easy to add and remove targets as needed.
  • Alerting: Prometheus includes a powerful alerting system that allows you to set up and configure alerts based on predefined conditions. You can also create custom alert rules using the query language.
  • Integration: Prometheus integrates with a wide range of tools and services, including Grafana, Kubernetes, and Docker. This makes it easy to monitor containerized applications and infrastructure.
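
A quick way to exercise the query language is Prometheus' HTTP API. A sketch, assuming the server is reachable locally via port-forward and that kubelet/cAdvisor metrics are being scraped; the "monitoring" namespace and "prometheus-server" service name are placeholders for your installation:

```shell
# Forward the Prometheus service to localhost:9090
kubectl port-forward -n monitoring svc/prometheus-server 9090:80 &

# PromQL: per-pod CPU usage rate over the last 5 minutes
curl -s 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=sum(rate(container_cpu_usage_seconds_total[5m])) by (pod)'
```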

EFK Stack 

The EFK Stack is a collection of open-source tools used for logging and analyzing data in Kubernetes clusters. The acronym EFK stands for Elasticsearch, Fluentd, and Kibana.

Here’s a brief overview of each component:

  • Elasticsearch: A distributed search and analytics engine that can be used to store and search logs. It allows you to index and search large volumes of data quickly and efficiently.
  • Fluentd: A log forwarding and aggregation tool that can collect log data from different sources and send it to Elasticsearch for indexing and storage. Fluentd is designed to be highly scalable and can handle large volumes of log data.
  • Kibana: A web-based visualization tool that can be used to explore and analyze log data stored in Elasticsearch. It allows you to create custom dashboards and visualizations to help you better understand your log data.

Together, the EFK Stack provides a toolset for collecting, indexing, and analyzing log data in Kubernetes clusters. It can help you gain insights into the performance and health of your applications, as well as troubleshoot issues when they arise. 
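
Once the stack is running, you can verify that logs are flowing end to end. A sketch; the "logging" namespace and service names are placeholders for a typical EFK install:

```shell
# Verify Fluentd is shipping logs: list the Elasticsearch indices
kubectl port-forward -n logging svc/elasticsearch 9200:9200 &
curl -s 'http://localhost:9200/_cat/indices?v'

# Open Kibana locally to explore the indexed logs
# (then browse to http://localhost:5601)
kubectl port-forward -n logging svc/kibana 5601:5601
```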


cAdvisor

cAdvisor (short for Container Advisor) is an open-source agent that provides detailed information about the resource usage and performance of containers. In Kubernetes it is built into the kubelet, so it runs on every node in the cluster without separate installation.

cAdvisor is capable of collecting a wide range of container metrics, including CPU usage, memory usage, network bandwidth, and I/O statistics, among others. It can also provide detailed information about the file system usage and network connections of individual containers.

Some key benefits of using cAdvisor in a Kubernetes cluster include:

  • Real-time data collection: cAdvisor provides real-time data about container performance, allowing operators to quickly respond to issues as they arise.
  • Scalability: cAdvisor is designed to be highly scalable, allowing it to collect and store metrics for a large number of containers running on a node or across a cluster.
  • Integration with Kubernetes: cAdvisor is tightly integrated with Kubernetes, making it easy to deploy and configure on each node in a cluster.
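
Because cAdvisor is embedded in the kubelet, its metrics can be proxied per node through the API server. A sketch; "my-node" is a placeholder node name:

```shell
# Fetch cAdvisor's Prometheus-format metrics for one node
kubectl get --raw "/api/v1/nodes/my-node/proxy/metrics/cadvisor" \
  | grep -E '^container_(cpu|memory)' | head
```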

Learn more in our detailed guide to Kubernetes monitoring tools (coming soon)

Kubernetes Monitoring Best Practices 

Stop Measuring Individual Containers

Instead of measuring individual containers, it’s important to focus on the overall health and performance of the Kubernetes cluster. This means monitoring metrics such as CPU and memory usage, network throughput, and disk I/O at the cluster and node levels. This approach provides a more holistic view of the Kubernetes environment, allowing you to detect issues that may affect multiple containers or applications. 

Ensure Data Consistency Across Your Layers

It’s important to ensure that the metrics and logs collected from your Kubernetes environment are consistent across all layers, from the container to the node to the cluster. This helps to avoid discrepancies and ensure that you are getting an accurate view of your environment. Using a centralized logging and monitoring solution can help to ensure consistency and avoid duplication of effort.

Track the API Gateway for Microservices

When monitoring microservices-based architectures, it’s important to track the API gateway. The API gateway is the entry point for all requests to the microservices, so monitoring it can surface issues that affect multiple microservices at once. By monitoring the API gateway, you can quickly identify the causes of degraded application performance and take action to resolve them.

Use Ready-made Dashboards and Alerts

Many monitoring tools offer out-of-the-box dashboards and alerts that are specifically designed for Kubernetes environments. These dashboards and alerts can provide valuable insights into the performance and health of your Kubernetes environment, and can help to quickly identify and resolve issues. By using pre-built dashboards and alerts, you can save time and effort while still getting a comprehensive view of your environment.

Learn more in our detailed guide to Kubernetes monitoring best practices (coming soon)

Kubernetes Monitoring and Troubleshooting with Lumigo

Lumigo is a troubleshooting platform, purpose-built for microservice-based applications. Developers using Kubernetes to orchestrate their containerized applications can use Lumigo to monitor, trace, and troubleshoot issues fast. Deployed with zero code changes and automated in one click, Lumigo stitches together every interaction between micro and managed services into end-to-end stack traces. These traces, served alongside request payload data, give developers complete visibility into their container environments. Using Lumigo, developers get:

  • End-to-end virtual stack traces across every micro and managed service that makes up a distributed application, in context
  • API visibility that makes all the data passed between services available and accessible, making it possible to perform root cause analysis without digging through logs 
  • Distributed tracing that is deployed with no code and automated in one click 
  • Unified platform to explore and query across microservices, see a real-time view of applications, and optimize performance

To try Lumigo for Kubernetes, check out our Kubernetes operator on GitHub.

See Additional Guides on Key Performance Testing Topics

Together with our content partners, we have authored in-depth guides on several other topics that can also be useful as you explore the world of performance testing.

  • Lambda Performance: authored by Lumigo
  • Datadog APM: authored by Coralogix
  • Prometheus Monitoring: authored by Tigera
