Essential Metrics for Kafka Performance Monitoring


Apache Kafka is an open-source distributed streaming system that has grown in popularity and usage across the technology industry. Originating from LinkedIn and now part of the Apache Software Foundation, Kafka provides a robust and scalable platform. It’s uniquely designed with an architecture that includes both a storage layer and a compute layer. This dual-layer system enables efficient real-time data ingestion, allowing organizations to establish seamless streaming data pipelines across vast and complex distributed systems.

What sets Kafka apart, beyond its technical capabilities, is its adaptability driven by its open-source nature. This openness has allowed developers around the globe to modify, adapt, and expand upon its original design. As a result, Kafka has seen widespread adoption, evolving into a critical component of the digital infrastructure at many planet-scale companies. Its ability to handle massive data streams in real-time positions it as an indispensable tool for application deployments.

The Importance of Kafka Monitoring 

Kafka prides itself on being a highly available and scalable technology, capable of handling high throughput at low latency. To maintain these qualities, Kafka clusters need to be monitored and kept at the same level of performance.

The chief reasons to monitor your clusters are: 

  • To identify issues before they impact your operations. By continuously monitoring key metrics such as broker performance, topic throughput, and consumer lag, you can catch potential problems early and take corrective action. This proactive approach minimizes downtime and maintains high availability.
  • To ensure the health and performance of the cluster. Kafka clusters are complex systems, and there are many things that can go wrong. By monitoring the cluster, you can identify and address problems early on before they cause outages or performance degradation.
  • To optimize the cluster for performance and cost. By monitoring the cluster, you can identify areas where the cluster can be optimized. This can lead to improved performance and reduced costs.
  • To meet compliance requirements. Many organizations are required to comply with regulations that require them to monitor their IT systems. Monitoring Kafka clusters can help organizations to meet these compliance requirements.

Key Metrics for Monitoring Kafka 

Kafka offers a robust foundation of metrics on which a solid cluster-monitoring strategy can be built. Kafka’s default metrics are key performance indicators and statistics that provide insights into the health, performance, and behavior of any Kafka cluster.

Monitoring these metrics helps ensure that Kafka infrastructure is running smoothly and efficiently. 

Broker Metrics

Kafka brokers play a pivotal role in data transportation and replication across the cluster. Ensuring their optimal performance and health is paramount.

Reasons to Monitor

  • Gauge broker health and diagnose any potential issues.
  • Optimize the overall performance and resilience of the Kafka cluster.

Critical Broker Metrics

  • Replicated byte rate: Monitors data replication efficiency.
  • Produced and Consumed byte rate: Helps in assessing data throughput.
  • CPU, Memory, and Disk I/O usage: Vital indicators of resource constraints or potential bottlenecks.
  • Offline Partitions and Leader Election Rates: Key for evaluating the stability of your data flow and potential cluster leadership issues.
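Byte-rate metrics like these are typically derived from cumulative counters sampled over time. The sketch below shows one minimal way to turn two counter samples into a bytes-per-second rate; the `CounterSample` class and the figures used are illustrative, not part of any Kafka API.

```python
from dataclasses import dataclass

@dataclass
class CounterSample:
    """A point-in-time reading of a monotonically increasing byte counter."""
    timestamp_s: float   # when the sample was taken, in seconds
    total_bytes: int     # cumulative bytes observed so far

def byte_rate(earlier: CounterSample, later: CounterSample) -> float:
    """Derive a bytes-per-second rate from two cumulative counter samples."""
    elapsed = later.timestamp_s - earlier.timestamp_s
    if elapsed <= 0:
        raise ValueError("samples must be taken at increasing timestamps")
    return (later.total_bytes - earlier.total_bytes) / elapsed

# Example: 50 MB produced over a 10-second window -> 5 MB/s
produced = byte_rate(CounterSample(0.0, 0), CounterSample(10.0, 50_000_000))
print(produced)  # 5000000.0
```

The same delta-over-interval calculation applies whether the counters come from JMX, an agent, or an exporter.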

Topic Metrics

Topics in Kafka are data streams to which messages are published. Their health directly correlates with the reliability of your data flow.

Reasons to Monitor

  • Understand topic health and ensure data consistency.
  • Identify replication issues and maintain data availability.

Critical Topic Metrics

  • Number of messages produced/consumed: Reflects data flow velocity.
  • Replication factor and In-sync replicas: Crucial for data redundancy and fault tolerance.
  • Under-replicated partitions: An early indicator of replication issues.
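Under-replication can be detected by comparing a partition's full replica assignment with its in-sync replica (ISR) set. A minimal sketch, using hypothetical broker IDs:

```python
def under_replicated(replicas: list, isr: list) -> bool:
    """A partition is under-replicated when its in-sync replica set
    is smaller than its full replica assignment."""
    return len(isr) < len(replicas)

# Partition assigned to brokers 1, 2, and 3, but only 1 and 2 are in sync:
print(under_replicated([1, 2, 3], [1, 2]))     # True
print(under_replicated([1, 2, 3], [1, 2, 3]))  # False
```

A sustained non-zero count of under-replicated partitions usually points to a slow or failed broker and is worth alerting on.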

Producer Metrics

Producers push data into Kafka topics. Their efficiency directly affects the timeliness and reliability of data ingestion into Kafka.

Reasons to Monitor

  • Ensure efficient data ingestion.
  • Detect and rectify any potential data production bottlenecks.

Critical Producer Metrics

  • Number of messages produced: Monitors data input rate.
  • Producer latency: A measure of data input delay.
  • Producer retries and errors: Key indicators of issues in data publishing.
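To make these producer metrics concrete, here is a small illustrative tracker, assuming hypothetical send and acknowledgement timestamps; a real Kafka client exposes comparable measurements (send, retry, and error rates) through its own instrumentation.

```python
class ProducerStats:
    """Tracks the producer-side metrics discussed above: messages produced,
    send latency, retries, and errors. Illustrative only, not a Kafka API."""
    def __init__(self):
        self.messages_produced = 0
        self.retries = 0
        self.errors = 0
        self.latencies_ms = []

    def record_send(self, send_ts: float, ack_ts: float, attempts: int, ok: bool):
        # Latency is measured from first send attempt to acknowledgement.
        self.latencies_ms.append((ack_ts - send_ts) * 1000.0)
        self.retries += attempts - 1  # every attempt beyond the first is a retry
        if ok:
            self.messages_produced += 1
        else:
            self.errors += 1

    def avg_latency_ms(self) -> float:
        return sum(self.latencies_ms) / len(self.latencies_ms)

stats = ProducerStats()
stats.record_send(send_ts=0.000, ack_ts=0.005, attempts=1, ok=True)   # clean send
stats.record_send(send_ts=0.010, ack_ts=0.040, attempts=3, ok=True)   # 2 retries
stats.record_send(send_ts=0.050, ack_ts=0.060, attempts=2, ok=False)  # failed after 1 retry
print(stats.messages_produced, stats.retries, stats.errors)  # 2 3 1
```

Rising retry or error counts alongside climbing latency is a classic sign of broker-side pressure.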

Consumer Metrics

Consumers pull data from Kafka topics. Efficient consumption ensures timely data availability for downstream applications.

Reasons to Monitor

  • Assess data consumption health.
  • Detect lags or delays in data processing.

Critical Consumer Metrics

  • Number of messages consumed: Offers insights into consumption rates.
  • Consumer latency and lag: Indicators of delays or backlogs in data processing.
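Consumer lag is simply the distance between a partition's latest (log-end) offset and the consumer group's committed offset. A minimal sketch, with hypothetical offsets:

```python
def consumer_lag(log_end_offsets: dict, committed_offsets: dict) -> dict:
    """Per-partition lag: how far the consumer's committed offset
    trails the partition's latest (log-end) offset."""
    return {p: log_end_offsets[p] - committed_offsets.get(p, 0)
            for p in log_end_offsets}

end = {0: 1_000, 1: 500, 2: 750}        # latest offset per partition
committed = {0: 990, 1: 500, 2: 600}    # the group's committed offsets
lag = consumer_lag(end, committed)
print(lag)                 # {0: 10, 1: 0, 2: 150}
print(sum(lag.values()))   # total lag: 160
```

Summing per-partition lag gives a single backlog figure that is easy to alert on; lag that grows steadily means consumers cannot keep up with producers.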

What Are Good Strategies for Kafka Monitoring? 

In addition to the basic producer and consumer metrics listed above, there are a number of other metrics that you can monitor to get a more complete picture of the health and performance of your Kafka producers and consumers. For example, you can monitor the following metrics:

Producer metrics:

  • Number of producer batches
  • Number of producer requests
  • Number of producer timeouts

Consumer metrics:

  • Number of consumer groups
  • Number of consumer threads
  • Number of consumer commits
  • Number of consumer offsets

Scaling and capacity planning goes a long way in keeping Kafka clusters performant. As your Kafka usage grows, closely monitor performance under load. Be prepared to scale your cluster by adding more brokers or partitions when necessary. Monitoring helps you identify when it’s time to scale, ensuring your Kafka infrastructure can handle increasing data volumes. It’s essential to retain historical monitoring data for trend analysis and capacity planning. Historical data helps you identify long-term performance trends, predict resource requirements, and make informed decisions about scaling your Kafka cluster.
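One simple way to put historical monitoring data to work for capacity planning is to fit a trend line to resource usage and project when a limit will be reached. The sketch below applies a least-squares linear fit to hypothetical disk-usage samples; real planning would use longer histories and account for seasonality.

```python
def days_until_full(samples, capacity_gb):
    """Project when disk usage will reach capacity, given (day, used_gb)
    history, using a simple least-squares linear fit."""
    n = len(samples)
    xs = [d for d, _ in samples]
    ys = [u for _, u in samples]
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in samples)
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None  # usage is flat or shrinking; no projected fill date
    return (capacity_gb - intercept) / slope

# Usage grows ~10 GB/day; with 1000 GB of capacity, it fills around day 100:
history = [(0, 0), (1, 10), (2, 20), (3, 30)]
print(days_until_full(history, 1000))  # 100.0
```

Even a crude projection like this turns raw monitoring history into an actionable lead time for adding brokers or partitions.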

Using Lumigo 1-click OpenTelemetry deployment for Kafka

No Code Changes Required: Lumigo’s primary advantage is its ability to integrate without requiring any code alterations. Your current Kafka setup remains untouched, saving time and effort.

Swift Setup with OpenTelemetry: Lumigo’s Java Distribution leverages the power of OpenTelemetry. In mere minutes, you can have an end-to-end Kafka monitoring setup, ready to provide real-time insights.

Enhanced Visibility: Dive deep into individual metrics with Lumigo, ensuring a 360-degree view of your Kafka clusters. With such granularity, it becomes easier to identify and address potential issues before they escalate.

Proactive Alerts: Stay ahead of anomalies with Lumigo’s real-time alerts and notifications. Rather than being reactive, Lumigo ensures you’re always a step ahead, identifying and rectifying irregularities.

Setting Up Lumigo’s Java Distribution

Using the Lumigo Java Distribution, you can quickly gain detailed insights into your Kafka operations. It’s important to highlight the Lumigo distribution's ability to integrate with minimal effort, eliminating the need for code modifications. Additionally, the distro is built on industry-standard OpenTelemetry and is designed not to trade ease of deployment for depth of insight.


Download the latest version from the Lumigo Java Distro Releases page.

Environment Configuration:

Set the LUMIGO_TRACER_TOKEN environment variable to the unique token from your Lumigo account. This can be retrieved from the Lumigo platform under Settings -> Tracing -> Manual tracing. Replace <token> with the relevant value:

     LUMIGO_TRACER_TOKEN=<token>

It’s also recommended to set the OTEL_SERVICE_NAME environment variable, defining the service name for your application. This is the name for your Lumigo-monitored application, which will be visible and available within your Lumigo instance:

     OTEL_SERVICE_NAME=<service name>

Integration Options

Option 1: JAVA_TOOL_OPTIONS, Preferred for Containerized Applications

Set the JAVA_TOOL_OPTIONS environment variable within your environment and reference the jar from the download above:

export JAVA_TOOL_OPTIONS="-javaagent:<path-to-lumigo-otel-javaagent>"

Option 2: Command-line Parameters

Invoke the -javaagent property during startup, referencing the downloaded distro:

     java -javaagent:<path-to-lumigo-otel-javaagent> -jar app.jar

Upon deployment, trace data will immediately begin populating your Lumigo Dashboard. This proves invaluable, especially with Kafka, which commonly serves as the messaging bridge connecting various components of application deployments. With Lumigo, you gain deeper insights, enabling you to visualize end-to-end tracing across multiple services within a single invocation. To find out more about monitoring Kafka using Lumigo, see the blog post on Auto-Instrumenting OpenTelemetry for Kafka.

Kafka Monitoring is Pivotal to Deployment

While Apache Kafka’s prowess and adaptability are commendable for real-time data streaming, the true key to harnessing Kafka’s immense potential hinges on vigilant monitoring. Every deployment that integrates Kafka as part of its infrastructure needs to recognize the indispensable role of monitoring in maintaining system health, preempting issues, and ensuring optimal data flow.

Effective monitoring is not a luxury; it’s an essential component of Kafka management. Lumigo’s 1-click OpenTelemetry deployment provides an effortless path to this. Sign up for a free Lumigo account to eliminate the complexities of setup and ensure that you are always ahead of potential pitfalls. After all, the real power of a system isn’t just in its creation, but in how we manage and optimize its function.