Metrics are a key component of OpenTelemetry, an open-source project designed for generating, collecting, and exporting telemetry data from distributed applications. It offers developers and operations teams an edge in observing and understanding application performance and behavior, especially in cloud-native ecosystems.
OpenTelemetry metrics provide a consistent, unified way to capture and export performance and usage data. They are designed for high performance and support multiple metric data types, which can be used to monitor virtually every aspect of your systems, whether that's your application's response time, memory usage, or error rates.
Metrics are the backbone of system monitoring. They provide quantitative data about the state and performance of your system and applications. Metrics allow you to measure, track, benchmark and analyze various aspects of your systems, providing critical insights for decision-making and problem-solving.
System monitoring without metrics would be like navigating through a dense forest without a compass. Without metrics, you are blind to a system’s performance and health. With metrics, you can track your system’s performance over time, identify trends, detect anomalies, and troubleshoot issues effectively.
Moreover, metrics can support data-driven decisions. Whether you’re scaling your system, optimizing performance, or diagnosing problems, metrics provide the hard data you need to make informed decisions. With OpenTelemetry metrics, you can capture and analyze these critical metrics in a standardized, efficient, and scalable way.
Learn more in our detailed guide to OpenTelemetry architecture
Metric instruments are the primary tools used by OpenTelemetry to capture telemetry data. They are designed to be lightweight and efficient, allowing them to capture data with minimal impact on system performance.
There are several types of metric instruments available, including counters, gauges, and histograms (described in more detail below). Metric instruments provide the raw data that forms the foundation of all telemetry analysis. They are highly flexible and can be customized to capture a wide range of data points.
Metric events represent the raw telemetry data captured by metric instruments. They are the basic unit of information in OpenTelemetry metrics, recording specific instances of events or measurements.
Each metric event consists of a timestamp, a value, and a set of attributes. The timestamp indicates when the event occurred, the value represents the data captured by the metric instrument, and the attributes provide additional context about the event. For example, a metric event might record the number of requests received by a web server at a particular time, along with the IP address of the server and the type of requests.
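As a rough sketch, a metric event with these three parts might be modeled as follows. This is an illustration of the structure just described, not the OpenTelemetry SDK's internal data type, and the attribute names and values are invented for the example:

```python
import time
from dataclasses import dataclass, field

@dataclass
class MetricEvent:
    """Conceptual metric event: a value, a set of attributes, and a timestamp.
    Illustrative only; not the OpenTelemetry SDK's internal representation."""
    value: float
    attributes: dict = field(default_factory=dict)
    timestamp: float = field(default_factory=time.time)

# A web server recording one observation, as in the example above
# (hypothetical attribute names and values).
event = MetricEvent(
    value=42,  # e.g. requests received in this interval
    attributes={"server.ip": "10.0.0.7", "http.request.method": "GET"},
)
```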
Metric events are highly granular, providing a detailed view of the behavior of a system. They can be aggregated and analyzed in various ways to provide insights into trends, patterns, and anomalies, making them a powerful tool for debugging and performance tuning.
Aggregations are the means by which OpenTelemetry metrics process and interpret metric events. They take the raw data from metric events and transform it into a form that can be easily understood and analyzed.
There are several types of aggregations available, each suited to different types of data and analysis. These include sum aggregations, which add up the values of a series of metric events, and histogram aggregations, which group metric events into bins based on their values. There are also min-max-sum-count aggregations, which provide a statistical summary of a series of metric events, and sketch aggregations, which provide an approximate representation of the distribution of metric events.
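To make two of these aggregation types concrete, here is a minimal, stdlib-only sketch of a sum aggregation and a min-max-sum-count aggregation over a handful of hypothetical latency measurements (the values are invented):

```python
values = [120, 85, 430, 250, 95]  # hypothetical request latencies in ms

# Sum aggregation: add up the values of a series of metric events.
total = sum(values)

# Min-max-sum-count aggregation: a compact statistical summary
# of the same series.
mmsc = {"min": min(values), "max": max(values),
        "sum": sum(values), "count": len(values)}
```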
Counters are a type of metric that can only increase over time. They’re ideal for tracking the number of events that occur in your system. For example, you could use a counter to track the number of requests that your service handles, the number of errors that occur, or the number of tasks that are completed.
A key aspect of counters is that they’re cumulative. This means that they keep a running total of the events they’re tracking. This makes counters particularly useful for tracking rates. By comparing the value of a counter at two different points in time, you can calculate the rate at which the events are occurring.
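The rate calculation described above can be sketched as follows; the two readings and the 60-second interval between them are assumed values:

```python
# Two cumulative counter readings (requests handled), taken 60 s apart.
reading_t0 = (0.0, 1_000)    # (timestamp in seconds, counter value)
reading_t1 = (60.0, 1_900)

def rate_per_second(earlier, later):
    """Rate of events between two cumulative counter readings."""
    (t0, v0), (t1, v1) = earlier, later
    return (v1 - v0) / (t1 - t0)

rps = rate_per_second(reading_t0, reading_t1)  # 900 requests over 60 s
```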
Gauges, on the other hand, are a type of metric that can increase or decrease over time. They’re perfect for tracking values that can go up and down, like the amount of memory being used by your application, the number of active connections to your service, or the current CPU usage.
Unlike counters, gauges are not cumulative. They simply reflect the current value at any given moment. This makes gauges ideal for monitoring the current state of your system. They can give you a real-time snapshot of what’s happening, allowing you to respond quickly to changes.
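The snapshot semantics of a gauge can be sketched with a simple callback pattern, loosely mirroring how observable gauges are collected. This is an illustration rather than the SDK API, and `active_connections` is a stand-in for a real measurement:

```python
# A gauge reads the current value on demand via a callback; it does not
# accumulate, so the reported value can go down as well as up.
active_connections = 12

def observe_connections():
    """Callback invoked at each collection interval."""
    return active_connections

class Gauge:
    def __init__(self, callback):
        self.callback = callback

    def collect(self):
        return self.callback()  # snapshot of the current state

gauge = Gauge(observe_connections)
first = gauge.collect()
active_connections = 7   # the measured value drops, unlike a counter
second = gauge.collect()
```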
Histograms are a type of metric that provides a statistical summary of a set of values. In the context of OpenTelemetry, histograms are used to record the distribution of values for a particular event or operation. For instance, if you want to measure the response time of a server, you could use a histogram to capture the distribution of response times. Histograms can provide valuable insights into the behavior of your system, helping you to identify anomalies and optimize performance.
Histograms in OpenTelemetry are represented as a collection of buckets. Each bucket represents a range of values, and the count of items falling within that range. This structure allows for a concise representation of a distribution, enabling efficient analysis of large data sets.
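The bucket structure can be sketched in a few lines. The boundaries and response times below are assumed values, using the convention that bucket `i` holds values up to and including `bounds[i]`, with a final overflow bucket for everything above the last boundary:

```python
from bisect import bisect_left

bounds = [100, 250, 500]           # assumed bucket upper bounds (ms)
counts = [0] * (len(bounds) + 1)   # last bucket catches values > 500

response_times_ms = [85, 95, 120, 250, 430, 730]  # invented measurements
for v in response_times_ms:
    counts[bisect_left(bounds, v)] += 1

# counts now holds one tally per bucket: two values <= 100,
# two in (100, 250], one in (250, 500], and one above 500.
```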
Summaries are a type of metric that condenses a set of observed values into key statistics. They are similar to histograms in that both provide a statistical summary of a data set. Unlike histograms, however, summaries do not capture the detailed distribution of values; instead, they provide a snapshot of the data at a particular point in time, including the minimum, maximum, mean, and quantile values.
Summaries in OpenTelemetry can be used to measure various aspects of a system, such as response times, throughput, and latency. By providing a high-level overview of these metrics, summaries can help developers quickly identify potential issues and areas for optimization.
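A summary of the kind described above can be computed with the standard library. The observations are invented, and the percentile indices assume `statistics.quantiles` with `n=100` (which returns 99 cut points):

```python
import statistics

observed = [12, 15, 14, 10, 48, 13, 16, 11, 14, 250]  # invented response times (ms)

pct = statistics.quantiles(observed, n=100, method="inclusive")
summary = {
    "min": min(observed),
    "max": max(observed),
    "mean": statistics.mean(observed),
    "p50": pct[49],   # median
    "p95": pct[94],   # 95th percentile
}
```

Note how a single slow outlier (250 ms) pulls the mean well above the median, which is exactly the kind of insight quantiles surface.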
The following best practices can help you implement and make better use of OpenTelemetry metrics.
When instrumenting your application with OpenTelemetry metrics, it’s essential to add context to your metrics. Context refers to additional information that can help you understand the circumstances under which a metric was recorded. This might include information about the environment, the user, or the specific operation being measured. By adding context to your metrics, you can gain a deeper understanding of your system’s behavior and performance.
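The difference context makes can be sketched with a minimal stand-in for an instrument's record call. The metric and attribute names below follow semantic-convention style but are assumptions for this sketch, not a specific SDK API:

```python
recorded = []

def record(name, value, attributes=None):
    """Minimal stand-in for an instrument's record/add call."""
    recorded.append({"name": name, "value": value,
                     "attributes": attributes or {}})

# Without context: you know latency spiked, but not where or for whom.
record("http.server.duration", 950)

# With context: the same data point now tells you which route,
# environment, and outcome were involved.
record("http.server.duration", 950, {
    "http.route": "/checkout",
    "deployment.environment": "production",
    "http.response.status_code": 500,
})
```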
Sampling is an important aspect of metric collection. It involves selecting a subset of data from a larger data set for analysis. Efficient sampling can help you reduce the data volume and computational overhead associated with metric collection, without sacrificing the quality of your insights.
In the context of OpenTelemetry, efficient sampling might involve using probabilistic sampling methods, which are designed to provide a representative sample of a data set with minimal computational overhead. Alternatively, it might involve using adaptive sampling methods, which adjust the sampling rate based on the observed data.
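A probabilistic (head) sampler can be sketched in a few lines; the event count and 10% rate are assumed values, and the seed exists only to make the sketch repeatable:

```python
import random

def probabilistic_sample(events, rate, seed=0):
    """Keep each event independently with probability `rate`."""
    rng = random.Random(seed)  # seeded only so the sketch is repeatable
    return [e for e in events if rng.random() < rate]

events = list(range(10_000))          # stand-ins for metric events
sampled = probabilistic_sample(events, rate=0.1)

# When estimating totals from the sample, scale counts back up by 1/rate.
estimated_total = len(sampled) / 0.1
```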
Aggregating data at the source can significantly reduce the volume of data that needs to be transmitted and stored. This can lead to substantial savings in terms of network bandwidth and storage costs. In addition, by aggregating data at the source, you can reduce the computational overhead associated with processing raw data.
In the context of OpenTelemetry, this might involve using the OpenTelemetry SDK to aggregate metrics at the client side before sending them to the backend. This can include operations such as summing, averaging, or computing quantiles over a set of metric values.
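The idea can be sketched with plain Python: instead of exporting one data point per request, keep a running (count, sum) per route and ship only the aggregate at each interval. The routes and latencies are invented:

```python
from collections import defaultdict

# Raw per-request measurements collected during one export interval.
raw = [("/home", 12), ("/home", 15), ("/login", 48), ("/home", 11)]

# Client-side aggregation: one (count, sum) pair per route rather than
# one data point per request -- far less to transmit and store.
aggregated = defaultdict(lambda: {"count": 0, "sum": 0})
for route, latency_ms in raw:
    aggregated[route]["count"] += 1
    aggregated[route]["sum"] += latency_ms

export_payload = dict(aggregated)  # 2 series instead of 4 raw events
```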
OpenTelemetry is not just about metrics. It also provides support for logging and tracing, which are other essential aspects of application observability. By integrating your metrics with logging and tracing, you can gain a more holistic view of your system’s behavior and performance.
For instance, you might use metrics to identify a spike in latency, and then use traces to pinpoint the specific operations that are causing the delay. Similarly, you might use logs to provide additional context for your metrics, such as the specific error messages associated with a high error rate.
Metric cardinality refers to the number of unique combinations of metric names and label values. High cardinality can lead to a large volume of data, which can be challenging to manage and analyze. Therefore, it’s important to limit your metric cardinality to a manageable level.
In the context of OpenTelemetry, this might involve using a limited set of labels for your metrics, or using a consistent naming convention for your metric names. By limiting your metric cardinality, you can make your metrics easier to manage and analyze, and you can reduce the computational overhead associated with metric collection.
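Cardinality is easy to reason about as a count of distinct series. In this sketch (metric and label names invented), four recorded points collapse into three unique name-plus-label combinations:

```python
# Cardinality = number of distinct (metric name, label values) combinations.
points = [
    ("http.requests", (("method", "GET"), ("status", "200"))),
    ("http.requests", (("method", "GET"), ("status", "200"))),
    ("http.requests", (("method", "POST"), ("status", "201"))),
    ("http.requests", (("method", "GET"), ("status", "500"))),
]

cardinality = len(set(points))  # 3 distinct series from 4 points

# Adding a label such as a per-user ID would multiply this by the number
# of users, which is why unbounded values don't belong in metric labels.
```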
Learn more in our detailed guide to OpenTelemetry collector
Microservices Monitoring with Lumigo
OpenTelemetry offers a pluggable architecture that enables you to add technology protocols and formats easily. Using OpenTelemetry, Lumigo provides containerized applications with end-to-end observability through automated distributed tracing.