What Should You Observe When Deploying Microservices?

Microservices have compelling benefits for development organizations. They make applications easily scalable, highly resilient, and easier to maintain and update. However, with separate services distributed across different hosts, keeping track of dozens or even hundreds of microservices can be challenging.

With greater scale and complexity comes a greater need for observability. There are many potential points of failure and constant updates in a microservices architecture, which cannot be addressed by traditional monitoring solutions. The many unknown, dynamic factors in a distributed environment make it necessary to build observability into the system by design.

Knowing what runs in production is important for keeping delivery cycles short and preventing downtime and other issues. Observability mechanisms provide visibility into the distributed system to help developers understand their application’s performance. Observability offers the necessary control to identify and address issues quickly.

The Three Pillars of Observability

Achieving observability requires implementing three data classes known as the “pillars” of observability:

  • Logs – a log is a written record of a specific event, describing what happened and when. Logs contain details such as timestamps and payloads that provide important context for analysis. Logs come in three forms: plaintext, structured, and binary. Plaintext logs are the most widely used, although structured logs are gaining popularity because they add queryable metadata (see the sketch after this list). Logs are usually the first resource consulted when investigating a system issue.
  • Metrics – a metric is typically a numeric value tracked over time, used to measure a system’s state or performance. Metrics carry attributes such as names, timestamps, and KPI values to provide context. Metrics differ from logs in that they have a default structure, which makes them easy to optimize for storage. They are also easy to query and allow analysts to track changes to a specific element over time.
  • Traces – a trace is the mapped journey of a given request across a distributed system. It encodes relevant data for each operation performed on the request (each a “span”) as it moves through the system. A trace may include one or many spans, allowing you to track the course of a request through the microservices system to locate bottlenecks or the cause of a failure.
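
To make the structured log format concrete, here is a minimal sketch using only the Python standard library. The field names (service, request_id) are illustrative choices, not a required schema:

```python
# Minimal structured-logging sketch (standard library only).
# Field names like "service" and "request_id" are illustrative, not a standard schema.
import json
import logging
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object with queryable fields."""
    def format(self, record):
        entry = {
            "timestamp": datetime.fromtimestamp(record.created, tz=timezone.utc).isoformat(),
            "level": record.levelname,
            "message": record.getMessage(),
            # Values passed via `extra=` become queryable metadata.
            "service": getattr(record, "service", None),
            "request_id": getattr(record, "request_id", None),
        }
        return json.dumps(entry)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("orders")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order created", extra={"service": "orders", "request_id": "abc-123"})
```

Each line this logger emits is a self-describing JSON object, so a log backend can index and query the metadata fields directly instead of parsing free-form text.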

Using these three data classes does not guarantee observability, especially if you use them in isolation or with separate tools. However, integrating them into a unified monitoring solution can help you enhance your control over your microservices infrastructure. As part of a comprehensive observability strategy, logs, metrics, and traces can help you identify issues, understand why they occur, and address them quickly.

Related content: Read our guide to cloud native monitoring (coming soon).

Microservices Observability Patterns

1. Distributed Tracing

Distributed tracing provides insight into what the application is doing across multiple connected services. A distributed tracer is similar to a performance profiler for a monolithic application. It records information about service calls made while processing a request, showing the different services involved, how they interact, and how much time each service spends handling external requests.

In distributed tracing, each external request is assigned a unique ID and tracked on a central server, enabling visualization and analysis of requests and the associated data flows.
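
As a minimal sketch of the unique-ID mechanism, the following propagates a per-request ID between services over plain HTTP using the requests library; the X-Request-ID header name and the downstream URL are illustrative conventions, not part of any particular tracing product:

```python
# Propagating a per-request ID between services (a simplified stand-in
# for what distributed tracers do automatically). The header name and
# the downstream URL are illustrative assumptions.
import uuid
import requests

def handle_incoming(headers: dict) -> str:
    # Reuse the caller's ID if one was sent; otherwise this service is
    # the entry point and starts a new trace.
    request_id = headers.get("X-Request-ID") or str(uuid.uuid4())
    call_downstream("http://inventory.internal/check", request_id)
    return request_id

def call_downstream(url: str, request_id: str) -> None:
    # Forward the same ID so a central server can stitch all the
    # service calls for this request into one trace.
    requests.get(url, headers={"X-Request-ID": request_id}, timeout=2)
```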

Related content: Read our guide to microservices tracing (coming soon).

2. Health Check API

In some cases, a service is “unhealthy”, meaning it is running but cannot process requests. For example, a service instance that was recently started might still be initializing, or a service might have frozen due to a software bug without crashing completely. A service might also be running but not functioning properly because it has lost access to its database, or because the database is overloaded and not accepting connections.

Traffic should not be routed to a service instance unless it is running and ready to serve HTTP/HTTPS requests. Services should have an automated recovery process, and if this process fails, the service instance must be terminated and recreated. A service instance must therefore be able to notify the deployment infrastructure whether or not it can handle requests.
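
As an illustration, here is a minimal health check endpoint sketched with Flask (an assumption; any HTTP framework works), where the database probe is a hypothetical stand-in for your service’s real dependency checks:

```python
# Minimal health check API sketch. Flask and the database probe are
# illustrative assumptions; substitute your framework and dependencies.
from flask import Flask, jsonify

app = Flask(__name__)

def database_is_reachable() -> bool:
    try:
        # Hypothetical connectivity probe, e.g. executing "SELECT 1".
        return True
    except Exception:
        return False

@app.route("/health")
def health():
    if database_is_reachable():
        return jsonify(status="UP"), 200
    # A non-2xx status tells the deployment infrastructure to stop
    # routing traffic to this instance and, if the state persists,
    # to terminate and recreate it.
    return jsonify(status="DOWN"), 503
```

Orchestrators such as Kubernetes can poll an endpoint like this as a readiness or liveness probe and act on non-2xx responses.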

3. Log Aggregation

Logging is essential for effective troubleshooting (and for other purposes, such as security and auditing). Logging can be difficult to achieve in a microservices architecture, because log entries are spread across multiple services, each with its own log files. To make things worse, service instances might be ephemeral, and when they shut down, log files are lost.

Log aggregation is the solution. A log aggregation pipeline sends logs from all service instances to a centralized log server, which is responsible for aggregating the logs, storing them, and making them accessible and searchable. Via the central logging server, teams can retrieve, visualize, and analyze logs, and can define alerts triggered when specific messages or patterns appear in the logs.
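
As one simple way to get logs off an ephemeral instance, the sketch below sends log records to a central endpoint using the standard library’s SysLogHandler; the logs.internal address is hypothetical, and in practice the receiver is often a node-level collector such as Fluentd or Logstash:

```python
# Shipping logs to a central server instead of local files, so entries
# survive instance shutdown. The address is a hypothetical aggregation
# endpoint; real pipelines often go through Fluentd, Logstash, etc.
import logging
import logging.handlers

handler = logging.handlers.SysLogHandler(address=("logs.internal", 514))
handler.setFormatter(logging.Formatter("%(name)s: %(levelname)s %(message)s"))

logger = logging.getLogger("payments")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Every instance of every service logs to the same central endpoint.
logger.info("payment authorized for order %s", "abc-123")
```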

4. Auditing

Auditing is essential in environments with strict security or compliance requirements, and it is also important for change management and other operational processes. Auditing means that the actions of each user or service account are recorded in an audit log.

Audit log entries record a user’s identity, actions taken by the user, and related business objects. Audit logs are usually stored in database tables.

In a microservices environment, there are several ways to implement auditing, including but not limited to:

  • Adding audit logging to business logic – each service method can generate an audit log entry and store it in a database, as sketched below.
  • Using event sourcing – the event sourcing pattern ensures every change to application state is captured in an event object, making it easy to see the history and roll back to previous states. If an application uses event sourcing, event objects are an inherent form of auditing.
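
Here is a minimal sketch of the first option: a decorator that makes each service method write an audit entry as part of its business logic. It uses sqlite3 and illustrative table and field names for brevity; a real service would write to its own database:

```python
# Audit logging woven into business logic. sqlite3 and the table layout
# are illustrative; a production service would use its own database.
import functools
import sqlite3
from datetime import datetime, timezone

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE audit_log (ts TEXT, user_id TEXT, action TEXT, object_id TEXT)")

def audited(action: str):
    """Wrap a service method so every call produces an audit entry."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(user_id: str, object_id: str, *args, **kwargs):
            result = fn(user_id, object_id, *args, **kwargs)
            db.execute(
                "INSERT INTO audit_log VALUES (?, ?, ?, ?)",
                (datetime.now(timezone.utc).isoformat(), user_id, action, object_id),
            )
            db.commit()
            return result
        return wrapper
    return decorator

@audited("order.cancel")
def cancel_order(user_id: str, object_id: str) -> None:
    ...  # business logic for cancelling the order

cancel_order("user-42", "order-7")
```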

5. Exception Tracking

Errors may occur while microservices process a request. If the service is well programmed, the service instance will throw an exception. These exceptions can include error codes, messages, and stack traces.

Exceptions are critical for observability. As a general rule, all exceptions should be logged, and developers should be notified when exceptions occur in their code, so that they can investigate and discover the root cause.
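
A minimal sketch of this rule with the standard library is shown below; dedicated exception trackers (Sentry, for example) apply the same idea of capturing the exception with its context and notifying the owning team:

```python
# Logging every exception with its stack trace. logger.exception records
# the message plus the traceback, which is the raw material for
# root cause analysis.
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("checkout")

def process_request(payload: dict) -> float:
    try:
        return payload["amount"] / payload["quantity"]
    except Exception:
        logger.exception("failed to process request: payload=%r", payload)
        raise

try:
    process_request({"amount": 100, "quantity": 0})
except ZeroDivisionError:
    pass  # already logged above; a real service would return an error response
```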

6. Application Metrics

Application monitoring systems collect metrics from all parts of the technology stack and provide information about application health. Metrics can include:

  • Infrastructure-level metrics such as CPU, memory, and disk utilization
  • Application-level metrics such as service request latency and number of requests
  • End-user metrics such as application load times

In a microservices environment, application metrics are the responsibility of service developers. They need to instrument the service to gather metrics about its behavior and expose those metrics to a central metrics server to make them accessible and useful.
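
For example, here is a minimal instrumentation sketch using the prometheus_client library (an assumption; the same pattern applies to other metrics stacks). The service records a request counter and a latency histogram and exposes them on a /metrics endpoint for the central server to scrape:

```python
# Instrumenting a service and exposing its metrics for scraping.
# prometheus_client and the metric names are illustrative choices.
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("http_requests_total", "Total requests served", ["path", "status"])
LATENCY = Histogram("http_request_duration_seconds", "Request latency", ["path"])

def handle(path: str) -> None:
    start = time.perf_counter()
    status = "200"  # stand-in for real request handling
    REQUESTS.labels(path=path, status=status).inc()
    LATENCY.labels(path=path).observe(time.perf_counter() - start)

if __name__ == "__main__":
    start_http_server(8000)  # serves /metrics for the central server to scrape
    while True:
        handle("/orders")
        time.sleep(1)
```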

Microservices Observability Challenges and Solutions

Here are some of the major challenges for observability in microservices and how to address them.

Identifying the Root Cause of Distributed Errors

Tracing requests and data through a microservices application composed of many small services is a major challenge. The system routes requests via various services in convoluted journeys, making troubleshooting and debugging harder.

Developers can apply various techniques to track requests throughout their lifecycle. Open source technologies like OpenTracing and Zipkin can help identify and monitor bottlenecks in the delivery pipeline.

Large Data Volumes

The combined logs, metrics, and traces of your observability solution can generate an overwhelming volume of data. While intended to provide a clear view of each service, the data’s scale and detail can be too much to manage. Even if you collect this data automatically, it might create a bottleneck when the time comes to process it.

DevOps advances offer effective solutions to data overload. You can leverage an orchestration platform to help manage your microservices project’s deployment, scaling, and other aspects.

Latency and Reliability Issues

Breaking down a system into microservices can affect its overall reliability. Microservices systems can also introduce more latency than a monolithic system, because small delays across the many services handling a request add up, impacting the overall application.

You can use a software intelligence platform to help prevent these issues. It can automatically identify components and dependencies and assess their behavior to determine whether they are functioning correctly. A software intelligence solution can also help identify the root cause of an issue.

Microservices Observability with Lumigo

Lumigo is a cloud native observability tool, purpose-built to navigate the complexities of microservices. Through automated distributed tracing, Lumigo is able to stitch together the distributed components of an application in one complete view, and track every service of every request. Taking an agentless approach to monitoring, Lumigo sees through the black boxes of third parties, APIs and managed services.

With Lumigo users can:

  • See the end-to-end path of a transaction and full system map of applications
  • Monitor and debug third party APIs and managed services (e.g., Amazon DynamoDB, Twilio, Stripe)
  • Go from alert to root cause analysis in one click
  • Understand system behavior and explore performance and cost issues
  • Group services into business contexts

Get started with a free trial of Lumigo for your microservice applications.

What is Cloud-Native Monitoring?

Cloud-native monitoring is the process of instrumenting a cloud-native application to collect, aggregate, and analyze logs, metrics, distributed traces, and other telemetry. The goal is to better understand application behavior. Logs, metrics, and distributed traces are often all necessary to get a full picture of a cloud-native system, and distributed tracing becomes more important as an application becomes more distributed.

Cloud-native monitoring is often referred to as microservices monitoring, because cloud-native applications are commonly built in a microservices architecture, with each component operating as an independent, decoupled microservice that interacts with others over the network and shared services.

Monitoring can involve a broad range of activities, from keeping track of specific system properties on a host, such as CPU utilization, storage space, and memory consumption, to detailed analysis of distributed requests served by multiple components and how failures spread among them.

One of the main differences between cloud-native environments and more traditional environments that affects monitoring is that many cloud-native components are ephemeral: they are frequently created and destroyed. It is therefore not always possible to tie monitoring to specific resource names, and monitoring systems must have a strategy for collecting logs from distributed components for central storage and analysis.

In this article:

  • What is Cloud-Native Monitoring?
  • Why Is Cloud-Native Monitoring Important?
  • Cloud-Native Monitoring: What Should We Monitor?
    • Latency
    • Traffic
    • Error Rate
    • Saturation
  • 5 Cloud Native Monitoring Best Practices
    • 1. Embrace Distributed Tracing
    • 2. Leverage Automation
    • 3. Configure Alerts Correctly
    • 4. Prioritize Alerts
    • 5. Create Specialized Dashboards
  • Cloud-Native Monitoring with Lumigo

Why Is Cloud-Native Monitoring Important?

IT environments have steadily become more complex, driven by the growth of cloud computing and hybrid environments; the proliferation of nodes, endpoints, and technology stacks; additional levels of abstraction in architectures; and the growing use of containerized and serverless architectures. Visibility over IT resources has become a major challenge, and debugging complex distributed applications is time-consuming and frustrating, especially while an outage is ongoing.

Cloud-Native Monitoring: What Should We Monitor?

According to Google’s SRE book, four key metrics, known as the “golden signals,” are the most important for evaluating system performance and health: latency, traffic, error rate, and saturation.

Latency

Latency is the time it takes for a service or system to respond to a request. It covers the journey of sending a request through the network, processing it, and returning a response. Pay particular attention to the latency of errors: failed responses can take unpredictable amounts of time, both longer-than-expected responses (for example, when timeouts are involved) and fail-fast responses (for example, in case of malformed input).

Traffic

Traffic is a measure of the load served by a system. There are several ways to define and measure traffic depending on the system: for example, traffic for a database might be measured in transactions per second, while the number of requests served is a good measure of traffic for web server-like applications.

Error Rate

The error rate is the rate of requests that fail. There are various types of failure, including explicit failures, undesired responses, and slow responses, which makes monitoring errors challenging. Error tracking is a form of monitoring that collects environmental data to identify the causes of errors. Understanding errors is important for maintaining an adequate level of service for end users.

Saturation

Saturation is the extent to which a system is “full.” This metric measures the fraction of a constrained resource, such as memory or CPU, that is currently in use. Setting a saturation target is important because system performance depends on changing resource utilization patterns, and monitoring saturation helps determine workload targets that reflect real-world demands.
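
As a toy illustration of how the four signals relate to raw request data, the sketch below computes all four over a window of request records; the field names, the five-minute window, and the crude percentile calculation are illustrative choices only:

```python
# Computing the four golden signals from a window of request records.
# All names and the window size are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Request:
    duration_ms: float
    failed: bool

WINDOW_SECONDS = 300  # five-minute observation window

def golden_signals(requests: list[Request], cpu_used: float, cpu_total: float) -> dict:
    traffic = len(requests) / WINDOW_SECONDS                      # requests per second
    error_rate = (sum(r.failed for r in requests) / len(requests)) if requests else 0.0
    durations = sorted(r.duration_ms for r in requests)
    p99 = durations[int(0.99 * (len(durations) - 1))] if durations else 0.0  # rough percentile
    saturation = cpu_used / cpu_total                             # fraction of capacity in use
    return {"latency_p99_ms": p99, "traffic_rps": traffic,
            "error_rate": error_rate, "saturation": saturation}

print(golden_signals([Request(12.0, False), Request(480.0, True)], cpu_used=2.4, cpu_total=4.0))
```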

5 Cloud Native Monitoring Best Practices

Here are some important best practices for monitoring your cloud-native deployments.

1. Embrace Distributed Tracing

Cloud-native architectures are more complex than traditional application environments, consisting of distributed systems with many moving parts, often owned by multiple teams and written in a variety of languages. Being able to pinpoint quickly and accurately where errors originate and how they propagate to end users is key to detecting and solving issues.

Distributed tracing is a monitoring technique that has come to the forefront with cloud-native applications due to their innate distribution and the complexity therein. In a nutshell, distributed tracing consists of collecting, across all components, a “trace” that describes what each component does to serve a specific request. Think of it as a distributed log ledger, with each of the components of your application adding to the history of a request.

OpenTelemetry, a project under the umbrella of the Cloud Native Computing Foundation (CNCF), is quickly rising as the de-facto standard for distributed tracing, being increasingly integrated in open-source and commercial projects alike.
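
A minimal tracing sketch with the OpenTelemetry Python SDK (the opentelemetry-api and opentelemetry-sdk packages) might look like the following, with a console exporter standing in for a real tracing backend:

```python
# Creating nested spans with the OpenTelemetry SDK. The console exporter
# is a stand-in; production setups export to a collector or vendor backend.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")

with tracer.start_as_current_span("handle-request") as span:
    span.set_attribute("http.route", "/orders")
    with tracer.start_as_current_span("query-inventory"):
        pass  # child span is automatically linked to its parent in the trace
```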

Related content: Read our guide to OpenTelemetry

2. Leverage Automation

Automate all tasks possible, as this will help you monitor a dynamic, distributed environment. Automation is especially important for deployment and baselining. Relying on a team to manually implement monitoring configuration and instrumentation tasks is time-consuming and expensive, and it makes it harder to keep the monitoring tools updated. Better still, select a monitoring tool that is inherently automated and frees you from the toil of maintaining monitoring configurations as your code evolves.

Automated monitoring also helps minimize blind spots and increase observability, enabling more contextual, accurate insights. You can use a CI/CD tool to store environment-specific parameters packaged with every delivery and to execute processes such as making service calls.

Implement continuous testing by automating regression and performance tests. CI/CD pipelines usually incorporate various forms of automation to improve code quality and accelerate delivery processes.

3. Configure Alerts Correctly

Take the time to outline the types of alerts required by various teams to help them identify problems quickly. Proper alert configuration is important for preventing alert fatigue and ensuring alert specificity to minimize false positives. An effective alert strategy helps reduce response times so teams can solve issues faster. You can automate baseline creation to facilitate alert configuration, automate root cause analysis, and prioritize alerts.

4. Prioritize Alerts

Group alerts based on their business impact to help teams prioritize high-risk alerts. Risk classification and prioritization are important for focusing efforts on relevant issues, saving time, and preventing the worst damage. Different alert groups can generate alerts sent to different teams for specialized treatment.

5. Create Specialized Dashboards

Create custom dashboards to provide specific teams and analysts with the relevant monitoring data. You can have a different role-specific dashboard for each team to prevent team members from viewing sensitive or irrelevant data. There should be a unifying, coherent data model underlying the data across your specialized dashboards.

Cloud-Native Monitoring with Lumigo

Lumigo is a cloud native observability tool, purpose-built to navigate the complexities of microservices. Through automated distributed tracing, Lumigo is able to stitch together the distributed components of an application in one complete view, and track every service of every request. Taking an agentless approach to monitoring, Lumigo sees through the black boxes of third parties, APIs and managed services.

With Lumigo users can:

  • See the end-to-end path of a transaction and full system map of applications
  • Monitor and debug third party APIs and managed services (e.g., Amazon DynamoDB, Twilio, Stripe)
  • Go from alert to root cause analysis in one click
  • Understand system behavior and explore performance and cost issues
  • Group services into business contexts

Get started with a free trial of Lumigo for your microservice applications.