At Lumigo, building developer-first tools has always been at the forefront of our approach to troubleshooting and debugging. As developers ourselves, we have experienced firsthand the frustration and intricacies of sifting through logs looking for answers. We’ve also felt the pressure of the clock ticking, with production issues waiting to be resolved and the need for timely answers to surfaced application issues. We understand all too well the need for clarity and the wish for tools that don’t just identify problems but also demystify the ‘why’ behind them.
Building on five years of expertise in serverless observability, we've harnessed the power of auto-instrumentation to address the challenges of containerized environments. We took our learnings from the serverless world, with all the unique intricacies it presents, and applied that deep insight to the realm of containers. More than a mere add-on, this development marks a significant leap in our journey to refine observability for all microservice setups.
Microservices, by their very nature, are composed of myriad components. While this composition is key to their adaptability and scalability, it also introduces multifaceted complexity when it comes to debugging and troubleshooting. Each service, node, pod, container, cluster, and namespace adds another dimension to the challenge. The sheer number of moving parts calls for a solution that can provide clarity amidst this inherent complexity.
Distributed tracing, particularly through tools like OpenTelemetry, has become a cornerstone in addressing the intricate nature of microservices, offering an invaluable lens as services sprawl across disparate boundaries. As OpenTelemetry gains traction and acceptance within the wider tech industry, we, as developers, are discovering more intuitive ways to delve into microservices.
At Lumigo, our mission is to streamline that journey, tapping into the vast potential of OpenTelemetry with unparalleled ease. Our 1-click OpenTelemetry ensures rapid access to the contextual information essential for effective troubleshooting, without cumbersome deployments or code changes. Today, many developers using it are already resolving issues up to 80% faster, moving beyond the daunting task of combing through extensive logs. This is a transformative shift in the way we troubleshoot: pinpointing issues and rectifying them effectively.
In Kubernetes, deploying applications involves interacting with its APIs using tools such as kubectl, Helm, and Terraform. Through these interactions, resources like Deployments and DaemonSets are scheduled, managing the lifecycle of the pods and containers that host applications.
A remarkable feature of Kubernetes is its extensibility through operators. Operators are software extensions that leverage custom resources to manage applications and their components. Essentially, they automate repetitive tasks in Kubernetes that go beyond the default settings.
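As a generic sketch of the idea (a hypothetical API group and kind, unrelated to Lumigo), a custom resource lets you declare intent that an operator then reconciles:

```yaml
# Hypothetical custom resource: an operator watching this kind would
# continuously reconcile the cluster toward the declared spec.
apiVersion: example.com/v1
kind: Backup
metadata:
  name: nightly-backup
spec:
  schedule: "0 2 * * *"            # illustrative field
  target: s3://my-bucket/backups   # illustrative field
```

The operator, not the user, carries out the repetitive work of creating and maintaining whatever underlying resources the spec implies.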
To offer 1-click OpenTelemetry within a Kubernetes environment, we leveraged this mechanism and built our own Kubernetes operator. Our primary goal with the operator is not only to make it quick and easy to get up and running, but also to keep it easy to extend, both during everyday use and as new OpenTelemetry methodology surfaces.
The installation process is as simple as running a little helm:
helm repo add lumigo https://lumigo-io.github.io/lumigo-kubernetes-operator
helm install lumigo lumigo/lumigo-operator --namespace lumigo-system --create-namespace --set cluster.name=<cluster_name>
Then a little YAML configuration supplies a few variables as part of the deployment. Tracing all applications within a namespace requires just a single kubectl apply of a manifest that references your Lumigo token.
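Spelled out in full, the applied manifest pairs a Secret holding the token with a Lumigo custom resource that references it. The shape below follows the operator's documentation; the exact `apiVersion` and field names may vary between operator versions:

```yaml
# A Secret holding your Lumigo token...
apiVersion: v1
kind: Secret
metadata:
  name: lumigo-credentials
stringData:
  token: <your-lumigo-token>
---
# ...and a Lumigo resource telling the operator to trace this namespace.
apiVersion: operator.lumigo.io/v1alpha1
kind: Lumigo
metadata:
  name: lumigo
spec:
  lumigoToken:
    secretRef:
      name: lumigo-credentials
      key: token
```

Applying this in a namespace (e.g., `kubectl apply -n <namespace> -f lumigo.yaml`) is the single step that turns tracing on for that namespace's workloads.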
And that’s it!
Once these resources are deployed, the Lumigo Kubernetes operator instantly begins tracing the applications running in the namespace.
The crux of the operator's magic lies in its injector. While having the correct tracer in a container is essential, it is insufficient without a way to activate it. Our goal is to provide seamless, code-modification-free tracing. Using environment variables tailored to each runtime (e.g., Node.js, Java, or Python), our operator ensures the appropriate variables are set without confusion or unnecessary overlaps.
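To make the per-runtime idea concrete, here is an illustrative container `env` fragment using activation variables common to OpenTelemetry-style auto-instrumentation; the operator's actual variable names and tracer paths may differ:

```yaml
# Illustrative only: common per-runtime activation mechanisms, not the
# operator's exact output.
env:
  - name: NODE_OPTIONS              # Node.js: preload the tracer module
    value: "--require /tracer/node/index.js"
  - name: PYTHONPATH                # Python: put the tracer on the import path
    value: "/tracer/python"
  - name: JAVA_TOOL_OPTIONS         # Java: attach the tracer as a Java agent
    value: "-javaagent:/tracer/java/agent.jar"
```

Because each runtime honors a different variable, the injector has to know which one to set, and to merge rather than clobber any values the application already defines.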
Looking deeper at the mechanisms behind our Kubernetes operator, the fundamental requirements for tracing an application within a container are having the necessary tracer files available and a means of activating the tracer. We achieve both by modifying the Kubernetes PodSpec.
For instance, a simple Python pod scheduled in Kubernetes appears differently post-injection by the Lumigo Kubernetes operator: the modifications add the tracer files and the environment variables that activate them.
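As a sketch of the shape such an injection takes (illustrative image names and paths, not the operator's exact output), a pod that originally declared only its application container might come back looking roughly like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-python-app
spec:
  # Added by injection (illustrative): an init container copies the
  # tracer files into a volume shared with the application container.
  initContainers:
    - name: tracer-init
      image: example/tracer-distro:latest   # hypothetical image
      command: ["cp", "-r", "/tracer/.", "/target"]
      volumeMounts:
        - name: tracer
          mountPath: /target
  containers:
    - name: app
      image: python:3.11
      env:
        - name: PYTHONPATH          # activates the tracer, no code changes
          value: "/tracer/python"
      volumeMounts:
        - name: tracer
          mountPath: /tracer
  volumes:
    - name: tracer
      emptyDir: {}
```

The application image itself is untouched; only the pod's declared spec changes.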
Furthermore, our injector operates on the principle that most runtime environments rely on shared libraries, particularly the standard C library, libc. This library provides a common API for operations such as reading and writing files and sockets and, importantly, accessing the process environment.
To further investigate the details behind our Kubernetes Operator, see the blog post The Magic Behind the Lumigo Kubernetes Operator. Additionally, you can test the Operator out on your Kubernetes deployments by visiting the Kubernetes Operator documentation.
As the industry continues to shift towards more microservice-based applications, the complexity of debugging these environments grows. Lumigo addresses this head-on, seamlessly blending our expertise in auto-instrumentation with the power of OpenTelemetry. Our 1-click OpenTelemetry isn’t just about ease; it’s a testament to our commitment to help swiftly identify and debug microservice issues, ensuring efficient troubleshooting every step of the way.
To get started using Lumigo in your microservices environment, create a free Lumigo account today.