Deconstructing Distributed Tracing

Distributed tracing is a powerful technique that allows you to track the flow and timing of requests as they navigate through a system. By linking operations and requests between multiple services, distributed tracing provides valuable insights into app performance and helps identify bottlenecks. In this talk Lazar will explain the concept of Distributed Tracing by walking you through how monitoring tools build tracing solutions.

This talk was presented at React Day Berlin 2023. Check out the latest edition of this React conference.

FAQ

Distributed tracing is a technique used to track the flow and timing of requests and operations within a system, particularly useful in full stack and microservice applications. It helps in understanding system performance and identifying bottlenecks.

Distributed tracing was developed as a response to the limitations of traditional debugging tools like log files, which became insufficient as software architectures evolved into more complex, asynchronous, and distributed systems.

Distributed tracing works by creating a 'trace' for each request, which follows the request through the system and captures data about various operations or 'spans'. Each span records information such as start time, end time, and parent-child relationships among spans.

The key components of a distributed tracing system include traces, spans, and trace context. Traces represent the entire operation flow, spans represent individual units of work, and the trace context helps in linking spans across different services or containers.

Distributed tracing improves debugging by providing a detailed and structured view of the operations across different services and machines. It allows developers to easily identify performance issues and understand complex interactions within their applications.

In distributed tracing, spans are the fundamental units that describe specific operations, such as an HTTP request or a function call. Spans can create child spans, forming a hierarchical structure that mirrors the application's operations.

A trace context in distributed tracing is a mechanism that concatenates the trace ID and the ID of the last span into a string. This string can be transferred across different backends or processing units to continue the trace seamlessly.
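
To make that concrete, here is a minimal TypeScript sketch of what such a concatenated string could look like, loosely modeled on the W3C traceparent header (version-traceId-parentSpanId-flags). The exact encoding and the function names are illustrative assumptions, not the specific format discussed in the talk.

```ts
// A trace context serialized as a single string, loosely modeled on the
// W3C traceparent header: version-traceId-parentSpanId-flags.
type TraceContext = { traceId: string; parentSpanId: string };

// Build the string that travels with an outgoing request, e.g. as an HTTP header.
function toTraceContextHeader(ctx: TraceContext): string {
  return `00-${ctx.traceId}-${ctx.parentSpanId}-01`;
}

// On the receiving service, parse the header and continue the same trace.
function fromTraceContextHeader(header: string): TraceContext | null {
  const parts = header.split("-");
  if (parts.length !== 4) return null;
  const [, traceId, parentSpanId] = parts;
  return { traceId, parentSpanId };
}
```

The receiving service keeps creating spans under the parsed trace ID, with the parsed span ID as the parent, which is what lets the trace continue seamlessly across backends.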

As software architectures evolved into using microservices, asynchronous programming, and containerization, traditional debugging methods became inadequate. Distributed tracing emerged as a necessary tool to handle the complexity and distributed nature of modern applications.

Lazar Nikolov
8 min
12 Dec, 2023

Video Summary and Transcription

Distributed tracing is a powerful technique for tracking requests and operations in a system, especially in full stack and microservice applications. Rebuilding distributed tracing from scratch introduces the concepts of traces and spans to capture debugging data. Enhancements include tags and a status field for richer analysis, and a trace context that lets a trace continue across services.

1. Introduction to Distributed Tracing

Short description:

Distributed tracing is a powerful technique that helps track the flow and timing of requests and operations in a system. It is especially useful for full stack and microservice applications, allowing for better understanding of system performance and identification of bottlenecks. The technique has been around since the early 2000s but gained popularity in the 2010s. As libraries and frameworks evolved, so did debugging tools, from logs in Apache Server to handling multiple requests in a single process with separate threads. With advanced concurrency, frameworks like Node.js allow requests to start and finish in different threads.

Deconstructing distributed tracing. Hello, everyone. My name is Lazar Nikolov, and I am a developer advocate at Sentry. In today's talk, we're going to talk about distributed tracing. First, I'll explain what it is. Then we're going to get into a little history of debugging tools to find out why distributed tracing came to exist in the first place. And then, in order to understand it better, we're going to rebuild distributed tracing from scratch, or at least the concept of it.

All right, so let's dive in. Distributed tracing is a powerful technique that allows you to track the flow and timing of requests and operations as they move through your system. This is especially useful for full stack and for microservice applications. Distributed tracing helps you understand the performance of the system and also identify any bottlenecks. It's especially useful for debugging complex and weird bugs like race conditions that require a lot more than just a console.log and a stack trace. It's not new by any means. There are white papers mentioning tracing since the early 2000s, but it got popularized during the 2010s. So to understand why it exists, we need to go back in time.

As our libraries and frameworks evolved, so did our debugging tools. For example, back in the early days of Apache Server, logs were one of the few methods for debugging. As a request arrived, Apache forked a child process and handled the request in it. If you wanted to debug what happened during that specific request, you could just pull the process's logs and you'd see the whole operation flow. And that worked. We were happy. Then we got basic concurrency. Think of IIS and ASP.NET. Instead of forking a process for every request, we started handling multiple requests in a single process, but in separate threads. Logs were still a good debugging method, but to isolate a request's logs, we needed to prefix them with the thread name and then filter the log messages based on it. Not a big deal, but we made it work. Then we got advanced concurrency. Our frameworks evolved into async, multithreaded, futures-and-promises, event-loop-based frameworks. Think Node.js. So now our request can start on one thread but finish on a different one, going through many other threads along the way.

2. Reinventing Distributed Tracing

Short description:

Prefixing logs with a unique ID for each request no longer solves the problem in a distributed system. With the rise of containerized services, backends are spread across multiple machines, making it difficult to trace operations. To address this, we reinvented distributed tracing from scratch. We introduced the concept of a trace, which follows a request and captures debugging data. Within the trace, we have spans that represent the smallest unit of work, such as an HTTP request or a function call. Spans can create child spans, allowing us to mirror the structure of our software. Each span has a unique ID and holds data like its parent ID.

Prefixing them with the thread name doesn't really solve our problem anymore. We need to prefix them with something unique to the request itself, and that's what we did: we generated a unique ID for each request and prefixed our logs with it.
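
As a rough illustration of that idea, here is a minimal Node.js-style sketch in TypeScript: a unique ID is generated per request, and every log line produced while handling that request is prefixed with it. The server and logger here are hypothetical, not code from the talk.

```ts
import { randomUUID } from "node:crypto";
import http from "node:http";

http.createServer((req, res) => {
  // A unique ID generated for this specific request.
  const requestId = randomUUID();

  // Every log line produced while handling this request carries the same
  // prefix, so it can be filtered out later even when requests interleave.
  const log = (message: string) => console.log(`[${requestId}] ${message}`);

  log(`incoming ${req.method} ${req.url}`);
  // ... actual request handling would happen here ...
  log("request handled");
  res.end("ok");
}).listen(3000);
```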

But our frameworks didn't stop evolving. About 10 years ago, Docker and AWS paved the way for containerized services. Now our backends don't even live on one single machine. Each container and microservice handles multiple requests and produces its own logs, so our logs are all over the place. It became very hard to make sense of the operation flow, so we needed a better debugging tool that could trace operations as they jump between containers and services. That's when distributed tracing became a necessary tool for debugging.

In order to understand how it works, we're going to reinvent it from scratch. Since our backends now have a very distributed nature, we need to define a vehicle for each request that will follow it around and capture debugging data along the way. Let's call that a trace. The trace starts when the operation flow starts (on the frontend, for example) and it gets a unique ID.

If we think about logs, they usually tell us what happened at a particular time. They try to mimic the structure of our code. So let's invent that now. Let's invent something that's going to describe the smallest unit of work, like an HTTP request or a function call or anything specific that our software does at a specific time. We're going to call that a span, and we're going to create one immediately when the trace starts. That's going to be our root span.

So just like logs, spans are going to mimic the structure of our software. But since we're reinventing it, let's make them much smarter than simple messages. Since spans are the smallest unit of work, like a single function, and we know that one function can invoke another function, which in turn can invoke a third function, we're going to design our spans so they can create child spans, which can create their own child spans, and so on. Now we can really mirror the structure of our software. We have a span hierarchy, but we need to remember which span is a child of which span. To do that, we need something to identify each span, so we'll assign an ID to each span as we create it. We also need to save the parent span's ID. So let's create a space inside each span where it can hold data like its own ID and its parent's ID.
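
Putting the last few paragraphs together, here is a minimal TypeScript sketch of the concept: a trace with a unique ID, a root span created when the trace starts, and spans that carry their own ID and their parent's ID so they can form a hierarchy of child spans. The class and function names are illustrative, not a real tracing SDK.

```ts
import { randomUUID } from "node:crypto";

// A minimal sketch of the structures described above. Each span knows its
// trace, its own ID, its parent's ID, and when it started and ended.
class Span {
  readonly spanId = randomUUID();
  readonly startTime = Date.now();
  endTime?: number;

  constructor(
    readonly traceId: string,
    readonly parentSpanId: string | null,
    readonly description: string,
  ) {}

  // Child spans inherit the trace ID and point back to this span.
  startChild(description: string): Span {
    return new Span(this.traceId, this.spanId, description);
  }

  finish() {
    this.endTime = Date.now();
  }
}

// The trace starts when the operation flow starts (e.g. on the frontend)
// and immediately creates the root span, which has no parent.
function startTrace(description: string): Span {
  const traceId = randomUUID();
  return new Span(traceId, null, description);
}

// Usage: the parent/child spans mirror the call structure of the code.
const root = startTrace("GET /checkout");
const query = root.startChild("db.query: load cart");
query.finish();
root.finish();
```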
