Can Machines Learn Bug Language?


Is communication with bugs possible? It is important to understand them, because there is valuable information underlying software bugs, and mining it promises quality improvements. Let's talk about how we can analyze reported issues to improve the future of the product by means of Artificial Intelligence.

This talk has been presented at Productivity Conf for Devs and Tech Leaders, check out the latest edition of this Tech Conference.

FAQ

Mesut Durukal discusses the importance of analyzing software bugs to improve product quality and the role of machine learning in enhancing this process.

Analyzing software bugs is crucial because it helps identify improvement points, understand root causes, and enhance both software quality and the development process.

Machine learning can automate the analysis process, identify patterns, predict bug severity, and provide insights that are consistent and based on past data.

Key factors include the visibility, severity, and criticality of bugs, as well as the environment in which they occur and their root causes.

Escaped bugs are those that make it into the production environment. They are important because they are costly to fix, and identifying their root causes can prevent future occurrences.

Preparations include optimizing the project management environment, automating queries, and creating dashboards for better visualization and understanding of data.

Dashboards provide visual representations of bug data, making it easier to identify trends, distributions, and potential areas for improvement.

Proper communication, akin to asking the right questions, allows for a deeper understanding of bugs, leading to more effective solutions and process improvements.

The 'airport analogy' emphasizes the importance of clear communication, proper data formatting, and noise reduction for effective bug analysis.

Automation streamlines the bug analysis process by executing queries and calculations, thus saving time and reducing manual effort.

Mesut Durukal
27 min
27 Mar, 2025

Video Summary and Transcription
In this Talk, the speaker discusses the importance of analyzing bugs in software development. Analyzing bugs goes beyond just counting them; it helps improve processes, understand root causes, and gain insights through machine learning. The Talk emphasizes the need for proper communication with bugs and the importance of preparing the bug analysis environment. Automation and visualization can improve bug analysis efficiency, and a comprehensive analysis of different metrics is necessary to gain insights for improvement. The Talk also highlights examples of bug analysis and the use of machine learning for pattern recognition. Bug severity estimation using machine learning algorithms and the use of clustering for bug analysis are also mentioned. Overall, the Talk provides valuable insights on bug analysis and its role in improving software quality and productivity.

1. Introduction to Bug Analysis

Short description:

In this session, I will talk about the bugs that we might have in our product or the software that we are developing. Bugs are direct indicators of quality, but analyzing them goes beyond just counting. We will discuss how to analyze bugs and collect insights through machine learning.

Hello, my name is Mesut Durukal. I'm a Software Quality Assurance Automation Engineer, and in this session I will talk about the bugs that we might have in our product or the software that we are developing. There are several things that relate to software or product quality, but bugs, issues, vulnerabilities, the kinds of things we can improve in the product we deliver to customers, are among the most direct indicators of quality, right?

Because the more bugs we have, probably the lower the quality. I'm saying probably, because that's not always the case. Only checking the number of bugs might be misleading. Sometimes we might have very few bugs, but they might be very critical, vital, and very visible. On the other hand, there might be several minor issues that are less critical. So we already started talking about the importance of analyzing the bugs, right? Not only the number, but also visibility, severity, criticality, lots of different things. So we will discuss in detail how we can analyze bugs to collect some insights about the improvement points. So this is what we will do.

And how will we do it? By getting some help from the machines, by utilizing machine learning. Why? Talking about productivity and efficiency: how easily we can maintain and finish our tasks to deliver our products. Saving time is very important, and doing things more efficiently is very important. So this is what we will discuss in depth. What we will do is analyze the bugs and try to collect some insights from them, and how we will do it is by means of machine learning. We will relate this machine learning to our biological learning as well. I'll give some examples, so let's get started by discussing in detail.

2. Analyzing Bugs: Improving Quality and Processes

Short description:

Resolving bugs improves short-term quality, but analyzing them goes beyond fixing software. It helps improve processes and understand root causes. By communicating with bugs and asking the right questions, we can gain insights and improve productivity. Building proper communication with bugs is crucial for long-term improvement.

So what we are doing by resolving bugs is, of course, momentarily improving the quality, right? If we resolve a bug, it means we are getting rid of some problem. This is the short-term goal, or the short-term win. How about room for improvement in the long term? If we can somehow draw improvements from the bugs in the long term, we will not be improving only the software itself; we can also improve our processes for developing that software or product. If there is something we can do in a better way to develop that software, then why not fix the way we develop the software instead of just fixing the software? So this is collecting insights and trying to understand not only how to fix the bug, but also why the bug happened: the root cause, other related issues, some hidden patterns. So you realize we already started talking about patterns and models, right? We're building a model. These are the basic principles of machine learning. We will discuss them in detail again, but these two dimensions are the big wins we can get from analyzing the bugs.

What does analyzing the bugs mean? Actually, it means building proper communication with them, right? Talking to them. Asking a bug proper questions: why was it raised? In what environment was it raised? What other issues is it related to? What other issues is it similar to? If we ask the bugs the proper queries, we will get some answers. So it's a kind of communication, right? If we talk in the correct language, we will get a lot of answers from the bug. So this is talking to bugs. I have a lot of stories about building proper communication with bugs. Once I had a bug which was alive for a very long time, an old, stale bug. When I asked questions like why it was not resolved for so long, I learned a lot of things I could improve in my process. Why was it not resolved? Because there were testability issues: it was very difficult to reproduce the bug. There were various difficulties and challenges. So by analyzing them, I tried to find solutions to get rid of these root causes. And there were other stories, like when I found out how an escaped bug was getting into production. An escaped bug means one we normally try to keep in the testing or development environment only; we try to close the gates to the production environment, but some of them somehow find a way into production. When I asked the correct questions, I figured out how they managed to escape to the production environment, and then I closed those gates as well. So again, by building good and proper communication with bugs, we can improve our productivity. Because again, we are talking not only about short-term wins, but also about maintaining long-term improvement: improving the processes, the ways to build, develop, and deliver our products.

Let me go over an analogy to better explain the situation. Imagine a scenario where we are waiting for our flight at the airport. We have our flight and we know our gate, but the airline decides to change the gate. So this is information that I have to receive.

3. Preparing Bug Analysis: Environment and Monitoring

Short description:

To analyze bugs effectively, we need visibility, proper context, and a noise-free environment. Preparations involve adapting and optimizing the bug management environment, including customizing the bug lifecycle. Monitoring activities, such as checking bug status, are also important.

The managers should distribute, spread this news to all passengers waiting at the gate. So what do we need? First of all, they need some screens or loudspeakers where they can show the information or announce it, right? If not, I will not see the message or hear the message. So we are talking about a lack of visibility, right? Or a lack of transparency. This means the environment or the tooling that we are using is very important.

Secondly, if they have screens or loudspeakers, but if they are showing or telling the message in a language that I don't understand. Like, okay, there's an announcement, but I don't know what they are saying. It is a language that I don't understand. In this case, what's the problem? The problem is I don't understand because the data was not transformed into the proper context that I can understand, right? So it's this kind of bad formatting or a different protocol that they are using that I don't understand. So this is the second problem.

And lastly, if they have all the tooling and the environment and if they are transforming data into a protocol that I can understand, but still there might be an issue, which is if the environment is too noisy, even though I can understand the language, I don't get it. I don't hear what they are saying. So it is the noise. It is the missing cleaning of the data, right? So data mining, getting the correct context from all the raw information, raw data. So all this kind of analogy will give us an understanding of what we should do before we start analyzing the bugs.

So first of all, preparations. What kind of preparations can we do? Adapting and optimizing our environment. For example, on the slide I have a depiction of the lifecycle of bugs. Normally, in my project management tool, like Jira or anywhere else, if I go with the default lifecycle, the default setting, it would most probably be new, in progress, closed, pending for review, something like that: three or four statuses. But if I add some more, if I customize and optimize my environment, I can increase the granularity. Not only new, in progress, and closed, but also fixed, assigned, deferred, rejected; I can add lots of different status information and have more granularity. If a bug was never in progress, I can check the reason. Maybe it was rejected; that's why it was not in progress, it was just put into rejected status. Or it was deferred: it will be fixed, but not now, because it is not very high priority, right? There are more urgent tasks, so it was deferred. So optimizing and customizing the environment is the first thing that we can do.
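The extended lifecycle can be sketched as a transition table. The status names and allowed transitions below are illustrative assumptions, not any tool's defaults; in Jira this would live in the workflow configuration rather than in code:

```python
# Sketch of a customized bug lifecycle with finer-grained statuses.
# Status names and transitions are illustrative, not a specific tool's defaults.
ALLOWED_TRANSITIONS = {
    "new":            {"assigned", "rejected", "deferred"},
    "assigned":       {"in_progress", "deferred"},
    "in_progress":    {"fixed", "deferred"},
    "deferred":       {"assigned"},
    "fixed":          {"pending_review"},
    "pending_review": {"closed", "in_progress"},  # reopened if validation fails
    "rejected":       set(),
    "closed":         set(),
}

def can_transition(current: str, target: str) -> bool:
    """Return True if the workflow allows moving a bug from current to target."""
    return target in ALLOWED_TRANSITIONS.get(current, set())

print(can_transition("new", "rejected"))     # True: finer-grained than just "closed"
print(can_transition("rejected", "closed"))  # False: rejected is terminal here
```

The extra statuses are what make the later analysis possible: a query can now distinguish a bug that was rejected from one that was deferred, instead of lumping both under "not in progress".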

What else? We are doing monitoring activities, right? We are checking the status of the bugs. How can we do that? Of course, I can open my project management tool and send some queries, like just open the UI and navigate to the correct URL.

4. Bug Analysis: Automation and Visualization

Short description:

Automating the bug analysis process through code and using dashboards to visualize data can improve efficiency and understanding. Analyzing bug distribution by status and severity can help prioritize bug fixing.

This is what we can do. But this is the manual way. How about automating this process? Can I? Of course. I can develop code which makes this query instead of me; instead of navigating to the UI and entering the correct query, I just develop the code and let it run on the jobs that we prepared. The code can ping the servers, collect the response, and parse it. This is a much smarter way compared to what I'm doing manually, right? With this code, I can parse the response and get when the issue was created; I can read the creation timestamp and when it was fixed. And I can do some calculations in the code, like subtracting the creation date from the fix date. What I get is the duration of the fix, right? From creation to fix date, I can check how much time I needed to fix this bug. Then I can take a lot of bugs, a lot of issues, and calculate the average, the standard deviation, the maximum time to fix a bug, the minimum time. So I can get lots of different statistics, and we will discuss how to use this information later on. Automation was the second preparation we can do before starting our analysis.

And lastly, we can use the dashboards to improve our visual understanding. Like if we work only with numbers or raw data, of course it is still beneficial. But if we transform this raw information to some visual graphs, dashboards, it will be way easier to understand what's happening in our environments. Like if there is a trend, we can see if the trend is increasing or decreasing. We can directly see from the charts, right? It is way easier to understand. So you can use different monitoring graphs or dashboards. In lots of different project management tools, it's very easy to create some dashboards. But otherwise, it's possible to use some third-party tools to create these kind of dashboards.

In the first, introductory section, we already talked about the importance of analyzing the bugs and the preparations we can do before starting this analysis. So in the second section, let's start talking about what kind of information we can mine from this analysis. I will show lots of actual project data and examples and give you some idea of what kind of statistics or metrics you can track for your monitoring activities. First of all, you can show the distribution of bugs over their status or severity. What percentage of bugs are still in progress? What percentage are already closed or fixed? You can show this percentage or distribution for each sprint. And if the percentage of bugs which are not fixed yet, which are still waiting to be fixed, is increasing, then you can take some action items. Maybe you can freeze new development activities and focus first on fixing the current bugs. And the severity: maybe most of the bugs I'm finding are minor issues.
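The per-sprint status distribution can be sketched with the standard library. The bug records below are illustrative stand-ins for exported tracker data:

```python
from collections import Counter

# Illustrative per-sprint bug records; real data would come from your tracker.
bugs = [
    {"sprint": 12, "status": "in_progress"}, {"sprint": 12, "status": "closed"},
    {"sprint": 12, "status": "closed"},      {"sprint": 13, "status": "in_progress"},
    {"sprint": 13, "status": "in_progress"}, {"sprint": 13, "status": "closed"},
]

def status_share(bugs, sprint):
    """Percentage of bugs in each status for one sprint."""
    counts = Counter(b["status"] for b in bugs if b["sprint"] == sprint)
    total = sum(counts.values())
    return {status: round(100 * n / total) for status, n in counts.items()}

print(status_share(bugs, 12))  # {'in_progress': 33, 'closed': 67}
print(status_share(bugs, 13))  # {'in_progress': 67, 'closed': 33}
```

Comparing these dictionaries sprint over sprint is the trend the talk describes: if the in-progress share keeps growing, that is the signal to pause new development and fix the backlog.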

5. Bug Analysis: Metrics and Insights

Short description:

Concentrating on a single metric may not provide useful information. A comprehensive analysis of different monitoring activities is necessary to gain insights for improvement. The most important metrics depend on your purpose and priorities.

Then maybe this is something that I can improve in my process, right? Maybe I'm concentrating on the wrong things, and if I change my perspective, I can find more severe and more critical bugs. So showing this distribution might give some ideas. Again, I will show a lot of graphs, but let me remind you of one important note: if you concentrate solely on one metric, like just the percentage per status, it might not give useful information. 50% of my bugs are in progress, so what? It might not have a proper, decent meaning on its own. But if we combine all these different monitoring activities into a comprehensive analysis, then we can get some insights about improvement. And which metric is the most important? Again, I cannot give a clear answer. It depends on your purpose, your priorities, your requirements. What's your purpose? If your purpose is to minimize the number of escaped bugs going to the production environment, then concentrate on the related metrics. If instead your purpose is to minimize the time to fix bugs, then concentrate on those metrics. I will show a lot of them, and it is your task to pick the most relevant metrics, the ones that help you understand the solutions or the ways to achieve your purposes and requirements.

6. Bug Analysis: Examples and Machine Learning

Short description:

Showing the distribution of bugs per service or module can help identify issues with specific components. Analyzing bugs by type can lead to improvements in documentation and security tests. Monitoring the progress of bug detection and resolution duration can uncover underlying issues and guide appropriate action. Tracking the age of bugs and addressing escape bugs in production are crucial. Utilizing machine learning for bug analysis can enhance productivity and pattern recognition.

The second example I will show is the distribution of bugs per service or per module. In a backend I can have lots of different microservices, and if I show the distribution, there might be a case where 50%, half of the bugs, are coming from one specific service. What does it mean? There is something wrong with this service, right? There are many services, but most of the bugs are coming from only one of them. Or, if we are talking about a UI product, a web application, let's say most of the bugs are from the login screen; not from the other pages and screens, but just from the login screen, which is supposed to be a tiny screen, yet a lot of bugs are coming from it. There is something wrong there, right? Go and check your tests, whether they are capable of finding the bugs or not. So this might give you some ideas.

Similarly, we can show the distribution of bugs per type: whether they are functional or security bugs, reliability or understandability bugs, so that we can improve our documentation or our security tests and take related action items. What else? We can show the bug detection progress. This sprint we detected 5 bugs, and next sprint we detected 20 bugs. Again, focusing on one very narrow metric, like this sprint we found 5 bugs, so what? That doesn't have a proper meaning on its own, right? But if I combine it with the other monitoring activities, like normally, on average, I find 5 or 6 bugs every sprint, but in this specific sprint I find 30 bugs, then obviously there is something wrong. Maybe I forgot to execute the test cases. So I can go and check my CI/CD jobs and pipelines, whether the tests were executed or not. Similarly, I can show the resolution duration as well. Not the detection, but the resolution duration.
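The anomaly check described above, an unusually high bug count in one sprint, can be sketched as a simple threshold on historical statistics. The per-sprint counts are invented for illustration:

```python
from statistics import mean, stdev

# Bugs detected per sprint; the spike in the last sprint is the anomaly the
# talk describes (e.g. tests silently not running). Numbers are illustrative.
bugs_per_sprint = [5, 6, 4, 5, 7, 5, 30]

history, latest = bugs_per_sprint[:-1], bugs_per_sprint[-1]
# Flag the latest sprint if it is more than 3 standard deviations above average.
threshold = mean(history) + 3 * stdev(history)

if latest > threshold:
    print(f"Sprint anomaly: {latest} bugs vs threshold {threshold:.1f}; check CI/CD pipelines")
```

The 3-sigma cutoff is one common choice, not a rule; the point is that a single sprint's count only becomes meaningful against the accumulated history.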

And maybe combining these two, created versus resolved, I can see the gap in between, right? If the gap is increasing, it means the bugs I am finding are accumulating, while the bugs being resolved are progressing at a relatively lower pace. Then the technical debt will increase. So it might be time to take some action, freeze the development activities, and focus on fixing the bugs. And not only charts, pie charts, or trends: I can also show some tables, for example the ages of the bugs, like how long they have been waiting for a fix, or how long they have been fixed and waiting for validation. This time I can go and check with the QA engineers, right? The development team fixed it; why aren't you verifying or validating it yet? Last but not least, we should talk about escaped bugs. I have used this term a few times, but escaped bugs are very important, because it is super expensive if a bug is already in production. If I find it in testing, it is relatively easy to handle, right? Because the function is not live in production yet, I can easily change the architecture or the design. But if it's already in production, it will be too late, too expensive. So please consider monitoring or tracking these escaped bugs and try to understand the root causes, why they escaped to the production environment. In the last section, one of the most important things is doing all this analysis and improving our productivity by using machine learning. As I said in the beginning, it is very similar to our biological learning, right? How do we learn patterns? If I give you a Google Sheet or Excel sheet, or any type of document where I have lots of bugs, what can you do? You can observe, right? For example, you can see the author of the bug and the title of the bug, and you can make some observations, like: if this person is opening a bug, most of the time it is a very critical one.

7. Bug Analysis: Machine Learning and Preprocessing

Short description:

Observing and collecting past experiences to generate a model and predict future samples is similar to biological learning. In bug triage, machines can determine severity more consistently than humans by analyzing past bug data. Preprocessing is essential to clean the data and remove noise.

That is already an observation. Or you might realize that if there is a "security" word in the title, then most probably it is a very critical bug. So this is our biological learning: observing and collecting past experiences, generating a model in our mind, in our brain, and predicting future samples based on this model. Machine learning is very similar. We let the model be generated by the machines, and once we have the model, we predict future samples with it. Let me show you a real-life scenario; it will be much more obvious and easy to understand.

So in this example, what I did is export all the software issues that I have in my project, which was almost 900 bugs, into an Excel file. I have the description column and then the severity or criticality levels, which run from severity level 1 to 4; there are four different levels. My purpose is to guess, whenever I have a new bug, what its severity would be. Normally what we do is called bug triage, right? A few developers come together and decide the severity of the bug: should it be severity 1, 2, 3, whatever. But sometimes it is not consistent, because for me it is severity 1, while for another colleague it is severity 2 or 3. For machines it will be much more consistent, because machines can observe all the other samples and compare this new sample against all the past experience. For machines, analyzing all 900 previous bugs is super fast; it happens in seconds or milliseconds. But for us human beings, analyzing 900 previous bugs is not feasible, right? We don't have enough time to do it. So let's see how we can let machines do this task for us. First of all, we do the preprocessing. Remember the airport analogy: cleaning, getting rid of the noise. If the environment is too noisy, it is misleading. So the first step is to preprocess: get rid of the unnecessary words and other formatting issues.
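A minimal preprocessing sketch under these assumptions: lowercasing, stripping punctuation, and dropping a small, invented stopword list. Real pipelines typically take the stopword list from a library such as NLTK or scikit-learn:

```python
import re

# Tiny illustrative stopword list; real pipelines use a library's list.
STOPWORDS = {"the", "a", "an", "is", "in", "on", "when", "it"}

def preprocess(title: str) -> list[str]:
    """Lowercase, strip punctuation, and drop stopwords from a bug title."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return [w for w in words if w not in STOPWORDS]

print(preprocess("The app CRASHES when uploading a file!"))
# ['app', 'crashes', 'uploading', 'file']
```

This is the "noise reduction" step from the airport analogy: what remains is only the signal the model should learn from.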

8. Bug Severity Estimation with Machine Learning

Short description:

In machine learning, words are converted to numbers using a bag of words approach. Different classification algorithms can be used to determine bug severity. The accuracy rate can be improved by merging classes and leveraging machine analysis before making a final decision.

Then, in machine learning, we need to do feature extraction, right? Words are things that we can understand as human beings, but machines understand numbers; machine language is ultimately based on binary values, zeros and ones. So we should somehow convert these words or sentences into numbers. In this specific scenario, I used a bag-of-words approach, which is simply counting the words in a sentence. This means for each sentence I get some numbers, that is, features for each sentence. And then, of course, I ran different classification approaches: KNN (K-Nearest Neighbors), Naive Bayes, and others, such as K-Means clustering for grouping. I can pick the algorithm giving the maximum performance. So let me show you the results.
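A minimal stdlib sketch of this idea: hand-rolled count vectors and a simple nearest-neighbour vote standing in for the real classifiers. The bug titles and severities are invented, and a real experiment would use scikit-learn's `CountVectorizer` with KNN or Naive Bayes classifiers:

```python
from collections import Counter

# Invented training data: (bug description, severity class).
train = [
    ("security token leaked in logs", 1),
    ("crash on login with sso", 1),
    ("typo in settings page label", 4),
    ("button misaligned on mobile", 4),
]

# Bag of words: the vocabulary is every word seen in training.
vocab = sorted({w for text, _ in train for w in text.split()})

def vectorize(text: str) -> list[int]:
    """Turn a description into a count vector over the training vocabulary."""
    counts = Counter(text.split())
    return [counts[w] for w in vocab]

def predict(text: str) -> int:
    """1-NN vote: pick the severity of the most similar training sample."""
    v = vectorize(text)
    def similarity(sample):
        return sum(a * b for a, b in zip(v, vectorize(sample[0])))
    return max(train, key=similarity)[1]

print(predict("security hole in login"))  # 1: overlaps the security/login samples
```

The vector for each sentence is exactly the "features" the talk mentions: one number per vocabulary word, ready for whichever classifier performs best.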

For the bug severity estimation, in the first approach, where I have classes 2, 3, and 4, I got a 73% accuracy rate; this is the confusion matrix. Let me quickly remind you of the distribution of the bugs across classes 2, 3, and 4. It is a little unbalanced, right? There's a bias toward class 2. So what I did is combine classes 3 and 4, because class 2 is the highest severity here. If it is not a class 2 bug, it means it is not blocking the release; it is not a release blocker. So the important decision is whether it is class 2 or not, where "not" means 3, 4, 5, whatever. So I merged all of those into one class, with class 2 as the other class. This means I have two classes for this classification problem. After doing this merging, I got an 82% accuracy rate. It's still not perfect, because most machine learning applications aim for over 90%, 95%, almost 100%, right? But this is a difficult problem. As we discussed in the beginning, even we humans sometimes decide differently: a bug that is very critical for one person might be less critical for someone else. So there is still room for improvement. But what we can conclude is that even if we cannot use it directly to decide the severities of the bugs, I can ease my task by letting the machines do the first round, and then, on top of those results, I as a human being can make the final decision.
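The effect of merging classes can be illustrated with a toy calculation. The label sequences below are invented and are not the talk's actual 900-bug dataset; they only show why collapsing 3 and 4 into one "not release-blocking" class tends to raise accuracy:

```python
# Invented true and predicted severity labels for eight bugs.
y_true = [2, 2, 3, 4, 2, 3, 4, 2]
y_pred = [2, 3, 3, 3, 2, 4, 4, 2]

def accuracy(true, pred):
    """Fraction of samples where the predicted class matches the true class."""
    return sum(t == p for t, p in zip(true, pred)) / len(true)

def merge(labels):
    # Class 2 (release blocker) stays; 3 and 4 collapse into class 3.
    return [2 if y == 2 else 3 for y in labels]

print(accuracy(y_true, y_pred))                # 0.625 with four-way labels
print(accuracy(merge(y_true), merge(y_pred)))  # 0.875 after merging 3 and 4
```

Confusions between 3 and 4 stop counting as errors once they share a class, which mirrors the jump from 73% to 82% reported in the talk.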

9. Bug Analysis: Clustering and Metrics

Short description:

Not only classification, but clustering can also be done using machine learning. Summarizing the discussed topics: building proper communication with bugs, preparing the environment, analyzing metrics, using dashboards, automating queries, and leveraging machine learning for pattern recognition. Thank you for listening!

And not only classification: I can also do clustering by means of machine learning. In this way, again, I can easily manage the bugs, collect some insights, and do the assignments and other tasks needed to resolve them.
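One way to sketch the clustering idea without any ML library is greedy grouping by word overlap. The titles and the Jaccard threshold are illustrative assumptions; a real pipeline would vectorize first and run k-means or hierarchical clustering:

```python
def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two titles (intersection over union)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def cluster(titles, threshold=0.3):
    """Greedily assign each title to the first cluster it resembles enough."""
    clusters = []
    for title in titles:
        for group in clusters:
            if jaccard(title, group[0]) >= threshold:
                group.append(title)
                break
        else:
            clusters.append([title])
    return clusters

bugs = [
    "login fails with expired token",
    "login fails with invalid token",
    "chart colors wrong in dark mode",
]
for group in cluster(bugs):
    print(group)
```

The two login bugs land in one cluster, which is the kind of grouping that makes bulk assignment and root-cause hunting easier.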

So, summarizing all the things that we discussed: it was dense, with a lot of information going on. Let's summarize what we discussed. First of all, building proper communication with bugs is very important; we can collect lots of different insights from them. To do that, please first prepare your environment. Do not rush. Customize your tools, make everything visible and transparent, and clean all the data you have. Then you can start analyzing.

Then, what type of metrics or what kind of lessons can you learn from the bugs? There are lots of different metrics. Decide what you want to do: what's your purpose, what's your requirement? Then pick the most relevant metrics. You can use different dashboards or graphs to show these metrics and check whether you are approaching your goals or not. And finally, for all these analysis tasks, you can of course automate your queries, and furthermore, you can use machine learning to reveal the underlying patterns in those bugs.

Machines can easily generate a model. If some of the bugs are piling up on a specific feature, the machine learning algorithm can easily notify you: please keep an eye on this feature, there might be a vulnerability. And we can easily get several improvement suggestions from these machine learning models. So it will improve our productivity and efficiency. Thank you very much for listening. I'm leaving my socials here; if you have any questions, please do not hesitate to reach out.

Check out more articles and videos

We constantly think of articles and videos that might spark Git people interest / skill us up or help building a stellar career

Modern Web Debugging
JSNation 2023JSNation 2023
29 min
Modern Web Debugging
Top Content
This Talk discusses modern web debugging and the latest updates in Chrome DevTools. It highlights new features that help pinpoint issues quicker, improved file visibility and source mapping, and ignoring and configuring files. The Breakpoints panel in DevTools has been redesigned for easier access and management. The Talk also covers the challenges of debugging with source maps and the efforts to standardize the source map format. Lastly, it provides tips for improving productivity with DevTools and emphasizes the importance of reporting bugs and using source maps for debugging production code.
Debugging JS
React Summit 2023React Summit 2023
24 min
Debugging JS
Top Content
Watch video: Debugging JS
Debugging JavaScript is a crucial skill that is often overlooked in the industry. It is important to understand the problem, reproduce the issue, and identify the root cause. Having a variety of debugging tools and techniques, such as console methods and graphical debuggers, is beneficial. Replay is a time-traveling debugger for JavaScript that allows users to record and inspect bugs. It works with Redux, plain React, and even minified code with the help of source maps.
AI and Web Development: Hype or Reality
JSNation 2023JSNation 2023
24 min
AI and Web Development: Hype or Reality
Top Content
This talk explores the use of AI in web development, including tools like GitHub Copilot and Fig for CLI commands. AI can generate boilerplate code, provide context-aware solutions, and generate dummy data. It can also assist with CSS selectors and regexes, and be integrated into applications. AI is used to enhance the podcast experience by transcribing episodes and providing JSON data. The talk also discusses formatting AI output, crafting requests, and analyzing embeddings for similarity.
From Friction to Flow: Debugging With Chrome DevTools
JSNation 2024JSNation 2024
32 min
From Friction to Flow: Debugging With Chrome DevTools
The Talk discusses the importance of removing frictions in the debugging process and being aware of the tools available in Chrome DevTools. It highlights the use of the 'Emulate a Focus Page' feature for debugging disappearing elements and the improvement of debugging tools and workflow. The Talk also mentions enhancing error understanding, improving debugging efficiency and performance, and the continuous improvement of DevTools. It emphasizes the importance of staying updated with new features and providing feedback to request new features.
Forget Bad Code, Focus on the System
React Summit US 2023
27 min
Top Content
Premium
Setting up the system and separating concerns are important in software development. Modular construction and prefab units are a new trend that makes construction quicker and easier. Architectural complexity can lead to a drop in productivity and an increase in defects. Measuring architectural complexity can help identify natural modules in the code. Best practices for avoiding architectural complexity include organizing code by business domain and using prop drilling. Atomic design and organizing a monorepo are recommended approaches for managing architectural complexity.
Confessions from an Impostor
JSNation 2022
46 min
Top Content
The Talk discusses imposter syndrome and reframes it as being a professional imposter. It emphasizes the importance of sharing and starting, embracing imposterism, and building inclusively for the web. The speaker shares personal experiences of being an imposter in various technical disciplines and highlights the significance of accessibility. The Talk concludes with the idea of building a collective RPG game to remove excuses for not making things accessible.

Workshops on related topic

React Performance Debugging Masterclass
React Summit 2023
170 min
Top Content
Featured Workshop
Ivan Akulov
Ivan’s first attempts at performance debugging were chaotic. He would see a slow interaction, try a random optimization, see that it didn't help, and keep trying other optimizations until he found the right one (or gave up).
Back then, Ivan didn’t know how to use performance devtools well. He would do a recording in Chrome DevTools or React Profiler, poke around it, try clicking random things, and then close it in frustration a few minutes later. Now, Ivan knows exactly where and what to look for. And in this workshop, Ivan will teach you that too.
Here’s how this is going to work. We’ll take a slow app → debug it (using tools like Chrome DevTools, React Profiler, and why-did-you-render) → pinpoint the bottleneck → and then repeat, several times more. We won’t talk about the solutions (in 90% of the cases, it’s just the ol’ regular useMemo() or memo()). But we’ll talk about everything that comes before – and learn how to analyze any React performance problem, step by step.
(Note: This workshop is best suited for engineers who are already familiar with how useMemo() and memo() work – but want to get better at using the performance tools around React. Also, we’ll be covering interaction performance, not load speed, so you won’t hear a word about Lighthouse 🤐)
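The `memo()` optimization the workshop refers to boils down to one idea: skip re-rendering when props are shallowly equal. A conceptual sketch in plain JavaScript (this is an illustration of the technique, not React's actual implementation):

```javascript
// Shallow equality: same keys, and each value passes Object.is.
function shallowEqual(prev, next) {
  const prevKeys = Object.keys(prev);
  const nextKeys = Object.keys(next);
  if (prevKeys.length !== nextKeys.length) return false;
  return prevKeys.every((key) => Object.is(prev[key], next[key]));
}

// A tiny "memoized component": re-runs render only when props change.
function memoize(render) {
  let lastProps = null;
  let lastResult = null;
  return function (props) {
    if (lastProps !== null && shallowEqual(lastProps, props)) {
      return lastResult; // cached: the expensive render is skipped
    }
    lastProps = props;
    lastResult = render(props);
    return lastResult;
  };
}

let renderCount = 0;
const renderBadge = memoize(({ label }) => {
  renderCount++;
  return `<span>${label}</span>`;
});

renderBadge({ label: "new" });
renderBadge({ label: "new" });  // same props: cached, no re-render
renderBadge({ label: "sale" }); // changed props: renders again
console.log(renderCount); // 2
```

This also shows why a fresh object or inline function prop defeats `memo()`: `Object.is` compares references, so a new reference on every render always looks like a change.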
Tracing: Frontend Issues With Backend Solutions
React Summit US 2024
112 min
Top Content
Featured Workshop
Free
Lazar Nikolov
Sarah Guthals
Frontend issues that affect your users are often triggered by backend problems. In this workshop, you’ll learn how to identify issues causing slow web pages and poor Core Web Vitals using tracing.
Then, try it for yourself by setting up Sentry in a ready-made Next.js project to discover performance issues including slow database queries in an interactive pair-programming session.
You’ll leave the workshop being able to:
- Find backend issues that might be slowing down your frontend apps
- Set up tracing with Sentry in a Next.js project
- Debug and fix poor performance issues using tracing
This will be a live 2-hour event where you’ll have the opportunity to code along with us and ask us questions.
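Enabling tracing in a Next.js project comes down to a small Sentry configuration file. A minimal sketch (the DSN is a placeholder and the sample rate is an arbitrary example; see the Sentry docs for the full options):

```javascript
// sentry.client.config.js -- illustrative setup, not the workshop's exact code.
import * as Sentry from "@sentry/nextjs";

Sentry.init({
  // Placeholder DSN: replace with your project's DSN from Sentry.
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0",
  // Fraction of transactions captured for tracing; 1.0 captures all of them.
  tracesSampleRate: 0.2,
});
```

With tracing enabled, slow spans such as database queries show up attached to the frontend transactions that triggered them, which is what makes the frontend-to-backend debugging in this workshop possible.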
React Performance Debugging
React Advanced 2023
148 min
Workshop
Ivan Akulov
Ivan’s first attempts at performance debugging were chaotic. He would see a slow interaction, try a random optimization, see that it didn't help, and keep trying other optimizations until he found the right one (or gave up).
Back then, Ivan didn’t know how to use performance devtools well. He would do a recording in Chrome DevTools or React Profiler, poke around it, try clicking random things, and then close it in frustration a few minutes later. Now, Ivan knows exactly where and what to look for. And in this workshop, Ivan will teach you that too.
Here’s how this is going to work. We’ll take a slow app → debug it (using tools like Chrome DevTools, React Profiler, and why-did-you-render) → pinpoint the bottleneck → and then repeat, several times more. We won’t talk about the solutions (in 90% of the cases, it’s just the ol’ regular useMemo() or memo()). But we’ll talk about everything that comes before – and learn how to analyze any React performance problem, step by step.
(Note: This workshop is best suited for engineers who are already familiar with how useMemo() and memo() work – but want to get better at using the performance tools around React. Also, we’ll be covering interaction performance, not load speed, so you won’t hear a word about Lighthouse 🤐)
The Clinic.js Workshop
JSNation 2022
71 min
Workshop
Rafael Gonzaga
Learn the ways of the clinic suite of tools, which help you detect performance issues in your Node.js applications. This workshop walks you through a number of examples, and the knowledge required to do benchmarking and debug I/O and Event Loop issues.
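The event-loop issues this workshop diagnoses usually come from synchronous work holding the loop. A toy probe that measures such blocking (an illustrative sketch, not part of the Clinic.js toolset):

```javascript
// Measure how long a synchronous task holds the event loop, in milliseconds.
function measureBlockingMs(task) {
  const start = process.hrtime.bigint();
  task();
  return Number(process.hrtime.bigint() - start) / 1e6;
}

// Simulated "accidentally synchronous" work, e.g. parsing a large payload
// on the main thread instead of streaming or offloading it.
const bigPayload = JSON.stringify({
  items: Array.from({ length: 50000 }, (_, i) => ({ id: i })),
});

const blockedMs = measureBlockingMs(() => {
  JSON.parse(bigPayload); // synchronous parse: nothing else can run meanwhile
});

console.log(`event loop blocked for ~${blockedMs.toFixed(2)}ms`);
```

Tools like Clinic.js automate this kind of measurement across a whole application run and point you at the offending call sites.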
Instrument, Monitor, Fix: A Hands-On Debugging Session
React Summit 2025
88 min
Workshop
Free
Lazar Nikolov
You just shipped a new feature. Tests passed. CI’s green. Everything’s deployed. Life’s good… until the alerts start rolling in. Users reporting “weird stuff happening”, UI glitches, vague “it seems slow” reports, and that dreaded “can’t login” message. Well, this is shaping up to be a bad day. But… maybe not.

Watch this hands-on session to learn how to build it, break it, debug it, and go from “no idea what’s wrong” to fixing issues, all in one go. Since we’re serious developers (obviously), we’ll use Next.js and:
- Set up Sentry from the ground up, including Errors, Session Replay, and Tracing
- Learn ways to use Replays to understand the real user experience, and how to use Tracing to debug application issues
- Leverage Sentry AI and the context of your application to understand what’s actually broken, and use Autofix to generate a fix, from root cause all the way to PR

Use this workshop guide to follow along: Quickstart Guide.
Solve 100% Of Your Errors: How to Root Cause Issues Faster With Session Replay
JSNation 2023
44 min
Workshop
Free
Ryan Albrecht
You know that annoying bug? The one that doesn’t show up locally? And no matter how many times you try to recreate the environment you can’t reproduce it? You’ve gone through the breadcrumbs, read through the stack trace, and are now playing detective to piece together support tickets to make sure it’s real.
Join Sentry developer Ryan Albrecht in this talk to learn how developers can use Session Replay - a tool that provides video-like reproductions of user interactions - to identify, reproduce, and resolve errors and performance issues faster (without rolling your head on your keyboard).