Video Summary and Transcription
Sentry is code monitoring for developers, specifically designed for the application layer. It surfaces an error's details, frequency, release, affected users, and stack trace. Source maps can be uploaded to see the original source code, and suspect commits can be identified. Performance monitoring helps identify slowdowns and determine their cause. Automating alerts and investigating errors gives instant context and lets you trace errors across different projects.
1. Introduction to Sentry and Error Monitoring
I'm Chris, a solutions engineer at Sentry. Today, we'll discuss monitoring errors and slowdowns in JavaScript. Sentry is code monitoring for developers, specifically designed for the application layer. We'll cover error monitoring and performance monitoring. On our demo site, we encounter a slow products endpoint and an error. With Sentry, we can identify the error's details, including its frequency, release, user information, and stack trace. Breadcrumbs provide insights into the user journey leading to the error.
Hey everyone, my name's Chris. I'm a solutions engineer at Sentry and I'm here to talk to you today about monitoring errors and slowdowns in JavaScript. Sentry is code monitoring for developers. We tell you when your code's broken and we tell you when your code is slow. We're not infrastructure monitoring, we're not analytics monitoring, specifically we live at the application layer, we're designed for developers.
Today we're going to talk about two main areas: error monitoring and performance monitoring. We'll head to docs.sentry.io and take a look at the Node documentation. It's a simple npm install, and you initialize the SDK with Sentry.init and a few configuration options. Very easy to get started.
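A minimal sketch of that setup, assuming the @sentry/node package; the DSN, release, and environment values below are placeholders, not the demo app's actual configuration:

```js
// Install the SDK first: npm install @sentry/node
const Sentry = require("@sentry/node");

// Initialize once, as early as possible in the app's startup.
Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  release: "my-app@1.0.0",   // lets Sentry tie events to a specific release
  environment: "production",
  tracesSampleRate: 1.0,     // also enables performance monitoring
});
```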
We land here on our demo site where we're going to purchase some eCommerce products. We notice that the products endpoint is taking a while to load, so we'll come back to that in a little bit. We'll go ahead and add some products to the cart, check out, wait to get our products, and we see an error. So without something like Sentry we're not going to know something was wrong, both in terms of that products endpoint being slow and, more importantly, in terms of having this error.
So what we'll see here is, at the current time, we just get an error and it shows up in Slack. We can see that it's a 500, we can see it's happening in production, and we can see the release it went out in; maybe this was something that got released earlier today. But we know it's happening in production, it's important, so we're going to click through. Here we're on the issue page. This is the who, what, when, where, why of the error. We can see what the error is: it's a 500, and it's happened 45 times to 22 unique users. We can see how many times it's happened in the last 24 hours and in the last 30 days. It was originally seen five months ago, and it was most recently seen just now, understandably, because we triggered it a few seconds ago on this more recent release. We can see that the user was on Chrome and Mac OS X, and we get a little information about their email address and any other custom tags that we've set here.

We've got the stack trace that tells us the error type and message. You can see that if the response wasn't saved, we manually call captureException from the Sentry SDK. Sentry attaches to the global error handler, so it will automatically capture any uncaught, unhandled errors as well; but if you're handling errors yourself, like in this case, you can also capture them manually. Breadcrumbs give us a sense of the user journey leading up to the error: what was the user doing right before the error occurred? You can see that just before, there was a POST request to the checkout endpoint, some print statements, a click. The other thing that's really important to mention is that we're seeing a human-readable version of this.
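As a rough illustration of the handled-error case described above, you can record a breadcrumb and manually capture an exception like this. The checkout function and the saveOrder helper are made up for illustration, not the demo app's code:

```js
const Sentry = require("@sentry/node");

async function checkout(cart) {
  // Breadcrumbs show up on the issue page as the trail of events before an error.
  Sentry.addBreadcrumb({
    category: "cart",
    message: "User clicked checkout",
    level: "info",
  });

  try {
    const response = await saveOrder(cart); // hypothetical helper
    return response;
  } catch (err) {
    // Handled error: report it to Sentry explicitly.
    // Uncaught errors would also be picked up by the SDK's global error handler.
    Sentry.captureException(err);
    throw err;
  }
}
```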
2. Analyzing Errors and Performance
If you have a raw minified stack trace, you can upload source maps to see the original source code. Sentry integrates with source code management tools to identify suspect commits. Child error events can be traced across different projects. Once the root cause is identified, attention can turn to performance. The user misery score reflects slow performance, and transactions can be analyzed to identify slowdowns. Context and breadcrumbs help explain user actions, and tags provide additional information. Sentry's tracing feature helps pinpoint issues in the backend, such as inefficient database queries. By consolidating all relevant context, Sentry simplifies solving errors and performance problems.
So if you have a raw minified stack trace, because your bundled JavaScript code has been minified, you're going to see something not useful like this. When you upload source maps, you're going to be able to see the original, human-readable source code.
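The video doesn't show how the maps get uploaded; one common approach is Sentry's webpack plugin, sketched below with placeholder org, project, and paths (sentry-cli is another option). The release must match the one set in Sentry.init so the maps are applied to the right events:

```js
// webpack.config.js — requires: npm install --save-dev @sentry/webpack-plugin
const SentryWebpackPlugin = require("@sentry/webpack-plugin");

module.exports = {
  devtool: "source-map", // make sure the build actually emits .map files
  plugins: [
    new SentryWebpackPlugin({
      authToken: process.env.SENTRY_AUTH_TOKEN,
      org: "my-org",           // placeholder
      project: "my-frontend",  // placeholder
      release: "my-app@1.0.0", // must match the release passed to Sentry.init
      include: "./dist",       // directory with the built bundles and source maps
    }),
  ],
};
```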
We also have integrations with different source code management tools, so for instance you can see these suspect commits, and this gives us a sense of who might have committed code that caused this problem. We can also see that there's a child error event, and we can trace that across. There's another project, in this case our Node Express app, and we can see that there's a different error message. Not enough inventory for product.
It's happened 87,000 times to 85,000 unique users, so clearly this is not a new issue. It's happened a lot more frequently in the last 24 hours and 30 days than the prior issue, and we can see when it was first seen and most recently seen. All the same deal: the who, what, where, why, when. So there's not enough inventory for the product, and we throw a new error. At this point, we've traced it down to the root cause. We can consider this solved and turn our attention to performance.
So if you recall, we're back here, we clicked on the products endpoint, and we saw that there was some slow performance. Now we can also see that reflected within Sentry itself. You can see a number of Google's Web Vitals, the standard SEO-related measurements like how long it takes for the biggest thing to show up on the page and for the first thing to show up on the page, and we can also head over to look specifically at our transactions. You can see there's a user misery score that's quite high here for the products endpoint.
So if we didn't already know what we were looking for, we'd be able to see this. This is also configurable, for instance if you have endpoints that you know are going to take a long time. But basically, it's a way to quickly see which transactions are taking a lot longer than we expected. I'll click through here and take a look at some of our recent transactions. We can likewise see a lot of the different resources and assets the browser is loading, see when the React components are mounted and updated, and expand this to see that, in a backend project, there was an HTTP request that took about 7.2 seconds out of the total 7.8.
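For context, here is a sketch of the kind of frontend configuration that produces those page-load transactions and React component spans, assuming the @sentry/react and @sentry/tracing packages; the DSN is a placeholder and the exact demo setup isn't shown in the video:

```js
import * as Sentry from "@sentry/react";
import { Integrations } from "@sentry/tracing";

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  integrations: [new Integrations.BrowserTracing()],     // page loads, navigations, resource spans
  tracesSampleRate: 1.0, // sample every transaction for a demo; lower this for real traffic
});

// Optionally wrap a component so its mount/update timings show up as spans:
// export default Sentry.withProfiler(App);
```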
So in this case there's a slowdown, and it looks like most of these things are not contributing to it; that backend request is the culprit. Here on this page we also get context: we have similar breadcrumbs showing what the user was doing in the time leading up to this point, plus a bunch of different tags we can access. We can also use Sentry's tracing feature to head over to our Node project on the backend and realize, okay, this is where things are actually going wrong. It looks like there are some database queries happening here, and in this case it looks like we're doing some of them sequentially: we're fetching individual product IDs rather than all the products at the same time, or a set of product IDs. So this might be an area where there's room for improvement. We traced it from the front end over to the back end without having to dig through logs for both applications. Sentry's goal is to consolidate all the context that you need to solve errors and performance problems and put it in the same place.
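As a hypothetical sketch of the kind of fix that sequential product-ID fetching suggests (getProduct and getProductsByIds are made-up helpers, not the demo app's code):

```js
// Before: one database round trip per product ID, awaited sequentially.
async function loadProductsSequential(productIds) {
  const products = [];
  for (const id of productIds) {
    products.push(await getProduct(id)); // each await waits for the previous query
  }
  return products;
}

// After: one batched query, e.g. SELECT ... WHERE id IN (...) behind the helper,
// or at minimum Promise.all(productIds.map(getProduct)) to run them in parallel.
async function loadProductsBatched(productIds) {
  return getProductsByIds(productIds);
}
```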
3. Automating Alerts and Investigating Errors
Automate tasks and set up alerts to be notified in Slack. Create alerts directly from the products page to monitor transaction performance. Email the team and send Slack notifications based on critical or warning status. Surface errors and gain instant context. Investigate error details, occurrence frequency, and impact. Trace errors across different projects. Performance monitoring helps identify slowdowns and determine the cause.
Automate a lot of the things that would otherwise have taken your time. At this point, we could go back and set up an alert so that we get notified in Slack in much the same way. If we go back to the products page, we can create an alert directly from here, targeting the /products transaction. Let's say it's considered critical if it's above eight seconds for this transaction, maybe a warning if it's above six seconds, and considered resolved if it's below three. And you can see that reflected here.
So what we can then do is add an action: maybe on critical or warning status we email the team, and on critical status we also go ahead and send something over via Slack. So now we're helping ourselves surface any problems with this endpoint, this transaction, in the future. We've surfaced an error. In this case it showed up in Slack, and we had instant context into whether it was important and whether we needed to do something with it. We clicked through and got the who, what, where, when, and why of the error. We looked at how many times it had occurred and how many people it was affecting, and when we decided it was worth our time to solve, we dug in, traced it across to a different project that also had Sentry implemented, and found the root cause. That tracing feature is enabled as long as you have two projects, two applications, that both have Sentry initialized and configured correctly, and that's a very easy thing to do.
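A sketch of what "initialized and configured correctly" might look like on the Node Express side, assuming the @sentry/node plus @sentry/tracing setup; the DSN and port are placeholders:

```js
const express = require("express");
const Sentry = require("@sentry/node");
const Tracing = require("@sentry/tracing");

const app = express();

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  integrations: [
    new Sentry.Integrations.Http({ tracing: true }), // trace outgoing HTTP calls
    new Tracing.Integrations.Express({ app }),       // trace incoming Express requests
  ],
  tracesSampleRate: 1.0,
});

// The request handler must be the first middleware; the tracing handler creates
// a transaction per request so frontend and backend spans link into one trace.
app.use(Sentry.Handlers.requestHandler());
app.use(Sentry.Handlers.tracingHandler());

// ...application routes go here...

// The error handler goes after the routes so thrown errors are reported to Sentry.
app.use(Sentry.Handlers.errorHandler());

app.listen(3000);
```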
Performance monitoring, same deal. We surfaced a slowdown. We figured out what went wrong.