JavaScript Source Maps, Can We Do Better?


The current revision of the JavaScript Source Maps specification is over 12 years old. Throughout this time, the entire ecosystem has evolved immensely, but for some reason, we have done nothing to improve the debugging experience and are still stuck at version 3 of the specification. Can we do better?

This talk was presented at JSNation 2023.

FAQ

Kamil Ogórek is a senior software engineer at Sentry and a core team member of tRPC.

Source maps let you map tokens in transpiled, bundled, or minified code back to the original source, making error messages more understandable.

Source maps are important because they help in debugging by allowing you to see more usable errors instead of cryptic messages, which can significantly ease the development process.

Source maps can be attached to minified files either through a special pragma at the end of the file, known as sourceMappingURL, or through the SourceMap header on the HTTP response that serves the file.

Ad hoc debugging allows you to see and debug errors as they happen directly within your DevTools, using the exact code loaded inside the JavaScript engine.

Post hoc debugging is a method used by Sentry where errors are collected and analyzed after they occur to determine what code was used when the error was produced.

A release in Sentry is a unique identifier for your build that helps in identifying and managing different versions of files and source maps.

A debug ID is a unique identifier based on the hash of the source map, used to ensure that the minified file and the source map are correctly matched.

Challenges with source maps include lack of identity, path resolution issues, column position discrepancies, and the need for conformance tests to ensure correct implementation.

You can join the TC39 outreach group focused on tooling and source maps, contribute to discussions on GitHub, and provide feedback to help evolve the specification.

Kamil Ogórek
27 min
05 Jun, 2023

Video Summary and Transcription
Source maps allow understanding of transpiled, bundled, or minified code. Debugging with post hoc and debug identifiers helps identify files. Issues with source maps include hash collisions and missing function names. Various techniques can be used to determine the function that caused an error. Source maps can store additional information and improvements can be made to path resolution and column positions. Code points and token positions can differ across browsers. Detecting source maps can be challenging without a standardized JSON schema.

1. Introduction to Source Maps

Short description:

Source maps let you understand how transpiled, bundled, or minified source code was produced. Once we have the source maps, we need to marry them with the minified file, either through a comment at the end of the file or a response header. Ad hoc debugging allows you to see the error as it happens and use the exact code that triggered the error.

Hey, everyone. My name is Kamil Ogórek, and I'm a senior software engineer at Sentry, where I currently work on the processing team; one of the things we work on is actually processing source maps. I'm also a core team member of tRPC, and as you can probably tell, I really, really do love source maps, which brings me to the alternative title of this talk: why source maps are so hard to get right.

We won't go deep into the details of how source maps work; there are plenty of other resources you can use for that. However, we need to understand a few very basic ideas. Source maps let you understand how transpiled, bundled, or minified source code was produced. They map tokens from minified code back to the original code, which then lets you see more usable errors, rather than something like "x is not a function" or something of that sort. If you want to understand more in-depth how they are actually encoded, there is a great blog post from a friend of mine, Arpad, on his blog. I really encourage you to read through it.
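To make that token mapping a little more concrete: the mappings field of a source map is a list of Base64 VLQ-encoded segments, where each number is a delta for column, source file, original line, original column, and (optionally) name. The sketch below decodes a single segment; it is a minimal illustration, not a full source map parser.

```javascript
// Base64 alphabet used by the VLQ encoding in source maps.
const B64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

// Decode one `mappings` segment (e.g. "AACA") into its list of numbers.
function decodeVlqSegment(segment) {
  const result = [];
  let value = 0;
  let shift = 0;
  for (const char of segment) {
    const digit = B64.indexOf(char);
    value += (digit & 31) << shift; // lower 5 bits carry data
    if (digit & 32) {
      shift += 5; // continuation bit set: more digits follow
    } else {
      const negative = value & 1; // least significant bit is the sign
      value >>= 1;
      result.push(negative ? -value : value);
      value = 0;
      shift = 0;
    }
  }
  return result;
}
```

For example, the very common segment "AACA" decodes to [0, 0, 1, 0]: same generated column delta, same source, original line advanced by one, same original column.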

Once we have the source maps, we need to somehow marry them with the minified file. One way to do this is a comment at the very end of the file with a special pragma, sourceMappingURL; the other is a SourceMap header attached to the HTTP response that serves the file. Now that you have both of those things, instead of a very cryptic error message you see something way more usable, like fetchUserData instead of a function x. You can see this right inside your DevTools, which is called ad hoc debugging: you see the error as it happens, and you are able to use the exact code that is loaded inside the JavaScript engine, so you can be sure it's the very same code that triggered the error.
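As a rough sketch of the pragma mechanism, this is how a tool might discover the map from the tail comment of a minified file; the filenames are just examples, and real tools additionally honor the SourceMap response header when the comment is absent.

```javascript
// Look for the sourceMappingURL pragma near the end of a minified file.
// Both `//#` and the legacy `//@` forms are accepted here.
function findSourceMappingUrl(source) {
  const match = source.match(/\/\/[#@] ?sourceMappingURL=(\S+)\s*$/);
  return match ? match[1] : null;
}

findSourceMappingUrl("var a=1;\n//# sourceMappingURL=bundled.min.js.map");
// → "bundled.min.js.map"
```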

2. Debugging with Post Hoc and Debug Identifiers

Short description:

There is another way to debug called post hoc debugging. The main problem is the lack of identity, as we cannot determine which file was used. Using a release as a unique identifier helps, but it's not enough. We can use debug identifiers, which are unique hashes based on the source maps, to ensure the files are the same.

However, there is another way to debug, called post hoc debugging, which is what Sentry actually does: an error happens, it's sent to us, and then we need to figure out what code was used when the error was produced, after the fact.

This brings me to the problems with this right now. The first and the biggest one is lack of identity, which means that we are not able to tell what file was actually used, even if you upload it to us. First, I'll very briefly describe how it works. You produce some files: you have minified files and the corresponding maps. You push them through one of the tools that we provide, the CLI binary or the plugins for your bundlers; if you want, you can use just an API call, and you store them in Sentry. You need to use a release, which is a unique identifier for your build. We need this because it's the only way that we can have some sort of identity for the file, because other than the filename, there is nothing else that really makes it special.

Over time, you can have ten bundled.min.js files uploaded for the same release, and we are not able to tell which one is which, basically. That's why we also have something called a dist, which you can think of as a directory or a bucket for files, so we can have the same filenames for multiple environments, like production, development, or staging. However, it's still not enough, because you can have the same filename, something like bundled.min.js, which is very, very common, but produced at completely different times: today, a month later, six months later, and so on, and those names can still be the same. You can use hashed names, but this is not always possible, because some people prefer to take care of caching using HTTP headers instead, and hashed names are sometimes annoying to deal with.

So let's say that we have all the files. Now the error happens. The first frame points to https://trpc.io/assets/bundled.min.js. We don't really care about the host name; we can skip it for the processing part, which leaves us with assets/bundled.min.js. The problematic part is that it has been served from somewhere: bundled.min.js lives inside assets. However, what happens if the structure of your project, when you uploaded it, was different? If you just included the dist directory in the upload, your file will live under dist/frontend/bundled.min.js. That is not the same path, and they will not match, which means that we cannot tell that the file you just served, the one that actually caused the error, is the exact same file that was uploaded. It would be just a guess.
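The mismatch above can be sketched with a naive comparison: stripping the origin from the stack-frame URL still leaves a served path that does not equal the uploaded path. The URLs and paths here are the hypothetical ones from the example.

```javascript
// Strip the origin from a stack-frame URL, leaving only the served path.
function servedPath(frameUrl) {
  return new URL(frameUrl).pathname.replace(/^\//, "");
}

const served = servedPath("https://trpc.io/assets/bundled.min.js");
const uploaded = "dist/frontend/bundled.min.js";

console.log(served); // "assets/bundled.min.js"
console.log(served === uploaded); // false — same artifact, no way to prove it
```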

So what's the way around this? How can we make sure that those files are the same files? We can use something called debug identifiers, which are very, very common in the native world: you have something called debug files, and in our case we called it a debug ID, because it's very easy to remember. It works in a very similar way to sourceMappingURL; however, instead of using paths, you hash the whole produced source map, use a unique identifier based on that hash (a UUID in this case), and stick it inside the minified file and inside the source map itself. You have to hash the source map instead of just the source itself, because hashing only the original source can produce the same hash for different build outputs.
