Porting Turborepo to Rust

Everybody loves talking about Rewriting in Rust, but rewrites are hard. It’s far too easy to fall into second-system effect, ship late, and end up losing all your users. How can you move to Rust while still shipping features and keeping your users happy? Easy: instead of rewriting, port! In this talk, we’ll go over how we ported Turborepo from Go to Rust using three different strategies: a Rust shim that wrapped the existing Go code; a Rust-Go-Rust sandwich to incrementally port dependencies; and finally a completely parallel Rust implementation. We’ll talk about why we chose these strategies, how we went about implementing them, and how we shipped them to users.

This talk was presented at JSNation US 2024. Check out the latest edition of this JavaScript conference.

FAQ

What is Turborepo?
Turborepo is a build system for JavaScript that coordinates the different packages within a monorepo. It builds a package dependency graph and orchestrates tasks in parallel while caching their outputs, allowing efficient reruns by reading from the cache.

Why was Turborepo ported to Rust?
Turborepo was ported to Rust for alignment and ecosystem reasons. Rust handles details like file permissions correctly across operating systems, and its ecosystem includes excellent JavaScript tooling projects that the team wanted to leverage.

What challenges came up during the port?
Challenges included dealing with Alpine Linux's lack of glibc, cross-compilation issues between Rust and Go, and maintaining feature parity during the incremental porting process.

What is the Rust shim?
The Rust shim is a thin layer of Rust wrapping the Go code. It lets Turborepo behave as a regular global binary that finds and executes the local version installed in a monorepo, and it was the bridge that made incremental porting between Rust and Go possible.

How did the team handle cross-compilation?
The team used Zig, specifically its C compiler, ZigCC, to simplify cross-compilation. This approach overcame the problems that arise when Rust and Go have to cross-compile and link together.

What is the Go sandwich strategy?
The Go sandwich strategy involves porting individual Go dependencies to Rust and linking them back into the Go binary, using Protobuf to communicate across the boundary. This allows a step-by-step porting process while keeping the system fully functional.

Did the port affect performance?
While the Rust-Go sandwich and shim added some performance overhead, the full Rust implementation showed a slight performance improvement. The gain was modest because Turborepo's time is dominated by syscall overhead and its caching strategy.

Which tools did the team adopt after the port?
Post-porting, tools like Biome for parse errors, SWC and Oxc for Turbo Trace, and NAPI for Rust-JavaScript interoperability were used to enhance Turborepo's functionality and integrate it better with Vercel.

Why port instead of rewriting?
Porting is favored over rewriting to avoid losing users and to keep shipping features and bug fixes. It allows incremental change without halting updates or getting stuck in a never-ending race between the old and new codebases.

Nicholas Yang
25 min
18 Nov, 2024

Video Summary and Transcription
Today's Talk is about porting to Rust, specifically the experience of porting Turbo Repo, a build system for JavaScript, from Go to Rust. The speaker discusses the challenges faced during the porting process, such as file permission code discrepancies and issues with Alpine Linux. They explain the approach taken, including using a Rust shim and porting individual Go dependencies to Rust. The Talk also covers the limitations and challenges encountered during the porting process, as well as the benefits of leveraging the Rust ecosystem. The speaker discusses the considerations of rewriting versus porting and the importance of institutional knowledge. They also touch on the performance impact of the porting process and the improvements achieved by moving to a fully Rust implementation.

1. Introduction to Porting to Rust

Short description:

Today, I'm here to talk about how to port to Rust. Turborepo is a build system for JavaScript that coordinates different packages within a monorepo, builds a package dependency graph, and orchestrates tasks in parallel. It's about 70,000 lines of Go, compiled for x86 and ARM on Mac, Windows, and Linux. We decided to port for alignment reasons.

Yeah, if you want any chili oil info, I've literally written three blog posts about it. You can look them up. Let's see. Yep, it's great. So, who here likes to write Rust? Yay, some of you. And because of that, well, there's been this movement of rewriting in Rust, and it's great. People have rewritten everything from compilers to browsers in Rust. But rewrites are hard. They come with a lot of challenges. You have to learn this new technology. You have breaking changes, bugs, and you can end up losing users in the process.

And so today, I'm not here to talk about rewriting in Rust. Instead, I'm here to talk about how to port to Rust. And this may seem like a minor distinction, but it's really about moving pieces incrementally without breaking changes and keeping the same fundamental behavior and architecture. You're not trying to reinvent the wheel here. And so, here's how we did it with Turborepo. So, first, what is Turborepo? Turborepo is a build system for JavaScript. It basically lets you coordinate the different packages within your monorepo, and it builds a package dependency graph, and then from there, it orchestrates the different tasks that need to be run. And it does this in parallel, and then it caches the outputs. So, if you want to run the same task again, you'll just read from the cache instead of running it.

So, in a traditional setup, it may look like this, where you're just running everything sequentially, but with Turbo Repo, it's in parallel and, of course, cached. So, underneath the hood, what this is basically doing is it's loading a bunch of config. It's then spinning up a package dependency graph. So, again, it's which packages depend on which. You could have like a UI library that's used by both a web app and a docs app, and then from there, it spins up the task dependencies. So, maybe you have a lint task that depends on a build task, and it then executes these in parallel, caches the outputs, and prints a summary to the user. And it's about 70,000 lines of Go, and it's compiled for six different targets. That's x86 and ARM across Mac, Windows, Linux. So, why did we decide to port? Well, the first reason is really around alignment. You see, Go is really great if you're writing a web server running on Linux.

2. Challenges and Starting with Rust Shim

Short description:

Go lies about file permission codes on Windows, but Rust gets it right. The Rust ecosystem has excellent projects like Biome, SWC, NAPI, and Oxc that we can leverage to build better tooling. We started incrementally, adding Rust functionality to Turborepo through the Rust shim.

It assumes a lot of Unix-isms. And so the classic example of this is file permission codes. Go will let you set a Unix-style file permission code. But the only issue is, on Windows, that concept doesn't exist. And so when you set this code, Go will basically just lie to you. It'll be like, yeah, sure, I set it, and it won't do anything. And that shows up in every little detail.

And so, with Rust, it gets these details right. It makes sure that, if you want to set a Unix-style file permission code, you have to be running on Unix. Otherwise, there's a separate Windows API. And the second reason is really around ecosystem. You see, there's a lot of excellent projects out there doing great work supporting JavaScript in Rust. This could be Biome, which is an excellent linter and formatter, SWC, NAPI, Oxc. And we want to leverage these platforms to build better tooling.
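
To make the file-permission point concrete, here's a minimal sketch (not Turborepo's actual code) of how Rust forces you to acknowledge the platform: the Unix mode API only exists behind a Unix-only extension trait, so the Windows path has to say explicitly what it actually does.

```rust
use std::fs;
use std::io;
use std::path::Path;

// Unix targets: mode bits come from an OS-specific extension trait,
// so this function only compiles on Unix.
#[cfg(unix)]
fn make_executable(path: &Path) -> io::Result<()> {
    use std::os::unix::fs::PermissionsExt;
    fs::set_permissions(path, fs::Permissions::from_mode(0o755))
}

// Windows targets: there is no Unix mode to set, so the code has to be
// explicit about what it does (here, clearing the read-only flag)
// instead of silently pretending a mode was applied.
#[cfg(windows)]
fn make_executable(path: &Path) -> io::Result<()> {
    let mut perms = fs::metadata(path)?.permissions();
    perms.set_readonly(false);
    fs::set_permissions(path, perms)
}
```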

So how'd we start? You could imagine a world where you just start from the ground up. You build a command line, you build a package graph, you build a task graph, and yeah, you're well on your way. But there's a lot of features. There's a lot of things to build. And it's important to remember, this is a mature product that is used by a lot of people. You can't just remove these features. And so we didn't take that approach. Instead, we decided to keep shipping features and really do things incrementally. Our goal was to actually have the Rust code add functionality to TurboRepo.

And so we decided to start with a very basic feature. And it's facilitated by what I call the Rust shim. The Rust shim allows us to implement global turbo. Global turbo allows you to use Turborepo as a regular old binary. But inside your monorepo, it will find the local version of Turborepo that's installed and execute that. You can kind of think of it as similar to Corepack, where Corepack lets you run a global npm binary, but it will use the specified version in your package.json. And the way that we implemented this was with what we called the Rust shim, which is essentially a thin layer of Rust wrapping the Go code.
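
As a rough sketch of the global turbo idea (the paths and names here are illustrative, not Turborepo's exact layout), the shim walks up from the current directory looking for a locally installed binary and defers to it, falling back to its own bundled implementation otherwise:

```rust
use std::env;
use std::path::PathBuf;
use std::process::Command;

/// Walk up from the current directory looking for a locally installed turbo
/// binary (the node_modules/.bin path is an assumption for illustration).
fn find_local_turbo() -> Option<PathBuf> {
    let mut dir = env::current_dir().ok()?;
    loop {
        let candidate = dir.join("node_modules").join(".bin").join("turbo");
        if candidate.exists() {
            return Some(candidate);
        }
        if !dir.pop() {
            return None;
        }
    }
}

fn main() {
    let args: Vec<String> = env::args().skip(1).collect();
    match find_local_turbo() {
        // A local version exists: defer to it so the repo controls the version,
        // much like Corepack defers to the version pinned in package.json.
        Some(local) => {
            let status = Command::new(local)
                .args(&args)
                .status()
                .expect("failed to run local turbo");
            std::process::exit(status.code().unwrap_or(1));
        }
        // No local install: fall through to this binary's own implementation
        // (in the real shim, the wrapped Go code).
        None => run_bundled_turbo(args),
    }
}

fn run_bundled_turbo(_args: Vec<String>) {
    // Placeholder for handing off to the wrapped implementation.
}
```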

3. Porting Issues with Alpine Linux

Short description:

Go code compiled as a shared library linked via C and executed by a Rust shim. Command line parsing ported using the clap library. Serialized args to JSON and sent them to Go. Issues encountered with Alpine Linux due to missing glibc. Attempted to use musl as a substitute, but encountered a segfault. Found a nine-year-old GitHub issue explaining the incompatibility of Go as a shared library with musl. Had to reconsider the approach.

The Go code will be compiled as a shared library linked with C and executed by a little Rust shim. And after this, it was pretty simple to port the command line parsing, since we already had to do a little bit for global turbo. And so we parsed the args using a great library called clap.

From there, we serialized those args to JSON and sent it to the Go. And the reason we used JSON here is that we didn't really want to write a bunch of C types encoding the different args. C is infamously not a very portable language. And so it was much easier to just send a big string.
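
Here's a minimal sketch of that shape, assuming clap's derive API and serde_json; the flags shown are hypothetical, not Turborepo's real CLI surface:

```rust
use clap::Parser;
use serde::Serialize;

/// Hypothetical subset of a turbo-like CLI, parsed in Rust with clap.
#[derive(Parser, Serialize)]
struct Args {
    /// Tasks to run, e.g. `turbo run build lint`
    tasks: Vec<String>,
    /// Maximum task concurrency
    #[arg(long)]
    concurrency: Option<usize>,
    /// Force execution, ignoring the cache
    #[arg(long)]
    force: bool,
}

fn main() {
    let args = Args::parse();
    // Serialize the parsed args to one JSON string and hand that across the
    // boundary, instead of modeling every flag as a C type.
    let payload = serde_json::to_string(&args).expect("args serialize to JSON");
    pass_to_go(&payload);
}

fn pass_to_go(_json: &str) {
    // In the real shim this crossed into the Go side.
}
```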

So with that, we decided to ship it. Except nothing goes to plan. So here's some issues we ran into. The main one was with Alpine Linux. If you guys are not familiar, Alpine is a distro of Linux that is often used in cloud computing. And one thing that makes it great is that it's super lightweight.

And one thing that makes it lightweight is that it doesn't come with glibc. glibc is an implementation of the C standard library that a lot of binaries dynamically link. Basically, the binary will call out to a separate version of glibc. However, since glibc is not on Alpine, and in fact, since we want to support really old versions of Alpine, we couldn't use their version of glibc that you can install. And so we decided instead to use musl, which is a variant of the C standard library you can statically link. You could basically just include it in the binary.

However, when this happened, we segfaulted almost immediately. And in fact, this was a pretty big mystery because when we investigated it, the segfault was literally coming from the middle of the Go runtime and the stack was completely corrupted. And it was basically just like this voodoo error. There was no explanation for what was happening. And eventually, we found this nine-year-old GitHub issue that basically explains that you can't use Go as a shared library with musl. It's still open, by the way. You can go look at it. Please upvote my comment. And yeah, so this was a showstopper. We had to support Alpine, and we couldn't take this approach. So we had to go back to the drawing board a little.

4. Porting the Go Sandwich

Short description:

Two binaries: Rust calls Go. Go sandwich: Port individual Go dependencies to Rust and link them into Go binary. Use protobuf for communication. Regimented process allows testing between Rust and Go versions.

And what we came up with was two binaries. Basically the Rust binary calls the Go binary and passes the args. And that worked pretty well. So with that, we were able to ship the Rust shim in Turbo 1.7.
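
The two-binary setup is simple enough to sketch in a few lines of Rust; the sibling binary name go-turbo is an assumption for illustration:

```rust
use std::env;
use std::process::Command;

// Minimal sketch of the two-binary approach: the Rust entry point re-executes
// a sibling Go binary with the same arguments instead of linking against it.
fn main() -> std::io::Result<()> {
    let args: Vec<String> = env::args().skip(1).collect();
    let go_binary = env::current_exe()?
        .parent()
        .map(|dir| dir.join("go-turbo"))
        .expect("executable has a parent directory");

    // Spawn the Go binary, wait for it, and mirror its exit code.
    let status = Command::new(go_binary).args(&args).status()?;
    std::process::exit(status.code().unwrap_or(1));
}
```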

So now let's talk about the Go sandwich. So at this point, we managed to port a bunch of small stuff like log in, log out. But we couldn't figure out how to port the main execution pipeline. Because almost immediately, we have to spin up the package dependency graph. And we didn't really want to send that via JSON because graphs are not the most serializable things.

And so what we decided on was what we called the Rust-Go-Rust sandwich, or the Go sandwich for short. Basically, we would take individual Go dependencies, port them to Rust, and link them into the Go binary. While we couldn't link Rust to Go, we could link Go to Rust. And so this worked pretty well. And we'd use protobuf to communicate between these dependencies to again, avoid writing too much C.
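
A sketch of what one boundary in such a sandwich can look like: the Rust side exposes a C ABI function that Go calls through cgo, with protobuf bytes as the payload in each direction. The function name and buffer protocol here are illustrative, not Turborepo's actual interface:

```rust
use std::slice;

// Request and response are raw byte buffers carrying protobuf messages;
// decoding/encoding stands in for prost- or protobuf-generated code.
#[no_mangle]
pub extern "C" fn transitive_closure(
    request_ptr: *const u8,
    request_len: usize,
    response_len: *mut usize,
) -> *mut u8 {
    // Reconstruct the request bytes that Go handed us.
    let request = unsafe { slice::from_raw_parts(request_ptr, request_len) };

    // Do the ported work and produce an encoded reply.
    let response: Vec<u8> = handle_request(request);

    // Hand ownership of the buffer back to Go; the Go side is expected to
    // call a matching free function when it is done with it.
    unsafe { *response_len = response.len() };
    let mut boxed = response.into_boxed_slice();
    let ptr = boxed.as_mut_ptr();
    std::mem::forget(boxed);
    ptr
}

fn handle_request(_request: &[u8]) -> Vec<u8> {
    Vec::new() // placeholder for the real ported logic
}
```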

And this also made for a really nice regimented process. We could start by having the Go code. We could write the Rust version. We could test between the two. In fact, we could literally test by just calling the Rust version, and then calling the Go version, and comparing the outputs. We can merge the code, but flag the Rust version off. And then we could flag on the Rust version. And at any point, if we had any bugs, we could just flag back, and revert back to the Go version.
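
The flag-based rollout is easy to picture as code; the names below (use_rust_codepath, hash_go, hash_rust) are hypothetical stand-ins for a ported piece and its Go original:

```rust
// Dispatch between the existing Go path and the ported Rust path behind a flag,
// so a bug report can be answered by flipping the flag back.
fn hash_lockfile(input: &[u8], use_rust_codepath: bool) -> Vec<u8> {
    if use_rust_codepath {
        hash_rust(input)
    } else {
        hash_go(input)
    }
}

// Placeholder for the newly ported Rust implementation.
fn hash_rust(input: &[u8]) -> Vec<u8> {
    input.to_vec()
}

// Placeholder for a call into the existing Go implementation.
fn hash_go(input: &[u8]) -> Vec<u8> {
    input.to_vec()
}

#[cfg(test)]
mod tests {
    use super::*;

    // The comparison step: run both implementations on the same input and
    // require that their outputs agree before flipping the flag on.
    #[test]
    fn rust_and_go_agree() {
        let input = b"fixture lockfile contents";
        assert_eq!(hash_rust(input), hash_go(input));
    }
}
```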

5. Porting Challenges and Limitations

Short description:

Ported pieces: lock file analysis, file hashing. Cross compilation challenges with Rust and Go due to C dependencies. Orchestrate cross compilation with ZigCC. Shipped Go sandwich in Turbo 1.8.6. Limitations with handling async calls using the sandwich.

And so this worked great. We were able to port a bunch of pieces, such as our lock file analysis and file hashing, which is important for cache artifacts and so on. And so let's ship it. Except, more issues.

So let's recall. We're building for six different architectures. And because of that, we have to do what's called a cross compilation. Basically, we have to build for a different architecture than the one that we're compiling on. So maybe you're targeting Windows x86 while running on a Linux ARM. And normally, Rust and Go are both pretty good at cross compilation. But together, they're terrible, because Rust has all these C dependencies that it uses.

And these C dependencies, normally, Rust is smart enough to cross compile. But because Rust isn't building a binary, Go is. Go has to do that final cross compilation linking. And Go doesn't really understand what's going on either. And so you have to orchestrate this yourself, and you have to find a good C cross compiler, which is not easy. It often requires installing a bunch of different versions and finding the relevant libraries and creating sysroots. And it's just a huge, messy process. Except if you use Zig. Yep, more Hacker News bait. And Zig, specifically, comes with a great C compiler called ZigCC that lets you cross compile like a breeze. It's great.

And so with that, we shipped the Go sandwich in Turbo 1.8.6. So finally, let's talk about the run outline. At this point, we were reaching the limits of what the sandwich could handle. Specifically, we didn't really want to handle async at all, because with the sandwich, we were doing all synchronous calls. So it's literally just the Go would call the Rust, the Rust would do some work, and then it would return. But with async, the Rust would persist, and you'd have to figure out a way to poll it, and you'd have to keep a Go runtime around and a Rust runtime around. And I don't know how those would interact. And it just seemed like a headache.

6. Building All-Rust Pipeline and Hashing Strategy

Short description:

Built an all-Rust execution pipeline. Tested and integrated existing pieces. Used hashing to ensure code trust. Stable hashing between Rust and Go. Language-independent hashing algorithm. Cap'n Proto format for identical output. Running tests with the Rust code path. Creating a burn-down chart of broken tests.

So instead, we decided to build out an all-Rust execution pipeline and stub out the unimplemented parts and then implement them bit by bit. And we added a flag so that you could run this code path. And with that, we're able to...

And what's important to remember here is that we had already built a lot of these pieces. You know, you could argue that this is a rewrite, and you would be right, but we already had built these pieces, and more importantly, we had tested them. We had literally run them on production code. And it's important to remember here that there's a real danger with unused code. If we had just rewritten without the Sandwich or the shim, we would just have a bunch of code written in Rust that was not being used. And code that isn't used is fundamentally code that you can't trust. Sure, you can write a test, but a test isn't going to capture all the nitty-gritty little details that a user will. And so we wanted to figure out another way to integrate these pieces into our system and test them. The way we figured out to do this was using hashing.

So hashing is at the core of Turbo. The way it works is when you run a task, Turbo collects all the inputs for the task. This could be the files, environment variables, other tasks, and it computes a single value called the hash. This hash is then used to index into the Turborepo cache. So if there is an entry there, you know that the task has already been executed, and you can just restore from cache. We decided to keep the hashing stable between both Rust and Go. And so what this meant is we expected that Rust and Go agree on the same hash. If they didn't, then there was a problem. And so this necessitated moving both the Rust code and the Go code to a language-independent hashing algorithm. And this was tricky, but we thought it was worth it because you basically could be sure that both the task graph, the package graph, and the execution order would all be identical between Go and Rust. To do this, we used a serialization format called Cap'n Proto. Cap'n Proto is great. It's basically a format that is specified literally byte for byte, and so you know that across different languages, it'll produce the exact same output. If we were to use something like JSON, well, there's a lot of different ways you can serialize JSON. And so we're able to then use this hashing strategy to port a bunch of pieces and keep testing between the two systems.
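
As a minimal sketch of the hashing strategy (using the sha2 and hex crates as stand-ins; the real implementation serializes the inputs with Cap'n Proto rather than hand-rolling the byte layout), the key property is that every input is fed into one digest in a fixed, sorted order:

```rust
use std::collections::BTreeMap;

use sha2::{Digest, Sha256};

// Compute a single cache key from a task's inputs. BTreeMap keeps the
// iteration order sorted, so the same inputs always hash the same way.
fn task_hash(
    file_hashes: &BTreeMap<String, String>, // path -> content hash
    env_vars: &BTreeMap<String, String>,    // name -> value
    dependency_hashes: &[String],           // hashes of upstream tasks
) -> String {
    let mut hasher = Sha256::new();
    for (path, hash) in file_hashes {
        hasher.update(path.as_bytes());
        hasher.update(hash.as_bytes());
    }
    for (name, value) in env_vars {
        hasher.update(name.as_bytes());
        hasher.update(value.as_bytes());
    }
    for dep in dependency_hashes {
        hasher.update(dep.as_bytes());
    }
    // The hex digest indexes into the cache: a hit means the task already ran.
    hex::encode(hasher.finalize())
}
```

Any language that produces the same byte stream produces the same key, which is why a byte-for-byte serialization format matters more here than the choice of hash function.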

At this point, we were almost ready to release. We were running our tests with the experimental Rust code path. We started to create a burn-down chart of all the broken tests.

7. Leveraging Ecosystem and Interoperability

Short description:

Fixed bugs and shipped the all-Rust version. Leveraged the ecosystem with Biome for parse errors. Implemented Turbo Trace for dependency tracing. Used SWC and Oxc for boundaries and Turbo Query. Interoperated between Rust and JavaScript with NAPI. Learned from moving 70,000 lines in 15 months.

Once we fixed those, we were able to dogfood the Rust implementation at Vercel. Of course, we have a lot of JavaScript there, so we used Turborepo on our monorepos. And once we got to 72 hours of no bugs, we shipped an all-Rust version with a Go fallback flag in case we ran into bugs. And so with Turbo 1.11, we got to all-Rust. Woo.

The first thing is this amazing PR where I deleted like 70,000 lines of code. This is like the peak of my career, the most satisfying thing I've ever done. And after that, well, we did what we said. We started to leverage the ecosystem. So starting out, we used Biome to create really nice parse errors. You see, we have a lot of JSON config, and we were using Serde to load it, but now we use Biome because Biome comes with a really nice parser. And so it will give you detailed parse errors like, hey, you left an extra comma here. And then we can also take this location info and use it for our own errors. So now if you have an error in your configuration, it'll point to the exact location where it happened, and it will even give you a little tip on how you can fix it.

Next up, we used SWC and Oxc to implement what's called Turbo Trace. It allows us to trace through different dependencies. Basically, you can determine which file depends on another file and so on. And this is being used to implement boundaries, which is a new feature that's coming out soon. We have an RFC on it that you can comment on. Basically, it's a feature that allows you to restrict which packages can depend on which and which files can depend on which, to create rules for your monorepo. And we also use this for Turbo Query, which is a query implementation for your monorepo. Basically, you can query your repository structure using GraphQL.

Finally, we use NAPI to interoperate between Rust and JavaScript. And this is great because now Vercel can leverage Turborepo's code. We can use Turborepo to figure out which projects your changes affect. So if you only change something in your docs project, you don't rebuild the whole world. You just rebuild the docs project. And while you could do this before with Turborepo, what's nice is that because it's built into Vercel, it's done at a very low level, so you don't even spin up the VM if you don't have to. What did we learn? Overall, we did a lot well. We moved about 70,000 lines in 15 months.
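
For the NAPI piece, here's a hedged sketch of what exposing a Rust function to JavaScript with napi-rs can look like; the function name and its toy logic are illustrative, not Turborepo's actual exported API:

```rust
use napi_derive::napi;

// Exported to JavaScript as `affectedPackages` (napi-rs camel-cases names
// by default). A real implementation would consult the package graph and
// map changed files to the packages, and dependents, they affect.
#[napi]
pub fn affected_packages(changed_files: Vec<String>) -> Vec<String> {
    let mut packages: Vec<String> = changed_files
        .into_iter()
        .filter_map(|file| file.split('/').next().map(str::to_owned))
        .collect();
    packages.sort();
    packages.dedup();
    packages
}
```

On the JavaScript side this is then just a regular function call against the compiled addon (the module path is assumed), which is what lets a host platform reuse the same graph logic without shelling out to the CLI.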

8. Porting Challenges and Lessons Learned

Short description:

Shipped features and fixed bugs. Leveraged serialization with JSON, Protobuf, and Cap'n Proto. Team ramped up on Rust. Should have invested in maintenance and build process. Could have shipped nothing quickly. Rust is great for mature products but not for getting things off the ground. Porting is a great way to move without losing users.

And you may be like, 15 months, that's a lot of time. But it was 15 months of shipping features, of fixing bugs. It wasn't 15 months of radio silence. And we didn't get bogged down in this process. It would have been very easy to be like, we're gonna re-architect this and create the greatest build system ever. No, we didn't do that. We kept it simple. We kept our eyes on the prize. And we moved things quickly. And in that process, we were able to leverage serialization, both with JSON in the shim, with Protobuf in the sandwich, and then with Cap'n Proto in the run outline. And we managed to get the team ramped up on Rust in the process.

What didn't go well? Well, we could have definitely invested in some maintenance before the port. We had this, like, loose idea that you could refactor while porting. And that quickly turned out to not be the case. So we had some features that were deprecated, but not deleted, that we probably should have removed before we ported. Instead, we had to port these features because we wanted to keep feature parity. And so that was tricky. We also could have invested in the build process. Builds got really slow. The Rust-Go-Rust sandwich was very slow to build. And finally, we could have shipped nothing quickly. And what this means is we could have shipped basically a blank version of each porting strategy really early on. Instead, we invested a lot into the process and then shipped. And so this created a lot of stress where if the shipping didn't go well, well, we had a lot of code invested in this strategy. And so we would have to either back out or make it work. And this got tricky.

Finally, here's your Rust versus Go slide. Overall, I would say Rust is really great for, like, mature products where you want all these nitty gritty little details. But it's not great if you want to get things off the ground. Like, it leads to some bike shedding and some just general arguments about, like, oh, should we support non-UTF-8 paths or should we do this little edge case? So I want to leave you with this idea that porting is a great way to move things without losing users and while still being able to ship.

QnA

Porting Considerations and Interoperability

Short description:

When is it better to rewrite compared to port? Porting is painful and tricky, but rewrites are really hard. Good institutional knowledge and a long-term commitment may make rewriting worth it. Some patterns in the Go code couldn't be replicated exactly in Rust due to the borrow checker. Turborepo was amenable to being written in Rust. The glibc issue wouldn't have been a showstopper long-term. TypeScript can be ported to Rust using a similar strategy, with the help of NAPI and serialization for interoperability.

Thanks. So I want to start off with a question I had. So when you were porting instead of rewriting, there was a lot of benefits, but there were a few extra things you had to do if you hadn't rewritten. So for people considering this, when do you think that it's actually better to just rewrite compared to port? Because there could be some extra complexity with porting. That's a good question. I'm not entirely sure. I'm almost tempted to say, like, you should almost always port. Like, porting was painful, it was tricky. At many points it was the most annoying thing, but I think it paid off because rewrites are really hard. Like, so one thing I didn't talk about is, like, if we had gone the rewrite route, like, would we have still kept updating the Go code? Because if we didn't, well, our users would just be stuck and wouldn't get any new features or bug fixes. But if we did, then, well, the Rust code would have to catch up. It would be this sort of, like, never-ending race. And so I would say that, like, maybe if you have really good institutional knowledge of the product, you've worked, your team has worked on it for a really long time, and you have the, like, buy-in to spend the next, you know, year or two rewriting, then maybe it's worth it. But otherwise, I'd say you should probably port.

Okay, and let me check the questions here. So from Brian, were there some patterns in the code you couldn't replicate exactly in Rust due to the borrow checker? Um, a little bit, nothing that crazy. Like, there were definitely some cases where you'd have to think about the code really carefully, and you would also maybe realize that the Go code had some issues that had never been, like, seen before. Like, I realized while porting some of the caching code that there was, like, a slight race condition that had never been caught in the Go code that the Rust code wouldn't allow. And so you did run into those. But overall, Turborepo was a very amenable project to being written in Rust because the lifetimes were not very complicated.

Okay. And next from John, if you knew about the glibc issue beforehand, would you have still chosen to port? Yeah. I mean, you know, it was annoying, but it eventually worked out, and, you know, yeah, it wouldn't have been a showstopper long-term. Yeah. Next question. Is it possible to port TypeScript to Rust using this similar strategy? Probably. You could use something like NAPI to interoperate and then incrementally move things. I think also, like, using serialization could be an interesting technique where you basically just send almost, like, messages over the wire. I think you could do this with any language, and I think also the ecosystem for interoperability is getting better and better. Like, I think there are more tools out there that help you.

Interoperability and Performance

Short description:

Starting out, there was one blog post on interoperation between Rust and Go involving raw assembly, which we dismissed. No side-eye for the Go sandwich idea; the team was focused on getting it done. The sandwich and shim had a performance impact, but Turborepo's syscall overhead mitigated it. Moving to full Rust resulted in a slight performance improvement, though not as significant as expected.

Like, when we started out, there was literally, like, one blog post about how to interoperate Rust and Go, and it literally involved writing raw assembly, and so we quickly dismissed that one as an option.

Now from Brian, how did your team react to your changing path to delivery? Like, did you get side-eye for the Go sandwich idea? No, I think we were all pretty bought in and just had this idea of, like, we got to, like, just get it done, and, you know, I got to give credit to my manager here where he was really good at, like, just keeping focus and keeping things just doing anything we could to get it moving forward.

Brian has another question. Did you notice performance differences between the different builds? Yeah. So, of course, the sandwich and the shim did result in some performance impact. It wasn't that intense because Turborepo is mostly syscall overhead. Like, most of it is just traversing the file system and doing all that, and also, like, since we literally speed up builds by caching, if we're a little slower, it's still not that big of a deal because you got the cache. But once we moved to full Rust, we did notice a slight performance improvement. It wasn't as big as you'd expect, though. Like, Go is reasonably fast for what we're doing, and, again, syscall overhead.

Well, that's all the questions, but Nicholas will be around afterwards if you have any more questions for him. So, yeah, thanks a lot for giving the talk. Yeah.

Check out more articles and videos

We constantly curate articles and videos that might spark Git people's interest, skill us up, or help build a stellar career

Vite: Rethinking Frontend Tooling
JSNation Live 2021
31 min
Top Content
Vite is a next-generation build tool that leverages native ES modules for improved performance. It eliminates the need for bundling and improves hot module replacement. Vite provides an opinionated default configuration while still allowing advanced customization through plugins. It is framework agnostic and can be used for React and other applications. Vite is being adopted by Next.js and Create React App, and integration with Nuxt 3 offers significant speed improvements.
React Compiler - Understanding Idiomatic React (React Forget)
React Advanced 2023
33 min
Top Content
Joe Savona
Mofei Zhang
2 authors
The Talk discusses React Forget, a compiler built at Meta that aims to optimize client-side React development. It explores the use of memoization to improve performance and the vision of Forget to automatically determine dependencies at build time. Forget is named with an F-word pun and has the potential to optimize server builds and enable dead code elimination. The team plans to make Forget open-source and is focused on ensuring its quality before release.
The Future of Performance Tooling
JSNation 2022
21 min
Top Content
Today's Talk discusses the future of performance tooling, focusing on user-centric, actionable, and contextual approaches. The introduction highlights Adi Osmani's expertise in performance tools and his passion for DevTools features. The Talk explores the integration of user flows into DevTools and Lighthouse, enabling performance measurement and optimization. It also showcases the import/export feature for user flows and the collaboration potential with Lighthouse. The Talk further delves into the use of flows with other tools like web page test and Cypress, offering cross-browser testing capabilities. The actionable aspect emphasizes the importance of metrics like Interaction to Next Paint and Total Blocking Time, as well as the improvements in Lighthouse and performance debugging tools. Lastly, the Talk emphasizes the iterative nature of performance improvement and the user-centric, actionable, and contextual future of performance tooling.
Debugging JS
React Summit 2023
24 min
Top Content
Debugging JavaScript is a crucial skill that is often overlooked in the industry. It is important to understand the problem, reproduce the issue, and identify the root cause. Having a variety of debugging tools and techniques, such as console methods and graphical debuggers, is beneficial. Replay is a time-traveling debugger for JavaScript that allows users to record and inspect bugs. It works with Redux, plain React, and even minified code with the help of source maps.
A Framework for Managing Technical Debt
TechLead Conference 2023
35 min
Top Content (Premium)
Today's Talk discusses the importance of managing technical debt through refactoring practices, prioritization, and planning. Successful refactoring requires establishing guidelines, maintaining an inventory, and implementing a process. Celebrating success and ensuring resilience are key to building a strong refactoring culture. Visibility, support, and transparent communication are crucial for addressing technical debt effectively. The team's responsibilities, operating style, and availability should be transparent to product managers.
Building a Voice-Enabled AI Assistant With Javascript
JSNation 2023
21 min
Top Content
This Talk discusses building a voice-activated AI assistant using web APIs and JavaScript. It covers using the Web Speech API for speech recognition and the speech synthesis API for text to speech. The speaker demonstrates how to communicate with the Open AI API and handle the response. The Talk also explores enabling speech recognition and addressing the user. The speaker concludes by mentioning the possibility of creating a product out of the project and using Tauri for native desktop-like experiences.

Workshops on related topic

Using CodeMirror to Build a JavaScript Editor with Linting and AutoComplete
React Day Berlin 2022
86 min
Top Content
Workshop
Hussien Khayoon
Kahvi Patel
2 authors
Using a library might seem easy at first glance, but how do you choose the right library? How do you upgrade an existing one? And how do you wade through the documentation to find what you want?
In this workshop, we’ll discuss all these finer points while going through a general example of building a code editor using CodeMirror in React. All while sharing some of the nuances our team learned about using this library and some problems we encountered.
Building a Shopify App with React & Node
React Summit Remote Edition 2021
87 min
Top Content
Workshop
Jennifer Gray
Hanna Chen
2 authors
Shopify merchants have a diverse set of needs, and developers have a unique opportunity to meet those needs building apps. Building an app can be tough work but Shopify has created a set of tools and resources to help you build out a seamless app experience as quickly as possible. Get hands on experience building an embedded Shopify app using the Shopify App CLI, Polaris and Shopify App Bridge. We'll show you how to create an app that accesses information from a development store and can run in your local environment.
Build a chat room with Appwrite and React
JSNation 2022
41 min
Workshop
Wess Cope
API's/Backends are difficult and we need websockets. You will be using VS Code as your editor, Parcel.js, Chakra-ui, React, React Icons, and Appwrite. By the end of this workshop, you will have the knowledge to build a real-time app using Appwrite and zero API development. Follow along and you'll have an awesome chat app to show off!
Hard GraphQL Problems at Shopify
GraphQL Galaxy 2021
164 min
Workshop
Rebecca Friedman
Jonathan Baker
Alex Ackerman
Théo Ben Hassen
Greg MacWilliam
5 authors
At Shopify scale, we solve some pretty hard problems. In this workshop, five different speakers will outline some of the challenges we’ve faced, and how we’ve overcome them.

Table of contents:
1 - The infamous "N+1" problem: Jonathan Baker - Let's talk about what it is, why it is a problem, and how Shopify handles it at scale across several GraphQL APIs.
2 - Contextualizing GraphQL APIs: Alex Ackerman - How and why we decided to use directives. I’ll share what directives are, which directives are available out of the box, and how to create custom directives.
3 - Faster GraphQL queries for mobile clients: Theo Ben Hassen - As your mobile app grows, so will your GraphQL queries. In this talk, I will go over diverse strategies to make your queries faster and more effective.
4 - Building tomorrow’s product today: Greg MacWilliam - How Shopify adopts future features in today’s code.
5 - Managing large APIs effectively: Rebecca Friedman - We have thousands of developers at Shopify. Let’s take a look at how we’re ensuring the quality and consistency of our GraphQL APIs with so many contributors.
Build Modern Applications Using GraphQL and Javascript
Node Congress 2024
152 min
Workshop
Emanuel Scirlet
Miguel Henriques
2 authors
Come and learn how you can supercharge your modern and secure applications using GraphQL and Javascript. In this workshop we will build a GraphQL API and we will demonstrate the benefits of the query language for APIs and what use cases that are fit for it. Basic Javascript knowledge required.
0 To Auth In An Hour For Your JavaScript App
JSNation 2023
57 min
Workshop (Free)
Asaf Shen
Passwordless authentication may seem complex, but it is simple to add it to any app using the right tool.
We will enhance a full-stack JS application (Node.js backend + Vanilla JS frontend) to authenticate users with One Time Passwords (email) and OAuth, including:
- User authentication – Managing user interactions, returning session / refresh JWTs
- Session management and validation – Storing the session securely for subsequent client requests, validating / refreshing sessions
At the end of the workshop, we will also touch on another approach to code authentication using frontend Descope Flows (drag-and-drop workflows), while keeping only session validation in the backend. With this, we will also show how easy it is to enable biometrics and other passwordless authentication methods.