Speeding Up Your Node Server With Rust


Node has served you well: you spun up a prototype and iterated quickly, keeping up with the evolving requirements of a successful product. Nonetheless, as time goes on, cracks are starting to show up: an endpoint is slower than it needs to be, a data processing job that took seconds now takes almost an hour, and your infrastructure bill is growing too fast compared to the size of your user base. Engineers are starting to whisper: is it time for a rewrite? Should we pause feature development to rebuild everything on more solid foundations? That's an option, but it's expensive.

There's another path: rather than throwing away your entire Node codebase to start over, you analyze your application and isolate the performance-critical bits—the so-called "hot modules" where your application spends most of its time. You will rewrite those in Rust and package them as a Node native extension. This talk shows you how.

This talk has been presented at JSNation 2025.

FAQ

The speaker is Marco Otte-Witte, the founder and managing director of Mainmatter.

The primary focus is on speeding up Node servers by using Rust to optimize performance-critical parts.

Node.js is not particularly efficient for CPU-bound tasks and is limited by being single-threaded, which can max out a CPU core easily.

Marco proposes using native modules written in Rust to offload performance-critical tasks, allowing the rest of the Node.js application to remain unchanged.

Rust is preferred over C because it prevents common pitfalls like manual memory management errors, offering safer and more reliable performance.

NAPI helps in writing native modules in Rust by handling project creation, compilation, and integrating JavaScript with Rust code.

Rust uses compile-time memory management with an ownership system, unlike JavaScript's garbage collection and C's manual memory management.

Rust offers improved performance, especially for CPU-bound tasks, and reduces memory usage without the risks associated with C.

Yes, Rust code can be compiled to WebAssembly, making it reusable in browsers or edge functions.

Node.js is considered easy to start with because it provides a short path from idea to prototype or MVP.

Marco Otte-Witte
21 min
16 Jun, 2025

Video Summary and Transcription
Talk on improving Node server efficiency with Rust. Node's ease of use but inefficiency for some tasks. Example of a slow express server handling high scores inefficiently. Native modules in Rust provide a safer alternative to C for improving efficiency. Rust empowers developers to build reliable and efficient software, with strong static typing and immutability by default. Rust's result and option enums handle error and value absence cases. The ownership system in Rust ensures safe memory management without manual intervention. The importance of ownership in Rust for memory management and error prevention. Writing native modules in Rust with NAPI for easier project creation and code integration. Exploring the efficiency gains of using Rust's native modules for faster performance and reduced risks in development.

1. Improving Node Server Efficiency

Short description:

Talk on improving Node server efficiency with Rust. Node's ease of use but inefficiency for some tasks. Example of a slow express server handling high scores inefficiently.

Hey, welcome to my talk, Speeding Up Your Node Server With Rust. My name is Marco Otte-Witte. I'm the founder and managing director of Mainmatter. We're an engineering consultancy. We do a bunch of things to help international teams solve their tech challenges. We do training, team reinforcement, tech stack modernization, and strategic advice. You can find more information on mainmatter.com.

Today, I want to talk about Node. Node is pretty ubiquitous, right, which among other reasons is because it's easy to get started with. And there's typically a short path from idea to prototype or MVP with Node. The problem, on the other hand, with Node is, for example, that it is not particularly efficient for some classes of tasks. And let's look at an example for such a task. Assume we have a huge JSON file with scores that might have been submitted by some kind of game. This data.json file here has 1 million such entries as you see on the slide.

Let's then assume we have a simple express server with an endpoint for getting the highest score. That endpoint reads in the JSON file and goes over all of the scores to find the highest one. Obviously, this isn't a really realistic example, and nobody would build a system like this in reality. But for the sake of the talk, we'll assume this is what we have and it needs to be like this somehow. So not surprisingly, this is slow. Running a simple load test with 10 connections to the server over 10 seconds, you see the response time for the 97.5th percentile is more than three seconds. And we managed to handle 49 requests total in 10 seconds.

And we can have a closer look at why this is so slow using Clinic.js, which shows in the top left chart that our performance is CPU bound, which means we're constantly using around 100% CPU, which is because we're constantly maxing out one of the CPU cores. So since Node is single threaded, it can't use more than one core. If we're maxing out one core, that is as fast as we can go, really. And also, we're using quite a bit of memory, as you see on the top right. And this sawtooth pattern you see there is typical for garbage collected languages. Lots of things are created in memory, and memory usage grows and grows and grows until GC cleans up all unused memory and usage drops dramatically pretty fast, after which it climbs again. And also, while GC runs, execution of the program is halted, which also doesn't really help with performance, obviously. And we'll ignore the other charts here. So it seems we're at a roadblock.

2. Safe Speed Boost with Rust

Short description:

Native modules in Rust provide a safer alternative to C for improving efficiency. Rust empowers developers to build reliable and efficient software, with strong static typing and immutability by default.

We're maxing out one CPU core, and there's nothing we can do, really, to make the code go faster without larger refactorings. So what can we do? Everybody's favorite answer is always the Big Bang. We throw away everything we have and start fresh with better architecture. Every engineer's dream. Obviously, there are some flaws to this approach, right? Like mainly the fact that it's expensive and risky, like every big project. So you or maybe your manager might think there has to be a better way, right? Like one that's not as wasteful and risky, but still solves the problem. And, of course, there is one, which is using native modules, right? Instead of the Big Bang, where you throw away everything and start over, you only replace, very precisely, the parts of the code that are slow and offload the work that they do to something that is more efficient than JavaScript and thus able to go faster. And that allows you to keep the rest of your application and the overall architecture, which might be completely fine, right, and also easily fast enough. And thus, it limits the investment and the risk of the overall undertaking. And native modules are no new thing, really. You've probably encountered them before, for example, in packages like bcrypt, where when you install those, you see these things mentioning node-gyp and CXX and all that stuff, and that's essentially the JavaScript package's native extension being built. These native extensions are typically written in C.

Now, the thing with C is that using it is a bit like juggling chainsaws, right, which is not something I say, but something Linus Torvalds says, and he must know, right? He must have written millions of lines of code in C. So C is obviously powerful, but it's also really dangerous, right? There are a lot of pitfalls, most prominently making mistakes in manual memory management, which can have horrible consequences both for security as well as stability. So you can really only use C when you're an expert or you have an expert team, which essentially nobody has in the areas where Node is typically being used. So that means that native modules are a way to speed things up in theory, but in reality, because they require C, they're kind of out of reach. The good news is that what I just said is no longer true, because Rust has come to save us from C and open up a new world of possibilities, right, since we can now write native modules in Rust and thus without exposing ourselves to C's chainsaws, right? So now, before we go into how writing native modules in Rust works, I want to give you a brief intro to Rust, right, to give you an impression of what the language is and what it does to prevent the dangers that exist in C. So Rust's tagline is "a language empowering everyone to build reliable and efficient software". And the word empowering really is central here, right? Like Rust empowers large parts of our industry to achieve performance and reliability at levels that were previously out of reach. So let's see how it does that, right? Starting with the basics, Rust is strongly, statically typed. You either type things explicitly or let Rust infer types where it can. Unlike in many other languages, variables are immutable by default, right? So trying to change a variable after it has been assigned a value, like in this example, results in a compiler error. And by the way, as you can see, Rust has pretty great error messages, right?
They're typically very well written and almost always suggest a way to fix the error that is actually going to work. So while variables are immutable by default, you can opt into mutability as well. This example compiles fine since xs is explicitly marked as mutable. Rust also has enums, which can hold data. In this case, the KeyPress variant of the Event enum holds the key code of the key that was pressed. And these enums are used for two types in the Rust standard library that are used extensively in Rust programs, and are thus worth mentioning here. One is the Result enum, which is used for operations that can go wrong. So Rust does not have exceptions that can be thrown at runtime.
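Since the slides aren't reproduced here, a minimal sketch of the points above might look like this (the variable and enum names are illustrative assumptions, not the talk's actual slide code):

```rust
// Enums in Rust can hold data: the KeyPress variant carries a key code.
#[allow(dead_code)]
#[derive(Debug, PartialEq)]
enum Event {
    KeyPress(u32),
    Resize { width: u32, height: u32 },
}

// `xs` is explicitly marked `mut`, so mutating it compiles fine.
fn append(mut xs: Vec<u32>, x: u32) -> Vec<u32> {
    xs.push(x);
    xs
}

fn main() {
    let x = 5;
    // x = 6; // error[E0384]: cannot assign twice to immutable variable `x`
    println!("immutable x: {}", x);

    let xs = append(vec![1, 2], 3);
    println!("mutable xs: {:?}", xs);

    let event = Event::KeyPress(32);
    if let Event::KeyPress(code) = event {
        println!("key code: {}", code);
    }
}
```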

3. Rust's Error Handling and Ownership

Short description:

Rust's result and option enums handle error and value absence cases. The ownership system in Rust ensures safe memory management without manual intervention.

So you use the Result enum to indicate if things went wrong at runtime. Result has two variants, Ok and Err, and is generic over the types of the values associated with these variants. In this example, we have a try function that returns a Result. We're using a u16 number for the Ok case and a custom error struct for the Err case. And when we're calling the function, we can use Rust's match statement to handle the two possible cases, right? The Ok case, where things went fine and we receive the value, as well as the Err case. And we have to handle both cases, right? The compiler will force us to at compile time.
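Sketched in code, that might look something like this (the function and error names are assumptions, standing in for the slide's example):

```rust
// A custom error struct for the Err case.
#[derive(Debug, PartialEq)]
struct TooBigError;

// An operation that can go wrong: Ok holds a u16, Err holds our error struct.
fn try_double(n: u16) -> Result<u16, TooBigError> {
    // checked_mul returns None on overflow; ok_or turns that into our error.
    n.checked_mul(2).ok_or(TooBigError)
}

fn main() {
    // The compiler forces us to handle both variants of the Result.
    match try_double(21) {
        Ok(value) => println!("ok: {}", value),
        Err(err) => println!("error: {:?}", err),
    }
}
```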

Similar to the Result enum is the Option enum, which is used to represent the presence or absence of a value. It has two variants, Some and None, and is generic over the type of the value that's associated with the Some variant. In this example, the get_value function returns an Option<u16>. And when we call the function, we can use a match statement again to handle the two cases. Either we get Some u16 or we get None. And again, we have to handle both cases. So this is very different from languages like JavaScript and others, where any reference could always implicitly be null, right? In JavaScript, it could even be null or undefined for some strange reason. And you can only know for sure at runtime whether the value you get is null or not, and you need to check. And if you forget to check, you get an error. The Option type makes that explicit, which allows the compiler to prevent these kinds of null pointer exceptions at compile time already.
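A small sketch of the Option handling described above (the slice lookup is an assumed stand-in for whatever the slide's get_value did):

```rust
// Returns Some(value) if the index exists, None otherwise -- there is no
// implicit null that could slip through unchecked.
fn get_value(values: &[u16], index: usize) -> Option<u16> {
    values.get(index).copied()
}

fn main() {
    let values = [10, 20, 30];
    // The compiler forces us to handle both Some and None.
    match get_value(&values, 1) {
        Some(value) => println!("some: {}", value),
        None => println!("none"),
    }
}
```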

Finally, one of Rust's main mechanisms to prevent these chainsaws is its ownership system. It's a bit of a complex topic, and everybody who comes to Rust fresh is scared of it. And I have to admit that's at least partly justified. So Rust does not use garbage collection for memory management like JavaScript. And it also doesn't require developers to manage memory themselves manually like C. Instead, it does compile-time memory management, essentially. And to enable that, Rust tracks ownership for every value. And the owner of a value is the variable that refers to that value. And when the scope the variable was defined in ends, there's nothing that refers to the value anymore, so nothing can access the value anymore. And Rust knows it's safe to release the value from memory. So at compile time, Rust can automatically add in all of the memory management for you, removing values from memory in the right places at the right times. And we won't go into a lot of detail, but here's a brief example.

4. Rust's Memory Management and Error Prevention

Short description:

The importance of ownership in Rust for memory management and error prevention.

The variable ferris holds a string as defined in the scope of the main function. So the ferris variable is the owner of that string value. We then pass the value into the uppercase function. By passing the value the ferris variable owns into the uppercase function, the name variable or the argument of the uppercase function becomes the new owner of that value. And Rust knows that at the end of the scope of the uppercase function, when the name variable goes out of scope, the value it points to can be freed because it can't be accessed anymore.

Yet in the main function, we're accessing the value via the ferris variable in the print line after the uppercase function has completed and thus the value has been released. And that can't work, and we get a compile error. In this case, a simple fix for the error would be, instead of passing the value and thus passing ownership, passing a read-only reference to the value, so that ownership stays with the ferris variable and the uppercase function simply borrows the value for some time. And this example here will compile fine.
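The working, borrowing variant of that example might look like this (a sketch of the slide's code, reconstructed from the description above):

```rust
// `uppercase` borrows the string via a read-only reference, so ownership
// stays with `ferris` and it remains usable afterwards.
fn uppercase(name: &str) -> String {
    name.to_uppercase()
}

fn main() {
    let ferris = String::from("ferris"); // `ferris` owns the String
    let loud = uppercase(&ferris); // borrow, don't move
    // Had `uppercase` taken `name: String` instead, ownership would have
    // moved into the function and the next line would fail with
    // error[E0382]: borrow of moved value: `ferris`.
    println!("{} -> {}", ferris, loud);
}
```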

As I said though, it's a complex topic. So just know that Rust doesn't do garbage collection and also doesn't require developers to manage memory manually. Instead, it has that ownership system, which allows it to do compile-time memory management for you. So all of these things combined, the ownership system that makes sure there can't be the typical memory errors like use-after-free and similar as they can occur in C, immutability by default, the Result type that enforces proper error handling, the Option type that enforces handling of cases where values are absent, all of these things exist to save you from the perils of a language like C. If a Rust program compiles, it's going to run very efficiently and reliably.

5. Writing Native Modules in Rust with NAPI

Short description:

Writing native modules in Rust with NAPI for easier project creation and code integration.

So that means that C's chainsaws are now plush toys, essentially. Or at least this is what ChatGPT thinks the previous video looks like with the chainsaws replaced with plush toys. So that means that writing native modules is no longer an undertaking that's very likely to lead to teams cutting off their hands with a chainsaw, but something they can actually dare to do. So now let's do this. Let's actually write a native module in Rust.

Of course, we don't have to start from first principles, and we can rely on tools to make things easier, in this case, NAPI. NAPI is a package that helps with writing native modules in Rust. It handles project creation, compilation, as well as gluing the JavaScript and Rust code together in the end. When you create a package with NAPI's CLI tool, you get a working directory that looks something like this. There's a package.json and a main.js file, as usual in a package, as well as a bunch of Rust stuff, like Cargo.toml and Cargo.lock, which are the equivalents of package.json and the lock file of your package manager of choice.

A build.rs file that handles some specific things around building the Rust code, and a source directory with a lib.rs file, which is where your Rust code goes. So now let's write a Rust implementation of the code that finds the highest score in our JSON file from before. At the top, we import some things we need. The Deserialize trait from the Serde dependency is needed to deserialize the contents of data.json into a Rust struct. And we need the fs module from the Rust standard library to be able to read the file's contents. And we have some NAPI-specific code that we need to make our Rust function available to JavaScript.

6. Efficiency Gains with Rust's Native Modules

Short description:

Exploring the efficiency gains of using Rust's native modules for faster performance and reduced risks in development.

Not going into detail there because it's complex. And next, there's the Item struct, which represents an entry in the JSON file, right? Its score attribute matches the field in the JSON, and we ignore the id and name attributes from the JSON since we don't need them to calculate the high score. And then, finally, we have the high score function itself, which reads in the file's contents, deserializes a vector of Item structs from the JSON, and finally iterates over all of the elements in the vector to find the maximum score. I won't go into more details here. I just want to mention one thing. This code uses unwrap in a number of places, which is actually bad, and I'm just doing it here to keep the code simple. So you see, the read_to_string and from_str functions both return Results, right, meaning they can fail, and max_by_key returns an Option since the list could be empty, and we should really handle both cases, as I showed before. Calling unwrap simply means fail at runtime in case of an error, which is something nobody should be doing in production code, so shame on me for showing you this, but because this is an example and we want to keep the code simple, we're doing it, but just remember, don't actually do this, right?
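The core of that function can be sketched in plain Rust like this. Note this is a dependency-free sketch: the real lib.rs also reads the file with fs::read_to_string, deserializes with Serde, and exports the function via NAPI's attribute macro, all of which are elided here, and the score field's type is an assumption:

```rust
// Stand-in for the struct deserialized from data.json; in the real module
// it would derive Serde's Deserialize, and id/name would simply be omitted.
struct Item {
    score: u32,
}

// Finds the maximum score, mirroring the talk's use of max_by_key.
// unwrap() panics on an empty slice -- kept only to match the demo's
// simplified error handling, not something for production code.
fn high_score(items: &[Item]) -> u32 {
    items.iter().max_by_key(|item| item.score).unwrap().score
}

fn main() {
    let items = vec![
        Item { score: 10 },
        Item { score: 97 },
        Item { score: 54 },
    ];
    println!("high score: {}", high_score(&items));
}
```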

Anyway, having this Rust code, we can build it, right? We don't need to worry about the details here. NAPI configures it all for you, and once it's built, we can use it easily from our express server in JavaScript, right? So here we import in line two the Rust implementation, and then in the high-score endpoint, we use the Rust function to calculate the high score instead of the previous JavaScript implementation, so the changes in the JavaScript code are pretty minimal, and we can use the Rust function completely transparently and keep the overall architecture as it is, right? The effect of this, though, is a substantially faster server, right? The response time for the 97.5th percentile is now about 1.7 seconds, right, almost twice as fast as before, and instead of 49 requests total, we now handle 122, right, more than twice as many. Looking into things a bit deeper, we're still maxing out the CPU, of course, right, since this is a CPU-bound task, and that's always going to be the case, but we're using the CPU more efficiently, right? So we are maxing it out for a shorter period of time, or for the same amount of time for the 10-second benchmark, but we are able to handle more requests at the same time.

It's also important to note we're still using just one core here, right? Since Rust is multithreaded, we could relatively easily change the code to take advantage of more cores, increasing the performance gain even more, and by quite a margin. And we're also using much less memory, right, and memory usage is stable since there's no garbage collection, right? You'll see it's essentially a flat line most of the time. So while this is a pretty unrealistic example, you'll see there's obviously quite a substantial difference in performance between Node and Rust, and it was relatively easy to unlock that performance gain, right? And also, there's minimal risk of the problems we would run into with C, since Rust protects us from those, as I showed before. So let's have a brief look at how this works under the hood. In package.json, we have a new build script, right, that NAPI generated, which we used before to compile the code. So that compiles the Rust code into the binary native module. In this case, that's jsnation-demo-darwin-arm64.node. It also generates an index.js file via which the native module can be used from JavaScript, right? Remember, in the previous example that I showed, we imported the Rust function from index.js, not from that .node file. And the index.js file simply loads the right binary for the current platform, right? On my Mac, the platform is ARM64, so this would load the jsnation-demo-darwin-arm64.node file we saw before. So if you're setting out now to write native modules in Rust, there are some things to consider. While native modules are great for making slow things faster, obviously not everything that is slow will benefit. I/O bound tasks like reading from a database, for example, will not benefit, since it isn't your code that's slow, but the external resource, right? So making your code faster is not going to have much of an impact. So analyze carefully before you set out to write native modules.
Typical scenarios for native modules are CPU bound workloads, obviously, like in the example I shared. There's also a bit of an overhead when context switching between Rust and JavaScript, and in particular, moving data structures across the boundary can be expensive, right? Because data will have to be transcoded from Rust representations to JavaScript representations and the other way around every time, and that can be quite costly and result in some things even being slower than a pure JavaScript implementation. And last, Rust is also a great language to compile to WebAssembly. Its constraints make it a great fit for WebAssembly, and the WebAssembly tooling in Rust is pretty solid as well, right? So that means the Rust code you write for native modules will, in theory, also be reusable in the browser or in edge functions as WebAssembly. And that's just something worth keeping in mind when starting to introduce Rust into your code base. So I hope you all become fans of Rust now, and your first native module in Rust might just be the first step on a longer journey towards more and more Rust, right? You might write more and more native modules in Rust, maybe reuse the code in WebAssembly, and eventually deploy your first Rust microservice, and maybe eventually your journey with Rust takes you to EuroRust, Europe's leading Rust conference, which we happen to organize. This year, we'll be in Paris, October 9 and 10, and I hope to see many of you there. With that, I end. Thanks a lot for having me. I hope you all got excited about Rust. Bye.
