Not going into detail there because it's complex. Next, there's the Item struct, which represents an entry in the JSON file, right? Its score attribute matches the corresponding field in the JSON, and we ignore the id and name attributes from the JSON since we don't need them to calculate the high score. And then, finally, we have the high score function itself, which reads in the file's contents, deserializes a vector of Item structs from the JSON, and finally iterates over all of the elements in the vector and finds the maximum score. I won't go into more details here; I just want to mention one thing. This code uses unwrap in a number of places, which is actually bad, and I'm only doing it here to keep the code simple. You see, read_to_string and from_str return Results, and max_by_key returns an Option, meaning they can all fail, and we should really handle both the Ok and the error cases, as I showed before. Calling unwrap on those values simply means fail at runtime in case of an error, which is something nobody should be doing in production code, so shame on me for showing you this. But because this is an example and we want to keep the code simple, we're doing it. Just remember: don't actually do this, right?
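A minimal sketch of the function described above (the names Item, high_score, and score follow the talk's description; the real code would derive serde::Deserialize and build the vector via read_to_string plus serde_json::from_str, but here the vector is constructed directly so the example runs without external crates):

```rust
// Entry in the JSON file. The real struct derives serde::Deserialize;
// the id and name fields are omitted because only score is needed,
// and serde ignores unknown JSON fields by default.
struct Item {
    score: u32,
}

// Finds the maximum score. unwrap() panics on an empty input,
// which is fine for a demo but not for production code.
fn high_score(items: &[Item]) -> u32 {
    items.iter().max_by_key(|item| item.score).unwrap().score
}

fn main() {
    // In the real module, this Vec would come from deserializing the JSON file.
    let items = vec![Item { score: 10 }, Item { score: 42 }, Item { score: 7 }];
    println!("{}", high_score(&items)); // prints 42
}
```

In proper production code, high_score would return an Option or Result instead of unwrapping, so the caller can handle the empty case.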
Anyway, having this Rust code, we can build it, right? We don't need to worry about the details here, napi sets it all up for us, and once it's built, we can use it easily from our Express server in JavaScript. So here we import the Rust implementation in line two, and then in the high-score endpoint, we use the Rust function to calculate the high score instead of the previous JavaScript implementation. So the changes in the JavaScript code are pretty minimal, and we can use the Rust function completely transparently and keep the overall architecture as it is, right? The effect of this, though, is a substantially faster server: the response time for the 97.5th percentile is now about 1.7 seconds, almost twice as fast as before, and instead of 49 requests total, we now handle 122, more than twice as many. Looking into things a bit deeper, we're still maxing out the CPU, of course, since this is a CPU-bound task, and that's always going to be the case, but we're using the CPU more efficiently. So we max it out either for a shorter period of time, or, in the case of the 10-second benchmark, for the same amount of time, but we're able to handle more requests in that time.
It's also important to note we're still using just one core here, right? Since Rust is multithreaded, we could relatively easily change the code to take advantage of more cores, increasing the performance gain even further, and by quite a margin. We're also using much less memory, and memory usage is stable since there's no garbage collection; you'll see it's essentially a flat line most of the time. So while this is a pretty unrealistic example, there's obviously quite a substantial difference in performance between Node and Rust, and it was relatively easy to unlock that performance gain, right? And there's minimal risk of the problems we would run into with C, since Rust protects us from those, as I showed before.

So let's have a brief look at how this works under the hood. In package.json, we have a new build script that napi generated, which we used before to compile the code. It compiles the Rust code into a binary native module, in this case jsnation-demo-darwin-arm64.node. It also generates an index.js file via which the native module can be used from JavaScript. Remember, in the previous example I showed, we imported the Rust function from index.js, not from that .node file. The index.js file simply loads the right binary for the current platform. On my Mac, the platform is darwin-arm64, so it would load the jsnation-demo-darwin-arm64.node file we saw before.

So if you're setting out now to write native modules in Rust, there are some things to consider. While native modules are great for making slow things faster, obviously not everything that is slow will benefit. I/O-bound tasks like reading from a database, for example, will not benefit, since it isn't your code that's slow but the external resource, so making your code faster is not going to have much of an impact. So analyze carefully before you set out to write native modules.
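To illustrate the multi-core point from earlier: a minimal sketch of spreading the maximum-score computation across threads using only std::thread scoped threads (the function name and data are made up for the example; a real implementation might reach for a crate like rayon instead):

```rust
use std::thread;

// Sketch: split the scores across threads, take the max of each chunk,
// then the max of the per-thread results.
fn parallel_max(scores: &[u32], n_threads: usize) -> Option<u32> {
    if scores.is_empty() {
        return None;
    }
    // Round up so every element lands in some chunk.
    let chunk_size = (scores.len() + n_threads - 1) / n_threads;
    thread::scope(|s| {
        let handles: Vec<_> = scores
            .chunks(chunk_size)
            .map(|chunk| s.spawn(move || chunk.iter().copied().max()))
            .collect();
        // Combine the per-thread maxima; join() panics only if a thread panicked.
        handles.into_iter().filter_map(|h| h.join().unwrap()).max()
    })
}

fn main() {
    let scores: Vec<u32> = (0u32..10_000).map(|i| (i * 37) % 9973).collect();
    println!("{:?}", parallel_max(&scores, 4));
}
```

Scoped threads (stable since Rust 1.63) let the worker threads borrow the slice directly, so no data is copied across threads; the borrow checker guarantees the threads finish before the slice goes away.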
Typical scenarios for native modules are CPU-bound workloads, like in the example I shared. There's also a bit of overhead when context switching between Rust and JavaScript, and in particular, moving data structures across the boundary can be expensive, because data has to be transcoded from Rust representations to JavaScript representations and back every time. That can be quite costly and can even result in some things being slower than a pure JavaScript implementation.

And last, Rust is also a great language to compile to WebAssembly. Its constraints make it a great fit for WebAssembly, and the WebAssembly tooling in Rust is pretty solid as well. That means the Rust code you write for native modules can, in theory, also be reused in the browser or in edge functions as WebAssembly, and that's just something worth keeping in mind when starting to introduce Rust into your code base.

So I hope you've all become fans of Rust now, and your first native module in Rust might just be the first step on a longer journey towards more and more Rust. You might write more and more native modules in Rust, maybe reuse the code in WebAssembly, and eventually deploy your first Rust microservice. And maybe eventually your journey with Rust takes you to EuroRust, Europe's leading Rust conference, which we happen to organize. This year, we'll be in Paris, October 9 and 10, and I hope to see many of you there. With that, I end. Thanks a lot for having me. I hope you all got excited about Rust. Bye.