And notice that if I run the code again, for warm starts, it's also very fast. We have 45 milliseconds here, 16, 13, 14, 9, etc. So there's also no sacrifice in warm performance, and in fact, warm duration can be up to two times lower than the Node.js equivalent, mainly thanks to the lack of a just-in-time compiler and a simpler, less complex engine. Also notice that I didn't change a single line of code. All I did was change the runtime settings here, and I prepared this demo by putting the LLRT bootstrap binary here. So I simply downloaded LLRT, renamed the binary to bootstrap, and put it together with my sample code here.
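For context, the unchanged function code might look like the sketch below (a hypothetical handler, not the exact demo code). Switching between Node.js and LLRT is purely a configuration change: keep the same handler file, but deploy on a custom runtime (such as provided.al2023) with the LLRT executable, renamed to bootstrap, included in the deployment package.

```javascript
// index.mjs – a handler that runs unchanged on both Node.js and LLRT.
// On Node.js: use a managed runtime (e.g. nodejs20.x).
// On LLRT: use a custom runtime (e.g. provided.al2023) and ship the
// LLRT executable, renamed to "bootstrap", next to this file.
export const handler = async (event) => {
  return {
    statusCode: 200,
    body: JSON.stringify({ message: "Hello from Node.js or LLRT", event }),
  };
};
```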
Okay, let's get back to the presentation. So what are good use cases for LLRT? Good use cases are latency-critical applications, high-volume functions, data transformation, integration with different AWS services, and even server-side rendered React applications can be executed with LLRT. And also applications consisting of a lot of glue code. What I mean by this is applications that integrate with third-party sources or other AWS services, acting as the glue between one service and another. When it's not good to use LLRT is when you're doing simulations, performing tasks with hundreds of thousands of iterations in loops, doing some sort of Monte Carlo operations, or transferring large objects or large sets of data in the tens or even hundreds of megabytes. This is where the just-in-time compiler really shines, and that's a feature that is not available in LLRT.
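To make the glue-code case concrete, here is a minimal sketch of such a handler, assuming a hypothetical orders table and key shape. LLRT bundles a set of AWS SDK v3 clients into the runtime itself, so a client like @aws-sdk/client-dynamodb doesn't need to be packaged with the function code.

```javascript
// A typical "glue" handler: fetch an item from DynamoDB and return it.
// The SDK client below is bundled into the LLRT runtime binary.
import { DynamoDBClient, GetItemCommand } from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({});

export const handler = async (event) => {
  const result = await client.send(
    new GetItemCommand({
      TableName: "orders",           // hypothetical table name
      Key: { id: { S: event.id } },  // hypothetical key shape
    })
  );
  return result.Item ?? null;
};
```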
But what is best right now is to measure and see, and I'm pretty confident that a lot of your use cases would benefit from running LLRT. And again, how can it be so fast? It has no JIT, and the AWS SDK is optimized specifically for Lambda. This means that we have removed some of the complexities that come with the SDK: we cache object creation, we convert the SDK to QuickJS bytecode, and we leverage some other techniques that optimize for cold starts on Lambda. For instance, we do as much work as possible during initialization, because Lambda runtimes get a CPU boost while they're being initialized.

We also write most of our code in Rust. In fact, we have a policy that says as much as possible should be written in Rust, because the more code we can move from JavaScript to Rust, the bigger the performance benefit. In contrast, almost all of Node.js's APIs are written in JavaScript, and they depend heavily on the just-in-time compiler of the V8 engine to achieve great performance. Since we're lacking this capability, writing most of the code in Rust gives us an instant performance benefit, without having to rely on a JIT profiler to optimize the code over longer-running tasks, while still keeping the size down. Basically, everything that you're using in LLRT is written in Rust: the console, the timers, crypto, hashing, all of that is written in Rust. There's just a small JavaScript layer on top of that, and of course your own code will be running as JavaScript as well.

And it's also, again, very lightweight. It's only a few megabytes, and we try to keep it as lightweight as possible, minimizing dependencies but also minimizing complexity. So what's the catch? This is a very high-level compatibility matrix.
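The same init-phase CPU boost mentioned above can be exploited in your own function code. Here is a minimal sketch of that pattern, assuming a hypothetical CONFIG environment variable and bucket name: expensive setup at module scope runs during the boosted initialization phase, while the handler stays thin.

```javascript
// Work at module scope runs during the init phase, where Lambda grants
// a CPU boost — so client creation and config parsing belong here.
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({});                            // created once, during boosted init
const config = JSON.parse(process.env.CONFIG ?? "{}");  // parsed once, not on every invoke

export const handler = async (event) => {
  // The handler itself only does per-request work.
  const res = await s3.send(
    new GetObjectCommand({ Bucket: config.bucket, Key: event.key }) // hypothetical names
  );
  return { contentType: res.ContentType };
};
```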