Let me show you how they work, because you can write your own too. Before we go over to the build targets: you're talking about how fast it is, so how does that impact SEO? Is there some sort of relationship there?

Yes, there is a relationship. SEO is a bit magical, so lots of people will draw the lines and make the connections in different ways. But my rule of thumb is that search engines are trying to connect users with good experiences. They want users to get what they're looking for, so that you think, "I've had a success today; I found what I was looking for." That's why search engines tend to deprioritize sites that are slow to load: slowness tends to mean disengagement. A search engine measures engagement by how long you stay on a page, whether you stay at all, whether you interact, whether you navigate further. And as a user, if you're loading a page that takes a long time, you're much more likely to close that tab and move on to the next one. All of that data is going to drive search engines, whether or not they explicitly say so, to deprioritize slow sites. In general I think that's fair, although if you have a slow site that's better targeted to the user's need, that's a countervailing factor. So in general, you want a site that's as fast as you can possibly make it.

In terms of the key metric, it's time to first byte. There are other metrics that really matter too, such as time to interactive: after everything does get to your browser, how long does it take to be fully responsive to you as a user? So if you have loads of client-side libraries, that's going to make a difference too.

As for Nitro, it's optimized to work really well in conditions like serverless or edge rendering, because it only loads what's required. So I'm just going to edit the code of my app here. We're going to wrap the server startup in console.time and console.timeEnd, because I want to see how long it takes to actually just load the server. Our cold start, then, is five milliseconds. That's how long it took to start the server; it's now responding to requests.

If you want to see how long it takes to respond to a request, we can just look at the headers, because I've enabled timing middleware here. We have a Server-Timing header, which you can inspect and view more graphically in, say, Chrome or Firefox, and it tells you exactly how long the request took. It's basically four milliseconds to respond to that request.

Now obviously, if you start making API requests, that's going to be where your main time is spent, because connecting to an external database or something like that is obviously going to take time. And particularly if you waterfall it, where you make one request, wait for the response, and then make another request, that's going to delay things for you. But out of the box we're talking a five-millisecond start and a four-millisecond response to a request. And that was not an API route; that was rendering a Vue app. It wasn't a very complicated one, but if I had hit just the health check, for example, that's the same four milliseconds, so maybe we just can't get a lot faster than that. Anyway, it's really fast.
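As a sketch of the cold-start measurement described above (assuming Nitro's default build output path, `.output/server/index.mjs`; the label is arbitrary), timing the load of the server bundle looks something like this:

```ts
// measure how long the built Nitro server takes to load and initialize
console.time('cold start')
await import('./.output/server/index.mjs') // default Nitro build output
console.timeEnd('cold start')
// in the demo above this printed roughly: cold start: 5ms
```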
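And to read the Server-Timing header mentioned above without opening devtools, a quick sketch (assuming the server is running on localhost:3000; the exact metric names depend on the timing middleware):

```ts
// fetch any route and print its Server-Timing header
const res = await fetch('http://localhost:3000/')
console.log(res.headers.get('server-timing'))
// prints something like "total;dur=4", i.e. about four milliseconds
```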
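The waterfall point in code, as a sketch with two hypothetical endpoints: sequential awaits add the round trips together, while firing independent requests at once costs only the slower of the two.

```ts
// waterfalled: the second request only starts after the first resolves,
// so total latency is the sum of both round trips
const user = await fetch('/api/user')
const posts = await fetch('/api/posts')

// parallel: both requests are in flight at once, so total latency is
// roughly the slower of the two round trips
const [user2, posts2] = await Promise.all([
  fetch('/api/user'),
  fetch('/api/posts'),
])
```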
It's aimed to work really well in edge rendering and in serverless functions, where you have a high premium on startup time and cold starts, which means you can actually use it in more places. For example, Nuxt 2 starting without Nitro takes over a second to start, which might not seem like much, but that's over a hundred times more than Nitro, and that makes a big difference when the server has to initialize on every cold start. And obviously, on top of that, it takes even longer to return the response. It makes a huge difference. Normally people wouldn't even think of having a server-side rendered app in that situation; you might just have a static one rather than render it on a serverless function. Whereas Nitro means you can revisit that assumption and think about serving more dynamic or more personalized results relevant to the user on a serverless function, where before, timing might have ruled that out.

I see there's a question about static mode. I can tell you a little bit about that, but you can't try it out now; I couldn't demo it today because at the moment we just build a server.

I should say, you might still need a warmer to keep the function warm. It depends on your platform, because platforms have their own overhead in terms of starting up. I completely agree with what Toby has just said, by the way: it is only one part of the picture. In terms of a Lambda, the Nitro side is really fast, but you also have to consider that if the function is in cold storage, it has to be downloaded to the server, unzipped, and then run. So there's all of that overhead too; it's not instantaneous. Cloudflare Workers does claim a zero-millisecond cold start because it uses V8 isolates. So, anyway, you should try it out and see how it works in your particular environment: what the trade-offs are and what the responsiveness is.

But one benefit you do get with Nitro is that we have one function. Rather than a different function for every route, we have a single function which can render any route. I think that's basically the sweet spot, because it means you're much less likely to hit a cold start. For example, if you have a health check, that's going to warm up only the health-check serverless function. If you have an API hit, it's going to warm up only that function. And if you have another one for your main site, it's going to warm up only the Vue app. Whereas if you render them all with a single entry point which dynamically loads what it needs, you don't have the disadvantage of an enormous monolithic entry point, but you do have the benefit of a function which is more likely to be warm. I mean, if you do get a hundred simultaneous requests, you're still going to need to warm up a hundred simultaneous functions, so it's not like it can handle everything. But basically, mathematically, if you've got five functions versus one function, you're something like five times less likely to need to warm up a function.

I also have one question, about incremental static generation and its current status, because I know this is both a platform thing and a framework thing: for example, we need to host something like four million pages, and basically that's too much. Yeah, and this also gets to the question about static site generation. The reason we haven't rolled out Nuxt 3 with static site generation out of the box is that we're abstracting it into something we're calling the caching layer.
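To make the single-entry-point idea concrete, here's a minimal sketch, not Nitro's actual internals: one handler receives every route and lazily imports only the code that route needs, so a health check never pays for loading the renderer. The `./api.js` and `./renderer.js` modules are hypothetical stand-ins.

```ts
import { createServer } from 'node:http'
import type { IncomingMessage, ServerResponse } from 'node:http'

// one entry point for every route; each branch dynamically imports
// its code on first use
createServer(async (req: IncomingMessage, res: ServerResponse) => {
  if (req.url === '/health') {
    res.end('ok') // no extra code loaded at all
  } else if (req.url?.startsWith('/api/')) {
    const { handleApi } = await import('./api.js') // hypothetical module
    await handleApi(req, res)
  } else {
    const { renderApp } = await import('./renderer.js') // hypothetical module
    res.end(await renderApp(req.url ?? '/'))
  }
}).listen(3000)
```

Any request keeps this one function warm for all routes, while the dynamic imports keep the startup cost of the entry point itself small.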
So the concept is that for every route, or every possible response to a request, it's possible to have a different strategy.
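For context on where this landed: the per-route strategy idea described here later shipped in Nuxt 3 as routeRules. A sketch of the concept using today's option names (the route patterns are illustrative):

```ts
// nuxt.config.ts: a different caching strategy per route pattern
export default defineNuxtConfig({
  routeRules: {
    '/': { prerender: true },    // rendered once at build time
    '/blog/**': { swr: 3600 },   // cached, revalidated in the background hourly
    '/admin/**': { ssr: false }, // rendered only on the client
  },
})
```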