Out Of Order Streaming (The Secret Powering Modern React)


Ever wonder how React can "update" a page while it's still loading? We've gotten so lost in the discussion of things like Next.js, Suspense, Signals, Server Components, and Server Actions, that most React devs have totally missed one of React's most recent pieces of magic: Out Of Order Streaming.

This talk was presented at React Summit US 2024. Check out the latest edition of this React conference.

FAQ

Theo is a software developer who has been creating software development videos for about two years. He is the second-best TypeScript YouTuber.

Theo's talk is about streaming, in React and beyond, and the benefits of adding streaming to your applications.

Streaming solves the problem of waiting for all content to load by allowing partial content to be rendered immediately, improving perceived site speed and user experience.

Streaming HTML allows browsers to render incomplete HTML as long as there is a closing tag for the most recent element, enabling content to be sent in parts and rendered progressively.

React's suspense feature enables developers to wrap components with a fallback state, allowing for better handling of asynchronous operations and improved user experience by displaying loading states.

Debugging incomplete HTML in Next.js can be challenging, but developers are generally not required to do so, as the framework handles streaming efficiently. It is advised to focus on higher-level debugging tools.

Streaming can improve SEO by enabling faster response times and a better First Contentful Paint, which can positively impact search engine rankings.

Streaming requires server rendering on each request, but it can reduce the overall server load by consolidating multiple API requests into a single response, improving efficiency.

Next.js handles data fetching and component rendering through server-side rendering and streaming, allowing for partial pre-rendering and dynamic content delivery.

Streaming can improve the layout shift core metric (CLS) by allowing developers to control the order and size of content being loaded, reducing layout shifts and enhancing user experience.

Theo Browne
29 min
19 Nov, 2024

Video Summary and Transcription
I'm the second-best TypeScript YouTuber. I'm giving a talk on streaming in React and the benefits it brings to applications. Streaming allows for faster perceived loading times by sending partial HTML to the browser, rendering it, and waiting for the rest to complete. Server-side loading can cause delays, but this can be improved by caching HTML on a CDN. Streaming HTML in a different order has been a challenge, but there are JavaScript solutions. Optimizing HTML streaming with Suspense and dynamic I.O. can further improve loading times. Using suspense in client-side rendering and addressing challenges with the Next.js React model are interesting patterns. Caching can now be done at a more granular level, improving SEO and reducing load on the host server. Rendering on the server is not a big penalty compared to multiple API requests. Thank you all.

1. Introduction to Streaming in React

Short description:

I'm the second-best TypeScript YouTuber. I've been doing software dev videos for about two years. I'm giving my talk today about streaming in React and the benefits of adding streaming to your applications. I'll show you a normal React app and explain the problems with it. The main problem is the waiting time before fetching data. But there's a solution.

Had to make a quick correction; he didn't read the copy right. I'm the second-best TypeScript YouTuber. Just wanted to clarify. Not gonna steal valor from Matt Pocock. He's very good at what he does.

But for those who don't know me, I'm Theo. I've been doing software dev videos for about two years now. And who's watched one of my videos before? Yeah! Woo! Decent number of y'all. Good stuff. I don't know why.

So, I'm giving my talk today about streaming and not just in React, but as a whole why it's cool and the benefits of adding streaming to your applications. And I'm doing that, first and foremost, by showing you a normal React app written the way many of us have written it before. I just refresh the page, we see the loading, and we have our signed-in state. We've all hopefully written code like this before. Maybe not, actually. Let's see.

Who's written code like this before? Let's see hands. I can't actually see the hands because I literally just have a black box here, but I trust there's a decent number of them. If you've written code like this, move to React Query.

Anyways, this code works fine. The page loads, it has a JavaScript tag in it, the browser loads the JavaScript tag, the JavaScript parses, renders the component, realizes, oh, I need to go fetch this data. And it then goes and makes the API request, gets the data, sets it in state, and then re-renders with the new updated state. There are a couple problems here, though. Obviously, there are race conditions and whatnot. We're going to ignore those. The main problem is that I have to wait until the page has loaded the HTML, fetched the JavaScript, parsed the JavaScript, run the JavaScript, and rendered the component before we can finally start the fetch, when we could have been doing all of that work during that time. How, though? Because we need to have this content on the page. We can't just send some HTML and send the rest later. Or can we? There is this wonderful little demo that Guillermo tweeted a while back.
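For reference, here is a minimal sketch of the kind of component being described; the component name, endpoint, and User shape are hypothetical, not the code from the demo.

```tsx
import { useEffect, useState } from "react";

type User = { id: string; name: string };

export function SignedInState() {
  const [user, setUser] = useState<User | null>(null);
  const [loading, setLoading] = useState(true);

  useEffect(() => {
    // This fetch can only start after the HTML, the JS bundle, parsing,
    // and the first render have all finished.
    fetch("/api/me")
      .then((res) => res.json())
      .then((data: User) => setUser(data))
      .finally(() => setLoading(false));
  }, []);

  if (loading) return <p>Loading...</p>;
  return <p>Signed in as {user?.name}</p>;
}
```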

2. Exploring Partial HTML Rendering with Streams

Short description:

We have a stream that sends partial HTML to the browser, which can render it and wait for the rest to complete. This allows for faster perceived loading times. Let's explore this in more detail.

I was going to copy-paste the code, but we've all used screen capture of text before. It doesn't work, and I didn't feel like coding it out.

The important details here are that we have a stream; we kick off this streamData function with the writable side inside a waitUntil, but we immediately send a response that has the content type of HTML. Then, in this streamData function, we create a writer, immediately write a little bit of HTML, wait 1.5 seconds, then fetch this confetti URL and show you the rest after.
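Here is a rough reconstruction of the handler being described, assuming an edge-function style (request, waitUntil) signature; this is not the original demo code, and the confetti fetch is reduced to a plain delay.

```ts
export default async function handler(
  req: Request,
  ctx: { waitUntil: (p: Promise<unknown>) => void }
): Promise<Response> {
  const { readable, writable } = new TransformStream();

  // Keep writing in the background while the response is already on its way.
  ctx.waitUntil(streamData(writable));

  // Respond immediately with a streaming HTML body.
  return new Response(readable, {
    headers: { "Content-Type": "text/html; charset=utf-8" },
  });
}

async function streamData(writable: WritableStream) {
  const writer = writable.getWriter();
  const encoder = new TextEncoder();

  // Send a little bit of HTML right away; the browser can render it immediately.
  await writer.write(encoder.encode("<html><body><h1>Hello</h1>"));

  // Simulate the slow part (the demo fetched a confetti script here).
  await new Promise((resolve) => setTimeout(resolve, 1500));

  // Then finish the document.
  await writer.write(encoder.encode("<h1>World</h1></body></html>"));
  await writer.close();
}
```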

Now, what this looks like if I go load this page, it says hello, and then the world comes up. Where things get interesting is if we hop in the network tab and we do the same thing, you might have noticed here, it's going to be hard to see, so I actually copied the code, so we can go take a look in a second. The HTML comes in partially complete. It stops here, and then the rest of the HTML comes in after. Pretty cool. The way that this works is the browser is smart enough to render incomplete HTML, as long as it has a closing tag for the most recent element. You can just send incomplete HTML, the browser can render it, and you can finish the HTML later. That's great if you have your content all in order. We'll get to the out-of-order part in a minute.

3. Challenges with Server-Side Loading

Short description:

The modern server component version of this code waits for three seconds before returning a fake user. This delay can make the website appear slower compared to one that loads JavaScript upfront. Caching the HTML on a CDN can improve perceived navigation speed. Immediate response upon clicking a button is crucial.


So, here I have the modern server component version of this code. I get the user from the database directly. This code is very sophisticated, as we can see here. I wait for three seconds and then return a fake user. When I load this, if I close and reopen that page, I'm sitting here waiting. You see the little loading indicator on top? You have to wait for the whole three seconds before you see anything. What I found as a developer and as a user is that most people will perceive this website as way slower than a website that loads all the JavaScript up front because you can see something immediately. If you go to a webpage that has the HTML cached on a CDN so you can get that part instantaneously, it makes navigation feel faster, even if you're just seeing a loading spinner immediately afterwards. That's why we put up with single-page app quirks and behaviors for as long as we did because we would get that immediate response, wait for the JS to load, have it in our cache, and then everything else is fast enough. But the most important thing is you would click a button and immediately see the response, which if I close and reopen this, I'm not immediately seeing a response. This is taking some serious time.
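As a sketch, the "very sophisticated" server component in question might look something like this; getUser and the returned shape are stand-ins for the real database call.

```tsx
type User = { id: string; name: string };

async function getUser(): Promise<User> {
  await new Promise((resolve) => setTimeout(resolve, 3000)); // simulate a slow lookup
  return { id: "1", name: "Fake User" };
}

export default async function Page() {
  // The entire response is blocked here, so the user waits three seconds
  // before seeing anything at all.
  const user = await getUser();
  return <p>Signed in as {user.name}</p>;
}
```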

4. Solving the Server-Side Loading Issue

Short description:

To solve this problem, we can move the code out quickly, make it asynchronous, and wrap it with suspense to show a loading state immediately. The browser may be late in rendering the HTML, resulting in an incomplete tag.

So how do we solve this? Who knows the modern solution to this problem in the audience? Let's see hands. I still can't see hands. How about people shout out, what's the solution to this problem? What's the component I need to add to fix this? Correct. I need to move this part out really quick, which we can do easily.

Function, user, data. Grab that. Just make this async. Then return. Let's see if Cursor can carry me here. Cool, it can. And we swap this over. Error: user is not defined. Oh, because it was the old state. It's funny, you even get the same issues when you're debugging, because it took three seconds to leave the error-state page and get back to the actual working page. So I just thought my code was wrong, when in reality the browser was late.

And now all we have to do is wrap this with a suspense. Fallback is loading. Forgot to import it. And now we get that loading state immediately. And if I close and reopen, we see that. But we also see the loading indicator, because the browser hasn't finished sending the HTML yet. If I again open the network tab and refresh this, we hop over here. You'll see this is an incomplete HTML tag. I can prove that by adding a 0. Clear this. Reload. And now you'll see we have this incomplete HTML. It hasn't finished pushing.
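Put together, the refactor being described looks roughly like this: the slow await moves into its own async component, and the page wraps it in Suspense so the shell and a loading fallback can be sent immediately. Names and the three-second lookup are illustrative.

```tsx
import { Suspense } from "react";

type User = { id: string; name: string };

async function getUser(): Promise<User> {
  await new Promise((resolve) => setTimeout(resolve, 3000)); // the same slow lookup
  return { id: "1", name: "Fake User" };
}

async function UserData() {
  const user = await getUser();
  return <p>Signed in as {user.name}</p>;
}

export default function Page() {
  return (
    <main>
      <h1>My app</h1>
      {/* The shell above streams right away; this boundary resolves later. */}
      <Suspense fallback={<p>Loading...</p>}>
        <UserData />
      </Suspense>
    </main>
  );
}
```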

5. Streaming HTML in a Different Order

Short description:

Streaming HTML in a different order than it appears in the HTML and DOM has been a challenge for streaming nerds. However, there's some magic JavaScript that makes it work. Let's explore how it breaks by opening a version with additional unhidden components. We can also change the order of these components.

But in just a moment, it's going to push the rest of the HTML. I should probably have made it not 30 seconds. Let's make it 5. Gives me enough time to quickly maneuver my way through DevTools. This is obnoxious to demo, I've learned quickly, because you have to get into DevTools before the rest comes in, and then it resets you to the top so you don't see where you were scrolled. That's weird.

Why is this div hidden? Well, if your HTML isn't in order of fastest to slowest, it turns out streaming isn't particularly useful. Because if, let's say, instead of the sign-in state being below the static content, it was above it, we can't just send that part later, because it comes earlier in the document. HTML is a linear format. We could do some CSS hacks to swap things around. I have unironically seen code like that before, and it's terrifying. But what if we didn't have to? What if the order things were streamed to the browser didn't have to perfectly match the order they appear in the HTML, and eventually the DOM, the way the user would see it? This has been a challenge that we streaming nerds have been trying to find a good, reliable solution to for a while. And I've DIYed some stupid stuff before on Cloudflare. I hope none of you have had to do that. It was not fun.

But there's some magic JavaScript in here that makes this all work. And I could show you how it works, but I think it's more fun to show you how it breaks. So we're going to open up this version of the page that has a couple additional pieces, unhidden. So now we have these three slow components. Slow component one, then two, then three. Again, very complex code. I pass them a prop that is the timeout for how long they're supposed to take. Also, I should have mentioned this before: all this code is server code. None of the JavaScript you're seeing here gets shipped to the client. If I put 'use client' on top, it would just break entirely, because they're server components. But here we have the one, then the two, then the three. And I can also change the order of these.

6. Exploring HTML Streaming and the Magic Script

Short description:

I could do three, two, one. Let's break it. In the hidden version, we see the slow components. We explore the DevTools and the importance of a script tag. This magic function swaps out elements and performs code changes. The page reloads to showcase what is happening.

I could do three, two, one. And they'll pop in bottom, middle, top. So clearly this isn't coming through linearly. Let's break it. I could have written this code live, but it looks like this, so I'm not going to. We're going to hop to the hidden version quick. And now we see one, then two, then dollar sign slow component three.

Let's take a look at the DevTools. So if we hop in here, I refresh again. Clear and refresh once more. You'll see the HTML is coming in here. And we have all my divs, loading states. But we also have this important piece here, this script tag. I have spent far too much time thinking about this particular script tag and teaching people who work on other things, like Elixir, the importance of this script tag. Can I tell this that this is JavaScript, even if it doesn't want to believe it? Cool. This is a magic function that gets sent the first time Next.js streams additional HTML to the browser. It grabs an element by ID. It defines this $RC function, which takes an element by its ID. It removes the child. So this is removing itself, because it goes to the parent and then removes it. We find this element B. And we do a bunch of wonderful code to swap these out and change this element for this other element. And then we have, right here, $RC with B:0 and S:0.
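This is not React's actual minified helper, but a simplified paraphrase of what the transcript describes the $RC function doing: grab the hidden div that arrived later, find the placeholder left where the fallback was, and swap the real content in.

```ts
function $RC(placeholderId: string, contentId: string) {
  // The hidden div that was streamed in later, containing the real HTML.
  const content = document.getElementById(contentId)!;
  content.parentNode?.removeChild(content); // it removes itself from the DOM

  // The <template> marker React left where the Suspense fallback sits.
  const placeholder = document.getElementById(placeholderId)!;
  const parent = placeholder.parentNode!;

  // Move the streamed children into position, then drop the marker.
  // (React's real helper also removes the fallback content associated with
  // the marker; that bookkeeping is omitted here.)
  let node = content.firstChild;
  while (node) {
    parent.insertBefore(node, placeholder);
    node = content.firstChild;
  }
  placeholder.remove();
}

// Each later chunk just invokes it with a boundary/content ID pair:
$RC("B:0", "S:0");
```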

If I go back to this, I'll change this to be 5, 6. Sure. So it's a bit slower. Hide these guys. And change this to be 0. We'll reload here. And you'll see somewhat what's happening here.

7. Optimizing HTML Streaming with Suspense

Short description:

We wait the six-ish seconds and get the fake div replaced. React puts templates for suspense boundaries in the HTML. When a suspense boundary is resolved, the hidden div with a matching ID is sent. The RC script tag is sent to swap the elements. This optimization can benefit other frameworks like Phoenix LiveView. Once the first chunk of HTML is sent, the rest can be sent without the script tag. This behavior can be enabled in Next.js by installing Next Canary and turning on experimental features.

We wait the six-ish seconds and we get the fake div being replaced. Because what would have existed in the HTML before, which is actually relatively easy to show, is in this HTML, where that loading state is, we have a template with ID B:0. And that's how React knows where this element needs to be swapped in. So when you put in a suspense using server components, React will put in one of these templates for each of those points and identify them in the order that they were created. And then when React finishes resolving that suspense boundary and knows this data is done, it sends that HTML with an ID of S: plus some number to swap it.
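Roughly, the streamed document ends up looking like the sketch below; the markup and IDs are simplified and not Next.js's exact output.

```html
<!-- First chunk: the static shell, with a template marking the boundary
     and the Suspense fallback rendered in its place. -->
<div>
  <h1>My app</h1>
  <template id="B:0"></template>
  <p>Loading...</p>
</div>

<!-- Later, when the boundary resolves: a hidden div with the real HTML... -->
<div hidden id="S:0">
  <p>Signed in as Fake User</p>
</div>

<!-- ...followed by the $RC helper (sent only once), then the call that swaps
     the streamed content in where the fallback was. -->
<script>
  function $RC(boundaryId, contentId) { /* swap helper, sent with the first late chunk */ }
</script>
<script>$RC("B:0", "S:0")</script>
```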

The order that these things come in is: we get the HTML that doesn't have anything wrapping it in suspense. Once one of these suspense boundaries is resolved and ready, we send in this hidden div that has the ID to match to this other element and contains the actual HTML within it. And then, if we haven't sent this $RC script tag yet, we send that immediately afterwards with this function. And then it invokes itself to swap those elements. This allows us to send the HTML in any order and swap it at any point. And this doesn't just benefit React, it doesn't just benefit JavaScript. I've been helping the Phoenix LiveView team in the Elixir world implement this over there, so they can send their template and layout immediately, and then send the rest of the data after any database calls or authentication or other things are done, which will make your site feel way faster, especially if you can cache that first part on a CDN and then resolve the rest as you go. And if we hop back in here quick, and I turn these all back on, clear that, where things get really optimized and, in my opinion, really interesting, is once you've sent this first chunk with this first HTML, the rest can just be sent without that script tag, and it's just called like that. So for S:2, because that came in next, it sends the HTML, then it sends the script tag with this data, and then it sends, since we're in Next.js, the metadata necessary for Next to keep track of what's in the DOM where, handle hydration, all of those things. And we have the third one come in, and it does the exact same thing. I think this is super cool, and the patterns that it enables are phenomenal. I'm going to take a risk and go a step further, though, because I have more time left than I thought I would at this point. So we're going to do something really dumb. We're going to install Next Canary. Not that I'm not already on a canary, but we're not on the latest. And we're going to turn on experimental, and look at that. What we've just done is made it so this is now a default behavior, which allows for that wonderful caching behavior I was describing before.
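For reference, "turning on experimental" amounts to a config flag. The exact flag names have moved around across canary releases, so treat this next.config sketch as illustrative rather than authoritative; dynamicIO is the experimental flag discussed in the next section and requires a Next.js canary.

```ts
// next.config.ts
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  experimental: {
    // Opt into the experimental dynamic I/O model (canary-only at the time of the talk).
    dynamicIO: true,
  },
};

export default nextConfig;
```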

8. Introduction to Dynamic I.O.

Short description:

If we build this project with dynamic I.O., we get different output assets. The first byte must be resolved by the same server that sends the rest. Sending data ahead of time allows for faster loading of script tags, fonts, and CSS. However, if you're still blocked on a server, it needs to spin up and process the request before sending the rest.

So if we hop back, we have the page home. This should just work, hopefully, if all goes well. Awesome. What I just turned on is a really cool new feature called dynamic I.O. Previously, you would have had to tell React, or Next.js, or whatever set of tools you're using, where these boundaries exist by using a suspense tag. But now, I can delete the async here, because I don't really need that anymore. And we have the suspense. I can leave that in.

Now, if we were to build this project, we get very different output assets, because the biggest issue with streaming is that the first byte has to be resolved by the same server that sends the rest. So if I'm going to a web page, and it's being served with server components, I can't get something from a CDN immediately. I need to have a connection with that server so it can send down everything else. This is nice because you have that one source of truth that provides all of the content for the page. It allows you to send some data ahead of time. Even if your site is entirely dynamic, you can at least start loading the script tags, the fonts, the CSS, and all of those things earlier. But you're still blocked on a server. If you're using Lambda, something like that, it has to spin up, process the request, figure out what response to start sending, start generating the rest, and it sends it all down. But if I bun run build after turning on dynamic I.O., we get a nice little error.

9. Exploring Partial Pre-rendering

Short description:

This page is partially pre-rendered, allowing for faster loading of fonts, CSS, and other assets. The main flag used to identify what can be rendered this way is 'async'. By adding a suspense tag, everything needed to render the static part is ready to go. React has worked around the limitations of the protocol to improve the streaming experience for users.

Oh, because I still have a bunch of those force dynamic tags, which are no longer needed, which is a really cool change. Hopefully, that's enough. Or do I have one in hidden as well? I do. And, of course, things are no longer being used. Fifth time's the charm. There we are.

What we have done here that is super cool, you'll see that half and half circle. This page is partially pre-rendered. So you have pre-rendering, which is generating HTML based on what props you expect a page to have. You have dynamic rendering, which is the whole page is rendered dynamically on the fly. But now we have partial pre-rendering, where we take everything up until the first suspense tag, statically render that, and store it in a CDN. And now when the user goes to the page, that part can load immediately. They can start seeing something. It could be a loading state. It could be a blank white page. But at least they're already loading the fonts, the CSS, and all those other things, as your server spins up and starts generating the rest.

How does that work, though? There must be some crazy compiler hacks, right? I have a video coming out soon that I'm really proud of about the magic powering this feature and how it enables all of these things. But the TLDR is that the main flag they're using to identify what can or can't be rendered this way is async. We're already writing the code this way. We're already telling whatever tools we're using what is or isn't async, which means, theoretically, that there's some work that has to be done there. So now we know where this work starts, because an async function was called. And we even gave them an extra hint: we added a suspense tag. Now everything that's needed to render that static part is ready to go. We can render it, save it in a CDN. And then going forward, everything can be rendered on the server. It just eats that initial HTML and then streams down the rest with the magic little hot swapping we saw with that $RC function.
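In code, the static/dynamic split the talk describes falls out of exactly those two signals, roughly like the sketch below; the component names and data call are illustrative, not from the demo.

```tsx
import { Suspense } from "react";

// Hypothetical slow data call; anything async like this marks its subtree as
// dynamic work that has to run on the server per request.
async function fetchOrders(): Promise<{ id: string; title: string }[]> {
  return [{ id: "1", title: "First order" }];
}

async function Orders() {
  const orders = await fetchOrders();
  return (
    <ul>
      {orders.map((o) => (
        <li key={o.id}>{o.title}</li>
      ))}
    </ul>
  );
}

export default function Page() {
  // Everything outside the boundary needs no per-request work, so it can be
  // prerendered once and served from the CDN as the instant first chunk.
  return (
    <main>
      <h1>Your orders</h1>
      <Suspense fallback={<p>Loading orders...</p>}>
        <Orders />
      </Suspense>
    </main>
  );
}
```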

I hope this helps clarify both why the streaming stuff is so important, and how it actually works, and how React has worked around the limitations of the protocol. Because it turns out streaming HTML in order doesn't really get you a great experience for your users.

10. Enhancing Web Browsing with Suspense

Short description:

Taking advantage of the suspense pattern makes browsing the web a better experience. It prevents layout shift and allows for easier handling of slow components by wrapping them with suspense. This feature is particularly beneficial for cases like streaming SVG backgrounds and inline SVGs, providing a seamless and optimized experience. Debugging incomplete HTML in the Source tab of Chrome is unnecessary and not recommended, but if you encounter bugs related to HTML streaming in React, feel free to reach out.

We'll have a little bit of JavaScript moving a few elements around here and there. All of a sudden, this pattern becomes a better way to browse the web.

Thank you all. This seems like a killer feature for the layout shift core metric. Do you think that to be true? Absolutely. To be clear, previously, when things were streaming top to bottom, it would cause layout shift for sure. But now, it's relatively easy to take something that's slow, wrap it with a suspense, and give it a fallback that is the same size as the expected output element, and it just kind of works. You don't have to think about it anymore, which is really, really cool. And that doesn't just happen for weird examples.
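A small sketch of that trick, assuming a fixed-size skeleton; the sizes and component names are illustrative.

```tsx
import { Suspense } from "react";

function AvatarSkeleton() {
  // Reserve exactly the footprint the real card will occupy, so the swap
  // doesn't move anything else on the page.
  return <div style={{ width: 240, height: 64, borderRadius: 8, background: "#eee" }} />;
}

// Hypothetical slow server component, included so the sketch is self-contained.
async function UserCard() {
  const user = await new Promise<{ name: string }>((resolve) =>
    setTimeout(() => resolve({ name: "Ada" }), 2000)
  );
  return <div style={{ width: 240, height: 64 }}>Signed in as {user.name}</div>;
}

export function Header() {
  return (
    <header style={{ display: "flex", justifyContent: "space-between" }}>
      <h1>Dashboard</h1>
      <Suspense fallback={<AvatarSkeleton />}>
        <UserCard />
      </Suspense>
    </header>
  );
}
```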

In our product UploadThing, which is just better file uploads for full stack devs, we have little backgrounds for all the apps that you've created. And those backgrounds are SVGs, many of which are large. I have, I think, 800 of them in one file. It's like a megabyte and a half of TypeScript. So when you load the page that has that data in it, we have the container on the outside, the little loading spinner for the state in the page there. And we can stream down all of the apps that you have in your account with the SVG inlined in that as well. So we don't have to send you this thing with an image tag that has to go fetch an SVG and then load that in, maybe at the wrong size and all that. Instead, we just send that down as part of the streamed HTML. No pop-in, no weird behaviors. It all just kind of works. And I've been blown away with the new suspense patterns and how natural it feels.

Nice. As a newbie to SSR and Next, how do I debug this incomplete HTML in the Source tab of Chrome? You don't. Almost every Next.js dev that I know who is using these features doesn't know how it actually works under the hood. I only somewhat recently learned, because I was helping my friends from the Elixir world reverse engineer it to copy it for what they were doing. You shouldn't ever need to look at the Network tab like that. There's a reason it was so hard for me to do it. It doesn't matter. If you do find a bug related to the order in which HTML is being streamed through React, please DM me. I'm actually really curious.

11. Challenges with CDN and Rendering

Short description:

Sending content to the edge through a CDN while rendering near the database is challenging. Resolving static and dynamic parts near and away from the user requires careful traffic management. DIY approaches are possible but not recommended due to difficulty and complexity. Vercel and Netlify are leading the way in providing easier solutions for implementing these rendering patterns.

But it seems like a pretty foolproof model.

Next one. What's the story for the CDN not being Vercel? Great question. Turns out it's really hard to send content to somebody on the edge through something like a CDN while also rendering the rest of the content near your database. So if my user's here, my data's here, ideally my server is right near the data, because there will be multiple back and forths. But the CDN should be a lot closer to where the user is. So now we need to have a way to resolve the static parts near the user and resolve the dynamic parts away from the user, and also intermingle the traffic, because the server isn't going to know to only send the streamed parts. It's going to want to send the whole thing. You can DIY this on a platform like Cloudflare or Fastly by putting a JavaScript function there that will send the initial chunk from the CDN as soon as you make a request, trigger your lambda or whatever else to start generating the rest, eat everything up until that first hidden div, and then send the rest down that way. I've worked on DIYing that once before. It's not fun. I am excited for a future where it's easier to implement these things. But it turns out making good, powerful rendering patterns on the infrastructure side is hard. And there's a reason that Vercel is the thing that we all end up using. I know Netlify is working on a lot of these things too. It's fun to play with, but DIYing this isn't fun at the moment.
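As a very rough sketch of that DIY approach (and not a recommendation), an edge worker along these lines would flush a cached shell immediately and then pipe the origin's stream in behind it. The SHELL and ORIGIN bindings are hypothetical, and skipping the duplicated static prefix from the origin response is only noted in a comment, not implemented.

```ts
interface Env {
  SHELL: string;  // the pre-rendered static shell, cached at the edge
  ORIGIN: string; // the server (lambda or similar) that renders the dynamic parts
}
interface ExecutionContext {
  waitUntil(promise: Promise<unknown>): void;
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const { readable, writable } = new TransformStream();

    ctx.waitUntil(
      (async () => {
        // 1. Flush the cached shell immediately, before the origin even wakes up.
        const writer = writable.getWriter();
        await writer.write(new TextEncoder().encode(env.SHELL));
        writer.releaseLock();

        // 2. Trigger the origin render and pipe its stream in behind the shell.
        //    A real implementation would also drop everything up to the first
        //    streamed (hidden) chunk so the shell isn't sent twice.
        const origin = await fetch(env.ORIGIN + new URL(request.url).pathname, {
          headers: request.headers,
        });
        await origin.body!.pipeTo(writable);
      })()
    );

    // Respond right away; the body above is filled in as chunks become available.
    return new Response(readable, {
      headers: { "Content-Type": "text/html; charset=utf-8" },
    });
  },
};
```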

12. Using Suspense in Client-Side Rendering

Short description:

Using suspense for promise objects in client-side rendering is an interesting pattern. It allows for passing promises from the server to the client and streaming the result to the client. This can help improve the first time loading of a page by reducing dependency on JavaScript loading. However, understanding the component hierarchy and data fetching in Next.js remains a challenge.

All right, next one. What do you think about using suspense to handle promise objects in client-side rendering? I'll be honest. Beyond things like React Three Fiber, I have not used much suspense on the client side. I probably had three or four suspense wrappers in my apps before server components, and all of them were for lazy-loading giant components with a ton of JavaScript in them. It's a cool pattern. I've not gotten the use hook in React to behave, ever. It's not something I've explored as much as I would have liked to yet, but it seems fine.

Something else that I actually do think is really cool, and that I should have focused on in the demo, is one to think about for the future. You don't just have the ability to pass JSX down from the server. It's not just like, oh, I render some HTML in a suspense and it comes down later. You can also take a promise that is on the server, like the user promise that I had, and pass the promise itself to the client via a client component, just as a prop. If I have my getUserPromise on the server, I pass that to a client component. I can wrap that promise in use, and now I have the ability to get the result of the promise streamed to the client instead of just streaming the JSX, which is a really cool way to do things on the client while still fetching the data from the server and passing it down.
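A minimal sketch of that pattern, assuming React's use() hook and illustrative file and prop names:

```tsx
// page.tsx (server component): start the promise, but don't await it here.
import { Suspense } from "react";
import { UserGreeting } from "./user-greeting";

async function getUser(): Promise<{ name: string }> {
  await new Promise((resolve) => setTimeout(resolve, 2000)); // stand-in for a database call
  return { name: "Ada" };
}

export default function Page() {
  const userPromise = getUser();
  return (
    <Suspense fallback={<p>Loading...</p>}>
      <UserGreeting userPromise={userPromise} />
    </Suspense>
  );
}
```

```tsx
// user-greeting.tsx (client component): unwrap the streamed promise with use().
"use client";
import { use } from "react";

export function UserGreeting({ userPromise }: { userPromise: Promise<{ name: string }> }) {
  // Suspends until the promise handed down from the server resolves; the
  // resolved value is streamed to the client rather than pre-rendered JSX.
  const user = use(userPromise);
  return <p>Hello, {user.name}</p>;
}
```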

Will this help LCP or just perceived site speed? This will definitely help LCP. It's not a massive help, but the time it helps the most is the first time you've loaded the page, because you're no longer blocked on the JavaScript loading and then fetching the rest of the content. You can think of it as: previously we had the page load, which would trigger the HTML load, which would trigger the JavaScript loading, the CSS, and all of that; it would do all of its parsing, and then it would start fetching. We've taken that last part and just moved it earlier, because the fetch can start before the browser's finished parsing all of those things. So the amount of time it takes to get to that end state is better, but this used to come at the cost of your First Contentful Paint, because you would have to wait for the server to spin up and start sending a response. Now we can send a response immediately and start generating the rest of the page; everything has just shifted. We went from like this to this for our timelines, which allows for everything to be faster. Nice. Let's see. Let's find a good one here. All right, let's check this out. Is there an easy way to look at the component hierarchy and understand how and when Next.js fetches data and renders components? No. This has been a problem I've complained about for a while.

13. Challenges with Next.js React Model

Short description:

The pushback on the new Next.js React model is partly due to the lag in development tools. However, the improved capabilities simplify code and enable more complex implementations. Despite challenges in understanding data fetching and hierarchy, efforts are underway to enhance dev tools. Comparatively, the dev tools in the Nuxt and Vue worlds are enviable.

I honestly think a lot of the pushback with the new Next.js React model comes from the dev tools not catching up as fast as the capabilities have. And when you get capabilities like this, it makes your code way simpler, but as soon as you make tools that make things simpler, people can now make increasingly complex things, because the base primitives are better.

As a result, it was easy to get into a place where you didn't actually know what was being fetched, what wasn't, what data existed where, or the hierarchy of all of that in your app, and we could no longer just debug with dev tools in the browser like we're used to, because your dev tools in the browser don't tell you what the server is doing. The work is being done to fix a lot of this. I know the Next.js team is finally really committed to solving the dev tools problem, but it's not where I would like it to be right now. And when I look at things like the Nuxt and Vue world, I do feel a bit of envy about the quality of their dev tools.

14. Interaction with Cached Pages and SEO

Short description:

The interaction with heavily cached pages depends on where you put the cache. Putting the cache in front of the HTML response is inflexible and limits the page state for all users. With the new pattern, the initial response is faster, generic, and allows streaming of the rest. Caching can now be done at a more granular level. SEO has generally improved with faster responses and better time to LCP. If server rendering is the most time-consuming task, streaming components can be server rendered on each request, reducing load on the host server.

Yeah. OK. How does this interact with heavily cached pages? There's a lot of different places you can put caching. And it's always hard when people ask me about it, because I don't know if you're talking about a back-end service that's fetching data from a database that you're putting a cache in front of the responses to, are you trying to cache the functions that you're calling that get data in your back-end for front-end React server side application, or are you trying to cache the HTML that's on the front that users are getting on that end?

It depends on where you're putting the cache. I find that the pattern of putting the cache in front of the HTML response is very inflexible. And it requires the whole page to be static enough that every user can get the same HTML. That, for a lot of reasons, kind of sucks. It means you can only have one page state that every user gets. With this new pattern, technically, we're not cached in the traditional CDN sense, but we can get that first response way faster. That response will be generic, but it'll just be a loading spinner, whatever the state of the page is, and the rest can start being streamed down, and which parts of that you cache are up to you.

I find that we, as web devs, have been forced to think of caching as the headers we put on our HTML. Now we can think of caching on a much more granular level. Or not. We have the ability to choose. The same way React lets us pick MVC or no, caching is now broken up similarly. We shouldn't have to think of the cache as the thing that happens after the HTML loads. All right, next one. Does this affect SEO? I mean, these crawlers, I mean, are these crawlers patient enough to wait for the end state? Almost all of them are, yeah. And if they're not, they have enough data to do things. I found that the SEO on our apps, as we started doing this, has, for the most part, improved beyond some weird bugs with some early stuff that we were working on. The benefit is that you can send a response significantly faster and the time to, like, LCP is better too. If you're using this to stream in something that takes 20 seconds, I'm not sure how much Google's going to like that. But if you have a page that takes 20 seconds to load the content, SEO is the least of your problems. True. Very true. Last one here. Is there a way to get this benefit but reduce load on the host server? Sounds like streamed components need to be server rendered on each request. Yeah, and that's awesome. If the thing that takes the most time on your server right now is server rendering, you're either working on a side project or you've optimized the hell out of your data pipeline. Either way, awesome.

15. Rendering on the Server and Server Runtime

Short description:

Rendering on the server is not a big penalty compared to the multiple API requests a client has to make. Making multiple requests in a waterfall pattern will always cost more in terms of server runtime. Thank you all.

But realistically speaking, having to render on the server is not a big penalty when you think about all the things that a server has to do. Before we had a pattern like this, where you can get all this data back in one request, you'd have to have the client do multiple API requests. I have never seen a world in which a client is requesting data that has an auth wall where they're not just fetching it 15-plus times, which means every single one of those fetch requests has to be authed again. And if those 15 auth calls are somehow magically cheaper than a single render, I'd love to see the code. It's just not reality.

The reality that I live in as a dev working in apps of all sizes is that when you aren't able to send everything through one response, you now need to make multiple requests, and that waterfall of requests will always cost more in terms of server runtime than if you just sent the right thing to the user in the first place.

Amazing. Everybody, give it up for Theo Browne. Thank you all.
