Turning It Up to Eleven

More often than not, performance starts with how data is loaded in your app. Which data has dependencies? Which data is critical to the page? Which data is variable? Static? Personalized? Available offline? In this talk, we'll use Remix to explore how different data loading strategies can improve your user’s experience.

This talk was presented at React Advanced 2024. Check out the latest edition of this React conference.

FAQ

Hydrogen is a framework on top of Remix that adds eCommerce-specific features to help quickly build an eCommerce site on Remix.

'Turning It Up to 11' is a metaphor for pushing performance and optimization in web development to the maximum, inspired by a humorous moment from an old movie.

An app is usually slow due to inefficient data loading rather than the framework itself. To improve performance, parallelize data fetching and consider using caching techniques.

A 'waterfall' in data loading refers to sequential requests where each request waits for the previous one to complete, making the process slower than necessary.

To optimize data fetching in Remix, use Promise.all to parallelize requests, identify critical data for immediate loading, and utilize caching strategies for static data.

A service worker in a PWA (Progressive Web App) helps manage offline capabilities and caching, allowing the app to function without an internet connection.

Caching improves performance by storing static data that doesn't change frequently, reducing the need to fetch it repeatedly, thus speeding up page load times.

Critical data is essential for the initial page load and determines the page's response status, while deferred data can be loaded later, enhancing performance by prioritizing essential content.

Client-side caching in Remix can be implemented using client loaders and packages like remix-client-cache to store and retrieve data efficiently, reducing server requests.

Consider the value and user experience of offline functionality, including how to display offline content, syncing data states, and indicating connectivity status to users.

Bret Little
21 min
28 Oct, 2024

Video Summary and Transcription
Welcome to Turning It Up to 11 at React Advanced. Hydrogen, an eCommerce framework built on top of Remix, focuses on efficient data loading. Loading data in parallel with Promise.all is four times faster than loading it in a waterfall. Split Promise.all into two stages when handling data dependencies. Caching static data can significantly improve page load time. Optimize product pages by reducing awaits and prioritizing essential content. Use suspense boundaries and skeleton UI for optimal data loading. Place non-blocking requests before the primary data request for better performance. remix-client-cache handles caching automatically. Consider data dependencies and prioritize critical content.
Available in Spanish: Turning It Up to Eleven

1. Introduction to Turning It Up to 11

Short description:

Welcome to Turning It Up to 11 at React Advanced. I'm Bret Little, a developer at Shopify, working on the Hydrogen team. Hydrogen is an eCommerce framework built on top of Remix. The web development community often focuses on framework debates, but the speed of your app is determined by how you load data. Data fetching in Remix is done through file-based routing, allowing you to make async requests to APIs and render the data.

Welcome to Turning It Up to 11 at React Advanced. My name is Bret Little, and I'm really excited to be here today.

Again, my name is Bret. I live in Maine. I grow flowers, and I work at Shopify on the Hydrogen team. Hydrogen is a framework on top of Remix where we add all sorts of eCommerce-specific opinions that help you get started building an eCommerce site on Remix really quickly.

You can follow along at littlebrett on Twitter. But Turning It Up to 11: what do I mean by that? There's this old, old movie. You may have never heard of it, but in it we've got these two old rockers talking about rocking out on stage. Specifically, they talk about how their amps all go to 11; all the numbers go up to 11. But one of the guys asks, why put it up to 11? Why not just make 10 the highest number and make it louder, instead of having this extra 11 notch on the dial? Which is kind of interesting. It's this funny little moment, a play on the scale of the numbers: does it even matter? Well, I kind of feel like the web development community is sometimes like that. We argue; there's all sorts of drama over this framework versus that framework. This is better, that's worse. Everyone gets really upset, and there's all this drama. But I'm here to say your app isn't slow because of your framework. Usually your app is slow because of the way you're loading data.

So when I say loading data, what do I mean? Well, most apps display a lot on the page, and getting all the data to display on the page takes time. So in this case, maybe it's an eCommerce app. An eCommerce app might have tons of products. Those products might have reviews on them. They might have availability, prices, images, and a lot of this data might be coming from different services, sometimes even third-party services. And you've got to pull all of that together and get it on the page so you can render something, and that takes time. So, data fetching in Remix specifically. Let's do a little bit of a review. If you're not familiar with Remix, Remix provides file-based routing. Each route file can export an async loader function, and that loader executes to define the data for the page. Inside it you can make any async requests to any API, get some data, and return it out of the loader; then in the default export component we can pull that data in and render something.
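
For reference, a minimal sketch of that shape. The route path and the getProduct API client are hypothetical stand-ins, not code from the talk:

```jsx
// app/routes/products.$handle.jsx
import { json } from "@remix-run/node";
import { useLoaderData } from "@remix-run/react";
import { getProduct } from "~/api.server"; // hypothetical API client

export async function loader({ params }) {
  // Runs on the server before the route renders.
  const product = await getProduct(params.handle);
  return json({ product });
}

export default function ProductPage() {
  // Pull the loader data into the component and render it.
  const { product } = useLoaderData();
  return <h1>{product.title}</h1>;
}
```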

2. Advanced Loader Techniques

Short description:

Most apps are more complicated than a single loader rendering one piece of data. Loading data in a waterfall, one request after another, can be four times slower than loading everything in parallel with Promise.all. Having more than one await in an async function indicates a waterfall and may need to be reconsidered for parallelization.

Whatever is rendered here is what's going to show up inside the app. But we're turning it up to 11. That's pretty simple. Most apps are a lot more complicated than that. On the Hydrogen team I see hundreds of different apps that do some wild stuff inside their loaders.

So let's consider what might be a little bit more advanced, what might be happening. So inside our loader, maybe we're going to load some products, then maybe we're going to load some reviews, then maybe we might load recommendations. And after that, we might also load in the logged in user to display something about who's logged in. That's great. This code actually looks really, really nice. It's really simple. It's easy to understand. Load this, then this, then this. Looks awesome.

But actually, there's a serious problem with this code. Specifically, it's what's called a waterfall. A waterfall is where we have to load one thing, and it's not until that finishes loading that we can proceed to load the next thing. So essentially, we're doing nothing in parallel. We're loading one, then the other, then the other, then the other. So this is four times slower than it needs to be. And the easiest way to solve this is to just wrap everything in Promise.all.

If you wrap everything in Promise.all, now there's one await at the top, and we load everything in parallel, so now the waterfall is flat. Nothing is waiting on anything else, and once everything's done, we can render the UI. But if I'm frank, this isn't as intuitive as the sequential version. This whole Promise.all thing is kind of unfortunate JavaScript; the primitives we have just don't look as nice to a lot of developers, and a lot of developers, I feel, would rather see the sequential code.
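
Side by side, a sketch of the two shapes inside a loader, with hypothetical fetchers standing in for the real API calls:

```js
import { json } from "@remix-run/node";
// Hypothetical fetchers, standing in for real API calls.
import { getProducts, getReviews, getRecommendations, getLoggedInUser } from "~/api.server";

export async function loader() {
  // Waterfall version: four sequential awaits, four round trips back to back.
  // const products = await getProducts();
  // const reviews = await getReviews();
  // const recommendations = await getRecommendations();
  // const loggedInUser = await getLoggedInUser();

  // Parallel version: one await, all four requests in flight at the same time.
  const [products, reviews, recommendations, loggedInUser] = await Promise.all([
    getProducts(),
    getReviews(),
    getRecommendations(),
    getLoggedInUser(),
  ]);

  return json({ products, reviews, recommendations, loggedInUser });
}
```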

And the key here, though, is that if you ever have more than one await in an async function, not even just in Remix, just in any async function in JavaScript, you have a waterfall. Maybe that waterfall is necessary, maybe it isn't. But if you have more than one await, something is processing, you're waiting for it to finish, and then something else is going to process after it. So if you ever see more than one await in a single async function, maybe you should consider looking at it and talking through whether it's necessary and whether it could be parallelized. But again, we're turning this up to 11. It's not always this simple; it's not always so easy to just parallelize everything in the function.

3. Handling Data Dependencies

Short description:

When there are data dependencies, split the single Promise.all into two. Consider what is static and what is dynamic. The products request is relatively static.

For example, what about when there are data dependencies? So in this case, we need to load the products, and it isn't until after we have the products that we can call a totally different API, maybe a third-party API that hosts the reviews. We load the products and then we call that reviews API saying, give me the reviews. So we can't do it in parallel. First we've got to get one thing, and then we've got to go get the other things. It's the same thing, maybe, with a logged-in user. Maybe there's a third-party recommendation service that gives us recommendations based on the persona that's logged in. We can't do that in parallel. We first have to get some information about the logged-in user, and then we go out to this other service to get more data. So that's something to consider: what about when there are data dependencies?

So if there are data dependencies, the easiest thing we can do is clearly split the one Promise.all into two Promise.alls. When you have two Promise.alls, it's a little bit more optimal. Yes, we have this data dependency, but we still have a waterfall. There's one await, we await it, and once it finishes, there's another await that needs to happen. So could we make this better? Well, one thing you can ask is: what is static and what is dynamic? When I say dynamic, I mean every request that hits the server is potentially from a different logged-in user, and different logged-in user information needs to go down. Whereas every request that hits the server is probably going to send the same products down. So the products request is relatively static.
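
A sketch of the two-stage version, with the same hypothetical fetchers as above:

```js
import { json } from "@remix-run/node";
import { getProducts, getReviews, getRecommendations, getLoggedInUser } from "~/api.server";

export async function loader() {
  // Stage one: the requests that depend on nothing can run together.
  const [products, loggedInUser] = await Promise.all([
    getProducts(),
    getLoggedInUser(),
  ]);

  // Stage two: the requests that depend on stage one's results.
  const [reviews, recommendations] = await Promise.all([
    getReviews(products.map((product) => product.id)),
    getRecommendations(loggedInUser),
  ]);

  return json({ products, reviews, recommendations, loggedInUser });
}
```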

4. Caching Dynamic Requests

Short description:

The logged in user request is dynamic and potentially different for every request. Kent Dodds' Cachified package allows caching of sub requests on the server. By caching static data like products and reviews, the page load time can be significantly improved.

The logged in user request is going to be dynamic and potentially different for every request that hits the loader. So if we consider that, if something is static, it could potentially be cached. Kent Dodds has published this great package on Epic Web called Cachified. Cachified allows you to cache sub-requests on the server. So because we know products is relatively static, we can say: cache it for a really long period of time. We can even serve a stale version of that cache and then revalidate it in the background. When it's cached, what that means is the first person that loads this page is going to be relatively slow. For the first person that hits it, no products have been loaded before, so it has to go out and fetch the products, but then it can save them in some sort of cache. Then for the second person that hits this page, the very first thing, load products, is already there, so it can resolve immediately and be really, really quick. So we still have the waterfall here, but we're making the waterfall a little bit quicker because we're adding some caching. Maybe we can also cache the reviews, because reviews don't change that often and we don't really care if they're super up to date. Let's add caching.
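
A sketch of what that sub-request caching can look like, assuming @epic-web/cachified's cachified({ key, cache, getFreshValue, ttl, staleWhileRevalidate }) API with a simple in-memory Map as the cache; getProducts is again a hypothetical fetcher:

```js
import { cachified } from "@epic-web/cachified";
import { getProducts } from "~/api.server"; // hypothetical fetcher

// Any object with get/set/delete works as a cache; a Map keeps it in memory.
const cache = new Map();

export function loadProducts() {
  return cachified({
    key: "products",
    cache,
    // Only runs when nothing usable is in the cache.
    getFreshValue: () => getProducts(),
    ttl: 1000 * 60 * 5, // treat cached products as fresh for five minutes
    staleWhileRevalidate: 1000 * 60 * 60, // then serve stale and refresh in the background
  });
}
```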

5. Optimizing Product Page

Short description:

We need to optimize the code to reduce the number of awaits and focus on what is essential for the user to see on the product page. The primary content, such as the product image, title, description, price, and availability, is crucial. Other elements like recommendations and reviews are not as important. By updating the code and removing unnecessary awaits, we can improve the performance and ensure the essential information is displayed.

But we still have two awaits. How do we get those two awaits down to one? Because again, there is this waterfall. This is not as performant as it could be. We really want to get this up to 11. How do we do that?

Well, before answering that question, maybe if we take a step back, we look at this page that we're rendering. We've got this product, we've got this description, we've got the reviews up here with these stars. We have these recommendations down below. Maybe this add to cart button gets displayed and it's enabled when there's available product. If there's nothing that's available, it's going to be disabled. What on this page is absolutely essential for the user to see? What is primary that is absolutely important, and what is secondary?

I'd argue, for a product page at least (it might be different for your app, but this is something you should ask yourself if you're using Remix: what is essential on this route?), that for this eCommerce page the only essential things are really the image, maybe the title, the description, the price, maybe even product availability, though availability isn't that important and could come later. Recommendations? Not that important. The reviews with the stars at the top? Not that important. Say one of those third-party requests to get the recommendations fails. If it fails entirely, do you want the whole page to just explode? Or do we still want to render the product information and allow people to buy something? Recommendations are great, but they're not essential to the page. If we consider what's absolutely essential to the page, we can make some tweaks and optimizations to our code. If we look back at our original code that we've optimized, where we added caching and did all this great stuff, we can remove one of these awaits.

If we instead update this code so we only await the primary content of the page, and the primary content of the page is the product, still cached, then everything else we're no longer awaiting. If there's no await, reviews isn't actually the reviews; it's just a promise. It has a .then on it, it has a .catch on it. Same with loggedInUser. We can return these inside defer. We only have one await. We can also have data dependencies inside here: recommendations still depends upon the loggedInUser, and we can still express that data dependency just with .then. But notice again, there is no await.
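
A sketch of that single-await loader. Remix's defer is the real API; loadProduct, getReviews, getRecommendations, and getLoggedInUser are hypothetical, with loadProduct standing in for a cached loader like the one sketched earlier:

```js
import { defer } from "@remix-run/node";
import { getReviews, getRecommendations, getLoggedInUser } from "~/api.server";
import { loadProduct } from "~/product.server"; // hypothetical cached loader

export async function loader({ request, params }) {
  // The one await: the primary content of the page, still cached on the server.
  const product = await loadProduct(params.handle);

  // Everything else stays a promise: no await, dependencies expressed with .then.
  const reviews = getReviews(product.id).catch(() => []);
  const loggedInUser = getLoggedInUser(request);
  const recommendations = loggedInUser.then((user) => getRecommendations(user));

  // defer() sends the resolved product now and streams the rest as they settle.
  return defer({ product, reviews, loggedInUser, recommendations });
}
```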

6. Optimizing Data Loading

Short description:

Remix serializes promises over the wire and sends the data to the browser. We can use suspense boundaries and skeleton UI to render primary content first and defer rendering secondary content until it resolves. This is especially useful for content below the fold.

Then all of these promises get returned with this defer function at the very end. Remix will serialize the promises over the wire. That essentially just means that the result of these reviews, which again is not awaited, whenever it gets loaded, Remix is going to handle it and send all of that data down to the browser whenever it finishes.

If we look at the browser code, we use useLoaderData, the same sort of code we had earlier on, but recommendations is a promise. It has a .then on it. We can pass that promise into a suspense boundary, await it there, and whenever it resolves on the server we render the recommendations. While it's resolving, we can render some sort of pending state like a skeleton UI. This is really important for building this kind of experience. Now we can say: just render what's primary up front, everything else is wrapped in those suspense Await boundaries, and eventually, when they resolve, everything shows up. This is really important also for things that are below the fold. When I say below the fold, I mean if you're going to scroll down, maybe there's all sorts of other content below this. Why have our loader function wait for all of that to load when the user doesn't even see it? So only await the stuff that is primary to the page.
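
On the component side, a sketch using React's Suspense with Remix's Await; the Recommendations and RecommendationsSkeleton components are hypothetical:

```jsx
import { Suspense } from "react";
import { Await, useLoaderData } from "@remix-run/react";
import { Recommendations, RecommendationsSkeleton } from "~/components"; // hypothetical

export default function ProductPage() {
  const { product, recommendations } = useLoaderData();
  return (
    <div>
      {/* Primary content renders immediately from the awaited loader data. */}
      <h1>{product.title}</h1>
      <p>{product.description}</p>

      {/* Secondary content shows a skeleton until its deferred promise resolves. */}
      <Suspense fallback={<RecommendationsSkeleton />}>
        <Await resolve={recommendations}>
          {(items) => <Recommendations items={items} />}
        </Await>
      </Suspense>
    </div>
  );
}
```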

7. Optimizing Data Loading - Part 2

Short description:

Consider optimizing data loading by placing non-blocking requests before the primary data request. Use client loaders to cache data in the browser and retrieve it from local storage when transitioning between pages.

We're down to one await. So if we look at this code, let's think about it: are there any other slight little tweaks we could make to get this code a little bit more optimal? At an initial glance, maybe not. I mean, there's one await; everything else is a promise deferred to the browser. Well, consider this: why wait to kick off the logged-in user and recommendations? They're non-blocking, there's no await on them, but they happen after the product request. If we just swap those and put the logged-in user and recommendations before the product request, they can be kicked off immediately. They can immediately start loading. Nothing's blocking. The primary data that we're awaiting is the product data, but eventually everything's going to resolve down in the browser. So this is something to consider inside your loaders. Just in general, put whatever you're awaiting towards the end of your async functions. If you have some sort of async operation that produces a promise and can just be kicked off, fire it a little bit earlier. If you fire it earlier, the process gets started a little bit faster, and so those requests are going to load faster. One thing to consider when you write code like this, though: if any of these error, say loggedInUser errors or recommendations errors, because those aren't primary to the page you want to make sure they don't prevent the primary product request from loading. If these error, you don't want them to prevent the primary content of the page from loading.
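
Continuing the same hypothetical loader, a sketch with the non-blocking requests kicked off first and their errors swallowed so they can't take down the primary content:

```js
import { defer } from "@remix-run/node";
import { getReviews, getRecommendations, getLoggedInUser } from "~/api.server";
import { loadProduct } from "~/product.server"; // hypothetical cached loader

export async function loader({ request, params }) {
  // Kick off the non-blocking requests first so they start loading immediately.
  // .catch() keeps a failure here from ever breaking the page.
  const loggedInUser = getLoggedInUser(request).catch(() => null);
  const recommendations = loggedInUser.then((user) =>
    user ? getRecommendations(user) : []
  );

  // The awaited, primary request comes last.
  const product = await loadProduct(params.handle);
  const reviews = getReviews(product.id).catch(() => []);

  return defer({ product, reviews, loggedInUser, recommendations });
}
```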

But what else could we do to optimize data loading in our app? Is there anything else? Well, what if we could also cache data in the client? Client loaders have recently been added to Remix, and with a client loader we can do really cool stuff. The way it works is: I'm on a page, and I transition to another page. When I transition to the next page, a loader needs to run to get data for that next page. The client loader, though, executes first, and it runs only in the browser. With that client loader, we can decide: do I want to run the server loader or not? You get a reference to it. You can await it, or you can not await it. If you don't await it, maybe you're not going to load the server data because you already have the data. Essentially, this lets you build a middleware inside your application where you can write data to local storage, and then when you transition from one page to another, pull data out of local storage. If I have data in local storage, don't worry about calling the server loader; instead, just return whatever data I have in local storage.
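
A sketch of that middleware idea with a clientLoader, assuming plain JSON loader data; the cache-key scheme here is just illustrative:

```js
// Runs in the browser before (or instead of) the server loader.
export async function clientLoader({ request, serverLoader }) {
  const key = `route-cache:${new URL(request.url).pathname}`;

  // If we already have data in localStorage, skip the server loader entirely.
  const cached = localStorage.getItem(key);
  if (cached) return JSON.parse(cached);

  // Otherwise call the server loader and remember its result for next time.
  const data = await serverLoader();
  localStorage.setItem(key, JSON.stringify(data));
  return data;
}

// Also run the clientLoader on the initial page load, not just on client navigations.
clientLoader.hydrate = true;
```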

8. Advanced Application Optimization

Short description:

Now we can optimally build our application without hitting the network if we already have data. Remix client cache handles caching automatically, saving you from writing extra code. By adding Remix PWA, you can make your app available offline and manage the service worker's lifecycle. Consider these questions when reviewing or building code: Is there a waterfall? If so, parallelize requests.

So now we can build our application really optimally, in a way that we're not even going to hit the network at all if we already have the data. Hydrate just means that the client loader also runs whenever the page first loads; without it, it only runs when you transition from one page to another. But I could manually write a ton of code for writing to local storage or IndexedDB, reading from it, and all that sort of stuff. remix-client-cache is a package that does all of that for you. If you just add it to your route, in your client loader you just call its function, and now instead of useLoaderData I call useCachedLoaderData. The data is automatically cached for you, and it's hot swapped and refetched without you worrying about any of it. It's great.
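
A sketch of what that looks like, assuming the remix-client-cache package exposes the cacheClientLoader and useCachedLoaderData helpers described here:

```jsx
import { cacheClientLoader, useCachedLoaderData } from "remix-client-cache";

// Let the package handle caching, hot swapping, and refetching for this route.
export const clientLoader = (args) => cacheClientLoader(args);
clientLoader.hydrate = true;

export default function ProductPage() {
  // Drop-in replacement for useLoaderData that reads from the client cache.
  const { product } = useCachedLoaderData();
  return <h1>{product.title}</h1>;
}
```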

So now we've turned it up to 11. We've added client-side caching. We've added server-side sub-request caching. We're deferring some data so it's lazy loaded in the browser, so only primary data shows up first. We've done a lot, but there's still more we could do. What if there is no internet connection? What if you want your app to be available offline? Generally, to build a PWA, a progressive web app, you need a service worker. If you've ever written a service worker before, it feels like you need a PhD to get it all right. There's a great package, though, called Remix PWA. Remix PWA makes managing the lifecycle of a service worker really easy. Just like in Remix you have entry.server.js and entry.client.js, now there is entry.worker.js, and with entry.worker.js we can manage all the things we might want to do inside our PWA. We might want to cache the base document. We might want to cache JavaScript, CSS, or image resources. You can do all of that, with great examples, in Remix PWA.

To recap, whenever I am building a new application, or oftentimes I am called in and I'm reviewing somebody's code, or somebody messages us and says, the performance of this app sucks, how can we improve it? These are the questions that I ask myself when I look at some new code, or I'm going to be even thinking about building some new code. First of all, and actually I should preface, this is the same no matter what framework you are using, generally. You have to ask these sorts of questions. I have applied this to Remix, but this is going to apply to really anything out there, Next.js, Vue, Svelte. You have to ask these sorts of questions. First off, is there a waterfall in the application? If there is a waterfall, let's parallelize that. Let's make requests happen in parallel.

9. Data Dependencies and Critical Content

Short description:

Consider data dependencies and prioritize critical content to load first. Determine what can be deferred. Focus on loading the necessary data for the page to respond correctly.

But are there data dependencies? If there are data dependencies, you can't parallelize everything, and it gets a little more complicated. The next question I ask: what is critical to the page? What needs to be loaded before anything else, and what can be deferred? One way to think about this is: what is absolutely necessary to determine whether the page can respond with a 200? If that data can't load, then the page needs to respond with a 500 or a 400-level error. That means it's critical. The stuff that determines the status code of the page is the most important and needs to be loaded up front.
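
In Remix terms, a sketch: the critical, awaited data can throw a Response to set the status, while deferred data never affects the status code (loadProduct and getReviews are still hypothetical):

```js
import { defer } from "@remix-run/node";
import { getReviews } from "~/api.server";
import { loadProduct } from "~/product.server";

export async function loader({ params }) {
  // Critical data decides the response status for the whole page.
  const product = await loadProduct(params.handle);
  if (!product) {
    // No product means this route should respond with a 404, not an empty page.
    throw new Response("Product not found", { status: 404 });
  }

  // Deferred, non-critical data can fail without changing the status code.
  const reviews = getReviews(product.id).catch(() => []);
  return defer({ product, reviews });
}
```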
