HTTP/3 Performance for JS Developers

HTTP/3 is the hot new networking protocol, available today! While you get most of its benefits out of the box, there is also quite a lot you can and should do to maximise performance.


In this talk, we look at optimizing JS payloads for the first network round trips, how browsers prioritize JS against other resource types, SPA vs MPA nuances, and how to optimally use the 0-RTT and 103 Early Hints features. We also look at the protocol's integration with fetch() and talk about the upcoming WebTransport!

This talk was presented at JSNation 2023; check out the latest edition of this JavaScript conference.

FAQ

HTTP/3, also known as H3, is the latest version of the Hypertext Transfer Protocol. It differs significantly from HTTP/2 by replacing TCP with the QUIC transport protocol, which runs on top of UDP. This change introduces many performance-related features, such as faster web page loading due to improved efficiency.

The main benefits of using HTTP/3 include enhanced performance features: quicker web page loading times, the introduction of zero RTT (Round Trip Time), which speeds up connection setup, and the use of a single network connection to manage multiple resources efficiently.

No significant changes are needed on your web pages to benefit from HTTP/3 if they are already optimized for HTTP/2. Enabling HTTP/3 is typically straightforward, often requiring just a simple configuration change or 'flip of a switch' to activate these new features.

In HTTP/3, resource prioritization is handled through an HTTP request header named 'priority'. This header lets browsers indicate a resource's urgency and whether it can be delivered incrementally over the single network connection. However, different browsers might assign different priorities to the same resources, which can affect the loading order.

The 103 Early Hints feature in HTTP/3 is used to accelerate the loading process by allowing servers to send preliminary resource links (like preloads and preconnects) in HTTP response headers before the full HTML is ready. This helps the browser to start fetching critical resources sooner, improving page load times.

WebTransport is a feature introduced with HTTP/3 that offers capabilities similar to a raw network socket in the browser. It enables more powerful real-time data handling scenarios, such as gaming and media streaming, by exposing the low-level features of QUIC and HTTP/3 in a relatively easy-to-use way.

Control over specific HTTP/3 features such as zero RTT is limited, as these are managed by the browser and server based on complex internal heuristics. This means that the use of such features is decided automatically, without explicit settings or parameters available to developers.

Over HTTP/3, browsers load resources such as JavaScript and fonts differently based on their internal priority settings. For instance, Chrome may assign higher priority to fonts and certain JavaScript files, whereas Firefox might not prioritize custom fonts as highly. Developers have some control over these priorities using features like fetch priority.

Robin Marx
21 min
05 Jun, 2023

Video Summary and Transcription
HTTP/3, also known as H3, is the latest version of the HTTP protocol with new performance-related features. Enabling HTTP/3 requires minimal effort and provides significant benefits, but limits fine-grained control over performance features. Zero RTT has limitations due to security reasons and restrictions on allowed requests. Resource loading and prioritization in HTTP/3 have some problems, as browsers may not agree on resource importance. Fetch priority allows fine-grained control over resource loading order, and resource discovery can be improved with 103 Early Hints. WebTransport provides low-level access to QUIC and HTTP/3 features for real-time use cases.

1. Introduction to HTTP/3

Short description:

Hello there, I'm Robin Marx and I work at Akamai. Today, I'd like to talk about HTTP/3, the latest version of the HTTP protocol. HTTP/3, also known as H3, has many new performance-related features that make it more efficient than H2 and TCP. Enabling HTTP/3 requires minimal effort and provides significant benefits. However, it also limits fine-grained control over performance features, such as the JavaScript Fetch API and the zero RTT feature.

Hello there, I'm Robin Marx and I work at Akamai. And I'd like to talk to you today a bit about HTTP/3, which is the latest and greatest version of the HTTP protocol.

And as you can see, it's really quite a bit different from HTTP/2. One of the big changes is that we no longer use TCP underneath; we've moved to a new transport protocol called QUIC instead, which itself runs on top of UDP. You don't really need to know all of these details.

The main thing you need to know for today is that QUIC and H3 have a lot of new performance-related features on board, things that make them quite a bit more efficient than H2 and TCP, and as such help your web pages load quite a bit faster. Now, you might think, oh, this looks interesting, but it's probably going to be a lot of work for me, right, to start using all of these new features. Well, that's where the good news comes in: that's not true. Basically, if you just enable HTTP/3, which is often just the flip of a switch, you get all of these features out of the box. In fact, you don't even have to change anything about your web pages to make optimal use of them. As long as you're already tuned for HTTP/2, which, to be honest, after eight years, you really should be, this should work just fine on HTTP/3 as well. So this is quite good: you can get all of the benefits with not a lot of work.

There is also a downside to this, however, because it means you don't get a lot of fine-grained control over these cool new performance features. A good example of this is the JavaScript Fetch API, where you really only have access to the top option shown here. There is no way to say explicitly that you would like to use HTTP/3 instead of HTTP/2 for a call. This is something the browser itself decides based on some rather complex internal heuristics. So you can't pass a protocol parameter, and there is no special fetchHTTP3() function. And thank God the last line here is not something we went for: a specific HTTPS3 URL scheme. That would have been absolutely crazy. Another example of this is the zero RTT feature. This is one of the core new performance features in H3, in that it makes the connection setup a bit shorter. For H2 on TCP, this typically takes three individual round trips on the network before you start getting HTTP data back. QUIC and H3 reduce this to just two round trips, because QUIC can combine the transport and the cryptographic TLS handshake into one round trip. And then there is this magical new feature called zero RTT, where you can already make an HTTP/3 request and get some response back in the very first round trip of the connection, which is about the fastest we can do. This all sounds very good, but again, you really don't have a lot of control over whether or not zero RTT is used in, let's say again, a fetch call. There is no way to enable or disable this.
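While you cannot force HTTP/3 from JavaScript, you can at least observe which protocol a given request ended up using, for example via the Resource Timing API. A minimal sketch (the endpoint is hypothetical):

```js
// Check which protocol the browser actually negotiated for a request.
// '/api/data' is a hypothetical endpoint used for illustration.
const res = await fetch('/api/data');
await res.text(); // make sure the response has fully finished loading

// Resource Timing entries expose the negotiated ALPN protocol ID.
const entries = performance.getEntriesByName(res.url);
const timing = entries[entries.length - 1];
console.log(timing.nextHopProtocol); // e.g. 'h3', 'h2', or 'http/1.1'
```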

2. Zero RTT Limitations

Short description:

The browser and server decide how zero RTT is used, and for security reasons it is often limited on many deployments, with restrictions on the types of requests allowed. The lack of control can lead to the browser sending requests that get rejected, wasting zero RTT's potential. So while you get the features for free, they are not without limitations.

This is again something the browser chooses for you. And in this case it's not just the browser, it's also the server that is going to be holding your hand. For some complex security-related reasons, zero RTT is often limited quite a bit on a lot of deployments. For example, the code that I have here says that you can only use GET requests without query parameters in a zero RTT request. Which, as you might imagine, really lowers the number of use cases you can use this for. For example, it's kind of useless for most API-related requests. And again, because you don't have a lot of control over this, it might be that the browser gets it wrong and sends the wrong type of request in zero RTT. The server will then reject that, and the browser will have to retry it after that first round trip, kind of wasting the potential of zero RTT. So yes, you get all the features for free, but it's not all as amazing as it might seem.
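As a purely illustrative sketch of the kind of policy described above (the exact rules differ per server deployment, and all URLs here are hypothetical), this is roughly how different fetch calls would fare if they ended up in the zero RTT flight:

```js
// GET without query parameters: safe to replay, so typically allowed in 0-RTT.
fetch('https://example.com/assets/app.js');

// GET with query parameters: rejected by the strict policy above, so the
// browser has to retry it after the first round trip completes.
fetch('https://example.com/api/search?q=http3');

// Non-idempotent requests (POST, PUT, ...): never a good fit for 0-RTT,
// because 0-RTT data can be replayed by an attacker.
fetch('https://example.com/api/cart', {
  method: 'POST',
  body: JSON.stringify({ item: 42 }),
});
```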

3. Resource Loading and Prioritization

Short description:

I want to talk about how we load individual resources on a page and how we prioritize when those should be loaded. HTTP/3 uses a simple mechanism to assign priorities to resources, but there are some problems with this approach. Browsers may not agree on resource importance, and their prioritization may differ from what developers expect.

Now, of course, I don't want to just talk about things you can't do with H3. I want to talk about things you can do. So the browser kind of shields you from these internal details. The protocol is kind of a black box. But it does give you these somewhat higher level features that you can use to tune some of these behaviours. And I'd like to talk about some of these with you today.

The first one is going to be about how we load individual resources on a page and how we prioritize when those should be loaded. As you probably know, HTTP/2 and also H3 use only a single network connection to load all the resources. This means we need to somehow decide in which order the resources are actually loaded on that one connection. You might think of a very naive solution, which is to load them in the order that they appear in the HTML, but that would be rather inflexible in practice. And you can also see that there are many different options in which you might decide to send these resources. It doesn't just have to be back-to-back sequential.

So what happens in practice? Who decides how this gets done? Well, this is of course the browsers. For every request that the browser makes, it is going to assign what is called a priority: an indication of how important the resource is and when it should be loaded. In HTTP/3 this is a very simple mechanism. It's just an HTTP request header with a very predictable name: priority. You can even see this in the browser dev tools. A priority has just two parameters: an urgency parameter, which is a number indicating the importance, and incremental, which says whether this resource's data can be mixed with data from other resources or not. Now the details are not very important for us today.
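For reference, the header the browser sends is defined in RFC 9218 (Extensible Priorities) and might look like this in the dev tools network panel (the exact values are just an example):

```
priority: u=2, i
```

Here u is the urgency, ranging from 0 (most important) to 7 (least important) with 3 as the default, and the bare i flag means the resource may be delivered incrementally, interleaved with other responses.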

What is more important and interesting is that there are a few problems with this approach. First of all, the browsers don't necessarily agree on which resources should be most important. And secondly, even if they do agree, they might come up with a different solution than what you as a developer might intuitively expect. So let's look at some examples for this. So I looked at how the browsers did this prioritization a couple months ago. And here you can see the top line. All the browsers kind of agree that HTML is indeed a very high priority, kind of important for loading a web page. That's logical. Same for CSS, the bottom line here.

4. Browser Priorities for Fonts and JavaScript

Short description:

The browsers have different priorities for fonts and JavaScript files. Firefox assigns medium priority to non-parser-blocking JavaScript, while Chrome gives it a lower network priority. Safari assigns high priority to all JavaScript, except for async-tagged files. Preloading JavaScript in Chrome always assigns it a high priority, which can be too high in some cases. A new feature called fetch priority allows fine-grained control over resource priority.

You can see that the browsers agree it's quite important, though it's a little bit more important in Chrome than in the other two. But things are very different for fonts. For example, Firefox really does not care about your custom fonts, marketing department, which is kind of the opposite of what Chrome is doing, which assigns the highest possible priority to fonts. So there are already big differences here.

Let's look at what they do for JavaScript, because there are a lot of different ways of loading JavaScript files. Let's look at Chrome and Firefox first. If you have a JavaScript file in the head, which is parser blocking, so sync loaded, that's of course going to be a high priority in both. But then things differ. Firefox says: for any other type of JavaScript, I don't really care, I'm just going to assign them all the medium priority. Chrome is a bit more fine-grained: it says, you know, if you tag these as async or defer, making them non-parser blocking, this is actually going to make them lower priority on the network as well. Now I could be wrong, but I guess that what you as a developer expect is probably what Chrome is doing, right? Async and defer are clear signals that these resources are a little bit less important than parser-blocking JavaScript. So that makes some sense, but now hold on to your seats, because let's look at what Safari does with this.

So in case it is not clear, Safari basically says: I don't really care where the JavaScript is mentioned in the HTML, I'm just going to give it all a high priority. It doesn't matter if it's in the head or at the bottom, it's all the same to me. The only exception is if you tag it as async; for some reason that gets bumped down to the medium priority, which was kind of a surprise to me, because I would have expected that to happen for defer, not for async. And I don't really know the reason why Safari is doing this. And this is already interesting, but let's make it a bit worse. Let's introduce the preload option. What happens if you preload a JavaScript file? What you can see here is that for Safari and Firefox, it doesn't really matter much; they keep their standard priorities for the preloaded resource as well. But for Chrome, it really depends on which JavaScript file you are preloading, because it always assigns it a high priority. So, for example, if you preload a JavaScript file that is tagged as async or defer in Chrome, you're actually bumping its priority considerably, possibly too high. Sometimes this might be something that you want. A good use case there is something like a cookie consent manager, which we need in Europe, where you don't want it to be parser blocking, so you often tag it as async, but you do want it to be downloaded rather quickly to show the cookie pop-up to the users. In that case, this might be what you want, but I can also imagine quite a few other use cases where you don't want the priority to increase when you preload JavaScript. So we need a bit more fine-grained control. And luckily, a new feature was recently introduced that allows us to do just that. This is called fetch priority. With that, you can kind of tweak the priority. You can't literally set one of those five levels, but you can make it a little bit higher or a little bit lower than what the browser would initially assign.

5. Fetch Priority and Resource Discovery

Short description:

You can use fetch priority to control the loading order of different fetch calls. It's currently only available in Chrome but will be implemented in Safari and Firefox. Resource discovery can be improved by using 103 Early Hints, which allows the browser to discover resources before the HTML is generated.

So, for example, if you preload a script tagged as defer, you can actually say: I just want to preload it, I don't want to increase the priority, keep the fetch priority low. And this is not just for JavaScript or preloads. You can also use this, for example, for images: your Largest Contentful Paint hero image could be given fetch priority high. And there are many other use cases; I would say read about them at the web.dev link below on the slide.
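A minimal sketch of both cases just mentioned, written as the DOM equivalent of the markup (the file and element names are hypothetical; in JavaScript the fetchpriority attribute is reflected as the fetchPriority property):

```js
// Preload a deferred script without bumping it to the default high preload
// priority in Chrome; '/js/consent.js' is a hypothetical file.
const hint = document.createElement('link');
hint.rel = 'preload';
hint.as = 'script';
hint.href = '/js/consent.js';
hint.fetchPriority = 'low'; // keep roughly the priority the defer tag would get
document.head.appendChild(hint);

// Conversely, bump up the Largest Contentful Paint hero image.
const hero = document.querySelector('#hero-image'); // hypothetical element
if (hero) hero.fetchPriority = 'high';
```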

Now one special case that I want to mention is, again, the fetch API. Fetch is kind of weird, because you can fetch many different kinds of resources, but internally the browser always sees these as the same type: the type fetch. As you can see again, the browsers really don't agree on how important the fetch calls should be. Now this is kind of annoying, because let's say you have a complex JavaScript app with multiple parallel fetches at the same time. Some of those might be clearly of higher importance to your app logic than others, and so you might want to say those need to be loaded first, but you can't express this with a plain fetch call. Luckily, fetch of course also supports fetch priority. It's in the name, right? However, it's not called fetch priority; it's just called priority when you use it. And, for example, here you can actually lower the priority of some of your fetches. This should be really quite powerful if you do very fine-grained resource loading logic. This is currently only in Chrome, but it's being implemented in both Safari and Firefox, so it's coming to a browser near you very soon.
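A minimal sketch of that, with hypothetical endpoints: the priority option hints to the browser how to schedule concurrent fetch() calls on the shared connection.

```js
// Two parallel fetches; tell the browser which one matters most to the app.
const [product, recommendations] = await Promise.all([
  fetch('/api/product/42', { priority: 'high' }),            // needed for first render
  fetch('/api/recommendations?for=42', { priority: 'low' }), // can trickle in later
]);
```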

The second thing I wanted to talk to you about is resource discovery, because of course we can only prioritize resources that we have identified as being needed. Now, usually, how does the browser discover stuff? It's in the HTML. So if the HTML is slow to be generated, if we have a high time to first byte, the whole page load is also going to be slowed down, because the resources are discovered late. Wouldn't it be great, then, if we could somehow do something like this, where we could actually have the browser discover the resources much earlier, even before any of the HTML comes in? Sounds magical, right? This is actually a feature that you can use in some setups, even today. This is called 103 Early Hints. Now, in the setup I'm using for the example here, we are using an edge server: a CDN, something like Cloudflare or Akamai, or, if you're very hip, Vercel or Netlify. This means that the browser will connect to the edge server.

6. Edge Server Fetching and Preloading

Short description:

If the edge server doesn't have the HTML in cache, it fetches it from the origin. During this time, the edge server can send back 103 Early Hints, which is a list of links. The browser can then start preloading these files, so by the time the HTML arrives, the browser already has critical files. Preloading and preconnecting to third-party domains is also possible.

And if the edge server doesn't have, let's say, the HTML in cache, it needs to go fetch that from the origin. While that is happening, the edge server can send back a 103 Early Hints response. And what this is, is really just a list of links: really, again, just preloads and preconnects. Usually you'd put those in the HTML, but of course we don't have the HTML yet; we're waiting for it to be generated. So here, we just put these in the HTTP response headers directly. What then happens is that the browser gets these links and it can then start preloading these files, so that by the time the HTML actually comes in from the origin, the browser should already have large parts of your most critical files. So this is what makes this magic happen, and it gets even better, because you can even preload and preconnect to third-party domains as well.
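On the server or edge side, emitting those hints could look roughly like this. A minimal sketch using Node.js, whose built-in http module can send a 103 before the final response; the asset paths and the renderPage() helper are hypothetical, and real CDNs usually expose Early Hints as a configuration option rather than code:

```js
import { createServer } from 'node:http';

createServer(async (req, res) => {
  // 1. Send 103 Early Hints right away, so the browser can start preloading
  //    and preconnecting while we do the slow work below.
  res.writeEarlyHints({
    link: [
      '</css/main.css>; rel=preload; as=style',
      '</js/app.js>; rel=preload; as=script',
      '<https://static.example.com>; rel=preconnect',
    ],
  });

  // 2. The slow part: fetch from the origin or server-side render the HTML.
  const html = await renderPage(req); // stands in for slow HTML generation

  // 3. Send the final response as usual.
  res.writeHead(200, { 'content-type': 'text/html' });
  res.end(html);
}).listen(8080);

// Hypothetical placeholder so the sketch is self-contained.
async function renderPage(req) {
  return '<!doctype html><html><body>Hello there</body></html>';
}
```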

7. Static Domain and Server-Side Rendering

Short description:

You can open a new connection to a static domain and load more resources while waiting for the HTML. This relies on slow HTML generation, which is more common with server-side rendering. I suggest moving back to server-side rendering and dropping client-side rendering.

Let's say, for example, you have a static domain on which you host some assets like fonts. While everything else is happening, while you're waiting for the HTML, you can already open a new connection to that other domain in parallel and load more resources. So this is really quite powerful, but as you can see, it really relies on having a rather slow HTML generation time, which is of course mainly going to happen if you use something like server-side rendering and not so much client-side rendering. So am I advocating that we all just move en masse back to server-side rendering and drop client-side rendering completely? Well, yes, that is exactly what I am suggesting. I am suggesting we all just go back to full PHP and forget all about the JavaScript. That's probably not what you expected to hear at a JavaScript conference, but what can I say? I'm old school.

8. Introduction to Web Transport

Short description:

WebTransport is a powerful option for real-time use cases, providing low-level access to QUIC and HTTP/3 features. It allows you to choose congestion control algorithms, send unreliable datagrams, and access raw HTTP/3 streams. When combined with upcoming web features like WebAssembly and WebCodecs, it enables efficient data processing and rendering. However, WebTransport may fall back to HTTP/2 if HTTP/3 is blocked, and the browser abstracts some details. Overall, HTTP/3 offers high-level features and exciting possibilities, including 103 Early Hints and WebTransport.

Now the last thing I wanted to talk to you about is something called WebTransport. I've talked a bit about the Fetch API, and that is usually most of what you need, but for some use cases, especially live, real-time use cases, you need a bit more power. Up until now, you would have to use something like WebSockets over TCP, or, if you really needed UDP or unreliable data, you could use something like the WebRTC data channel for this. Both work, but especially the latter one is kind of difficult to use; it's not very intuitive to get set up, especially in a client-to-server context.

With HTTP/3 and QUIC, we now get a third option in this list, which is called WebTransport. And I like to say, even though it's not completely correct, that WebTransport is the closest we will ever get to a raw network socket in the browser. As you might know, there are no raw TCP or UDP sockets in the browser for security reasons, but WebTransport exposes most of the low-level QUIC and HTTP/3 features in a relatively easy-to-use way. Now, WebTransport is not finished yet; this is the current design and it might still change, but it already gives you an idea of the powers that you might have. For example, you might even be able to choose the congestion control algorithm the browser would use, where you might tune for either high throughput or low latency. Similarly, there is something called datagrams: you can actually send fully unreliable datagrams. These are not raw UDP datagrams; they are actually part of the QUIC connection, so they are fully encrypted and flow- and congestion-controlled, but they should still be very interesting for use cases like real-time gaming and media streaming. And finally, you have access to the raw HTTP/3 streams, using a very intuitive interface, I think, for anyone who has ever used other types of JavaScript streams.
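A minimal sketch of what that current API shape looks like from JavaScript, assuming an HTTP/3 server that speaks WebTransport at a hypothetical URL (and keeping in mind the API may still change):

```js
// Connect over HTTP/3; the URL and port are hypothetical.
const transport = new WebTransport('https://example.com:4433/realtime');
await transport.ready;

// Unreliable datagrams: encrypted and congestion-controlled, but never
// retransmitted. Good for frequent, loss-tolerant updates such as game state.
const dgWriter = transport.datagrams.writable.getWriter();
await dgWriter.write(new TextEncoder().encode('pos:12,34'));

// Reliable, ordered data over a bidirectional QUIC stream, exposed through
// the familiar WHATWG streams interface.
const stream = await transport.createBidirectionalStream();
const writer = stream.writable.getWriter();
await writer.write(new TextEncoder().encode('hello over HTTP/3'));
await writer.close();

// Read whatever the server sends back on the same stream.
const reader = stream.readable.getReader();
const { value, done } = await reader.read();
if (!done) console.log(new TextDecoder().decode(value));
```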

So WebTransport is coming. It's not done yet, but you can test it out in Firefox and Chrome at this time. It only really begins to shine, however, when you combine it with other upcoming and existing web features. For example, a lot of people are using this to reproduce the WebRTC use case, live media streaming, but in a much more low-level way, where you get the data in through, for example, WebTransport. Then you can use something like WebAssembly to very efficiently process the data. There is also something new called WebCodecs that allows you to decode or transcode the media data in a very efficient way, directly from JavaScript or WebAssembly, and you can then render it. There are examples of this within a project called Media over QUIC, which is working on a new protocol specifically for this, with some really amazing results for very low-latency video right inside the browser, without all the complexity of WebRTC. So WebTransport is really just a building block for many cool use cases on top.

Of course, there is always a catch. You might have seen this in one of the earlier slides: it's not always raw HTTP/3, because some networks will actively block or disallow HTTP/3 in practice. So WebTransport will fall back to HTTP/2 if HTTP/3 is not available. And at least for now, the plan is to give you access to datagrams even on HTTP/2, even though they're not really unreliable there. So, you know, something to watch out for; again, the browser sometimes abstracts this a bit too much. With that, it's time to conclude. I think it's clear that HTTP/3 is indeed a very powerful protocol. Even though you can't control much of it directly, there are some high-level features that you can use. Some of them are quite complex, like prioritization, and depend on the browser, but others should allow for many new, interesting use cases, like, for example, 103 Early Hints and especially WebTransport.
