Web Performance Meets Human Sciences


Website performance significantly impacts user experience; slow or unresponsive pages often leave users frustrated or even angry. Beyond the psychological effects, poor performance can increase usage costs, and modern websites require a nuanced approach to capturing continuous user interactions.

This session explores behaviors like rage clicks and cursor thrashing, emphasizing metrics that reflect user emotions and overall satisfaction. We'll discuss the complexities of performance measurement, highlighting the need to look beyond single metrics. Ideal for those interested in understanding the business impact of site performance and the biases affecting user perceptions.

This talk was presented at React Summit 2025.

FAQ

Matheus Albuquerque is a speaker at React Summit, known online as ythecombinator. He works at Medallia and is a GDE in web performance.

The main theme was exploring the internals of our minds in relation to web performance, focusing on non-hackable metrics and user behavior indicators.

Non-hackable metrics are indicators of actual user behavior, such as time spent on a site, bounce rates, conversions, and revenue, which are harder to manipulate artificially.

It's challenging because there are many synthetic and real user monitoring metrics, and speed metrics are not a single number. They can also be manipulated, making it difficult to find a common denominator.

Political contexts, like the Great Firewall of China, and accessibility issues can significantly influence how users experience web services, potentially filtering out users due to performance constraints.

Cognitive biases, such as confirmation bias and outcome bias, can lead developers to interpret data in ways that confirm pre-existing beliefs or past outcomes, potentially skewing analysis.

The peak-end rule suggests that users judge an experience based on its most intense point (peak) and its ending, which can influence retention and conversion rates on websites.

Frustration can lead to stress, anxiety, and even physical reactions, such as increased heart rates or violent behavior towards computers and devices.

Rage clicks occur when users rapidly click on a non-responsive element, indicating frustration with web performance, and are a key metric in digital experience analytics.

Survivorship bias in web performance refers to focusing on data from users who successfully access and use a service, potentially ignoring those who are unable to due to factors like slow connections.

Matheus Albuquerque
29 min
13 Jun, 2025

Video Summary and Transcription
Speaker discusses the challenges in defining web speed metrics and the importance of non-hackable user behavior metrics. The impact of survivorship bias on data analysis is highlighted, along with global internet access factors and the correlation between web vitals and business outcomes. User perception of app performance, UX research impact on API design, and navigating data analysis biases are also covered. The talk emphasizes the need to connect technical metrics to business outcomes for app optimization and success.

1. Introduction to Web Speed Metrics

Short description:

Speaker introduces himself as Matheus Albuquerque, discusses the complexity of defining web speed metrics, and the challenges in finding a common denominator among various metrics.

Wow, hello, everyone. It's great to be here in Amsterdam. As Nathaniel said, that's me. I'm Matheus Albuquerque. You can find me everywhere as ythecombinator. I work at a company called Medallia, and I'm also a GDE in web performance. It feels really great to be back at React Summit. I was here a few years ago telling a little story about the internals of React, and today I want your help to tell a different story, about the internals of our minds.

To explain why I want to tell this story in the first place, I have a little preamble. I'd like to start with this reflection: what does it mean to be fast, for you? Or even better, start figuring it out in your head: if you had to summarize speed, web speed, in one metric, what would you go with? As I mentioned, I want your help telling this story. So this QR code is going to be on the screen for the next few seconds, and I would really love it if you could scan it and start inputting your opinions.

This is going to be not live coding, but a live demo exercise in a sense. So fingers crossed it's going to work. Plus, it's super tricky to make small talk for 30 seconds; it's a lot of time when you're on stage trying to distract people. I think we're done and we can move forward. You probably struggled throughout the last 30 seconds to summarize things in one metric, because we live in a reality where we just have so much. We have dozens of synthetic metrics. We have dozens of real user monitoring metrics. We have dozens of metrics we have to tailor for specific use cases and whatnot. This is what you said: latency, TTI, waiting, responsiveness; answers are still coming in. And one thing I see whenever I ask this question across different conferences is that it's just super hard to find one common denominator. That's because the cliché is true: speed metrics are not a single number. But even more than that, recently we've been finding ways to hack some of these, or hack the way they're calculated, like Core Web Vitals and others, because we want to game these metrics.

2. Exploring Non-Hackable User Behavior Metrics

Short description:

The speaker discusses the importance of non-hackable metrics related to actual user behavior, such as bounce rates and revenue, and the challenges of sharing private corporate data. They mention Andy Davis's post on the link between speed and business outcomes, focusing on human bias, cognitive biases, and the impact on data interpretation.

We want to brag that our systems are faster and faster. We want to get better SEO rankings and whatnot. So not only is there no single number, but sometimes these metrics can be hacked. My motivation here is trying to find the things that cannot be so easily hacked, or what I call non-hackable metrics. These are mostly the indicators of actual user behavior: the time users spend on your site, your bounce rates, your conversions, the revenue of your business, and so on. But unfortunately, as many of you know, this is also the kind of metric you don't want to bring to slides and stages, because it's mostly private corporate data, and even leaking it could tell competitors how your business is doing.

On this search, there's a really good post by Andy Davies called Strengthening the Link Between Speed and Business Outcomes, and there's one image in it that I really like. He says: context influences visitor experience, which influences visitor behavior, which influences business outcomes. What I want to do with you in the next 20 minutes is break this down. In the first half, I want to talk about human bias and user frustration. After that, I want to talk about how we can start thinking about measuring frustration. So, starting with bias. One definition of bias, and we all have them because we all have brains, is this: a cognitive bias is a systematic error in thinking that occurs when people process and interpret information in their surroundings, influencing their decisions and judgments.

And there is this thing called the Cognitive Bias Codex, which I don't expect you to be able to read. But if I zoom in on any part of this image, you're going to see that there are hundreds if not thousands of biases. Of course, we cannot cover them all. So here I want to discuss how biases impact how we see our data, how users judge our apps, and how we as humans perceive time. First, how we see our data. There are two biases here I wanted to talk about: confirmation bias and outcome bias. Outcome bias basically happens when we pick data that confirms our past outcomes, and we do this a lot when analyzing speed improvements we delivered. Confirmation bias is when we pick data that confirms our pre-existing beliefs.

3. Understanding Survivorship Bias in Data Analysis

Short description:

The speaker discusses survivorship bias affecting data interpretation using a case study from the YouTube team, Page Weight Matters, highlighting how performance metrics can worsen due to overlooked user experiences in different global regions.

So we use a new tool because we believe it's going to make our site faster, and then we want to prove, by using it, that it's making our site faster. But my favorite bias that impacts how we see our data is survivorship bias. You've probably seen this image out there. It's a super popular image; people tweet it a lot. But I want to propose a different view of survivorship bias on the web. And for that, I want to bring up this case study by the YouTube team called Page Weight Matters.

It comes from the YouTube team back in 2012, when they shipped a bunch of performance optimizations to the YouTube page via a project they called Feather. And the weirdest thing happened: after they were done with all the improvements, they realized that the performance metrics got worse. Not better, but worse. Upon further examination, they figured out what was happening: in certain parts of the globe, like remote parts of Siberia, Sub-Saharan Africa, Southeast Asia, Central America and so on, it simply took 20 minutes to even load a video.

So those users never got to the point where they watched a video. After the Feather project shipped, people living there could finally open videos, and that's why the metrics got worse: that sample of users finally showed up in the data. The reason I think this is important is that a lot of things are usually filtering people out of our apps. We have, for example, the different realities of devices and bandwidth.

4. Analyzing Global Internet Access Factors

Short description:

The speaker discusses internet penetration in different markets, the variability in device performance, bandwidth disparities globally, and the impact of affordability on internet access in various countries.

And by devices, I mean how the internet penetrated different markets. In Nigeria, for example, internet penetration happened after the PC era, so users skipped straight to low-cost smartphones. It's a totally different reality from some other countries that we're usually looking at.

Also, we live in a world where devices take a wide range of time to even parse our JavaScript. This is a study conducted by Addy Osmani back in 2017, showing, from the highest-end device to the lowest-end one, the difference in seconds to parse one megabyte of JavaScript.

Also, on bandwidth: in many parts of the globe, the main way of accessing the internet is through 2G or 3G. And even when we're talking about the same technology, like LTE connections, that can mean many different things across the globe. It could range from 6 to 45 megabits per second for the very same technology, depending on where you are. And this reflects throughout the whole spectrum, all the way to download speeds. People will download your website at very, very different rates.

Another thing that's super important, and that could be filtering people out of your apps, is affordability. This is research from the ITU Data Hub that shows the affordability of having internet at home versus GNI per capita, or what each person makes in a year. In some countries it can exceed 100 percent of what a person makes in a year, so it's not affordable at all. And if you zoom in and grab two samples, like Portugal and Mozambique, two countries that even speak the same language: in the first, home internet costs 1.26% of what people make in a year; in the second, it costs almost half. So this is super different.

5. Impact of Web Initiatives and Accessibility

Short description:

The importance of initiatives like ThinkAdlike's former project, the impact of political context, such as the Great Firewall of China, and the critical role of accessibility in website filtering.

And that's why I think it's really important to have initiatives like this one by Tim Kadlec. It's not online anymore, but it used to be called What Does My Site Cost?, and it would compare the performance metrics of your website with the cost of internet access across different countries.

Another thing that also has an impact is political context. This could mean a lot of things, but one of them is the Great Firewall of China, a set of network-level measures to control and change the way the internet is accessed in China. One of the things it employs is deep packet inspection, which analyzes, in a non-transparent way, how your bundles are loaded: CSS bundles, JS bundles, images, and so on.

Last but not least, another thing that could be filtering people out of our websites is accessibility. I will not dig too much into accessibility, but overall, you generally need a keyboard to operate a screen reader. So anything that impacts keyboard usage will impact screen reader usage, and that includes delayed inputs or the focus being dropped because more JavaScript is being loaded and causing layout shifts.

6. Accessibility, Performance, and User Frustration

Short description:

Impact of delayed inputs, performance bad practices, and lack of specific accessibility metrics. Marco Zehe's quote highlights screen reader delays due to performance. Research shows varying frustration thresholds across countries.

Plus, a lot of the other performance bad practices that hurt regular performance also hurt accessibility, like large DOM sizes with deep nesting and thousands and thousands of nodes, or reflows, another bad practice we often cause. These impact screen readers a lot too. On top of that, there's a lack of accessibility-specific metrics, so it's even harder to measure.

To wrap up accessibility, I love this quote by Marco Zehe, who worked for many years on accessibility, performance, and their intersection at Firefox. He said that reducing the number of milliseconds it takes for a screen reader to start speaking after a key press, from about 140 to 100 here, or 120 to 100 there, doesn't matter much on a fast machine. But on a slow machine, that is the difference between about 250 and 200 or 190. So this is, again, a part of our users that we could be filtering out because of performance.

All things considered, we live in a world where even patience is a local thing. There's very interesting research comparing thresholds for frustration across different countries. And you're going to see that... I already see some people laughing. You're going to see that countries that speak the same language and even border each other, like the United States and Canada, have very, very different thresholds for frustration. And you're going to see countries like Germany with very different thresholds as well.

7. User Perception of App Performance and Time

Short description:

Users judge apps based on peak-end rule and serial position effect. Time perception complexity illustrated through examples like elevators and airports.

So, yeah. The second thing is how users judge our apps. For this, we have the peak-end rule and the serial position effect. The peak-end rule says that people judge an experience largely based on how they felt at its peak and at its end. The serial position effect tells us that people recall the first and last items in a series best, and the middle ones worst. How does this intersect with the web? The retention rate of your website could be influenced by the best and worst performance on the first and last pages, and your conversion rate could depend on the best and worst performance on the pages right before the conversion, which in this case is the peak.

Last but not least, how we perceive time. A little raise-your-hands thing: do you also feel that time is speeding up every year? It seems like Christmas and the holidays approach faster and faster. Me too. And we're not alone; there's actually a reason for that. It's connected with our brain losing capacity, year after year, to process information at the speed it arrives. It's just being bombarded more and more. So time perception is super tricky. It's so tricky that it's the reason we have mirrors in elevators. Elevators didn't always have mirrors; they were added, back in the industrial era, so that people would stop complaining that elevators were slow. And people stopped complaining.

My favorite case on this is actually airports. There was this situation at the Houston airport more than a decade ago, where people were complaining that it took a long time to get their suitcases from the luggage belts. The airport optimized more and more until people were waiting eight minutes, and they kept complaining. Then the airport administration realized that people would walk from the gates to the belt in one minute and wait for seven. So the solution was to move the belts farther away, so that people had to walk for six to seven minutes. And guess what? Complaints dropped to near zero. Time perception is that tricky, and it's the same thing in computers. Since 1968 we've been trying to find magic numbers to optimize for how we perceive time. In 1968 we had the first set of magic numbers. Then in 1993 Jakob Nielsen found another set: 0.1 seconds, 1 second, and 10 seconds; after 10 seconds, we just lose the user's attention. Years later, in Chrome, we got what's called the RAIL model, with yet different thresholds for frustration.

8. UX Research Impact on API Design

Short description:

UX research impacts API design and React scheduling. Frustration effects on users' stress, anxiety, and rage.

And by the way, you're going to get these slides, so you'll have a chance to dig in. The thing is, this kind of UX-centric research shapes the tools we use. For example, who here has used the Long Tasks API with PerformanceObserver? Who here has profiled an app and seen a bunch of red dots and so on? Yeah, more hands. Those red dots flag tasks that take longer than 50 milliseconds, and that 50 is not a magic number: it comes from the research behind the RAIL model. So this kind of research actually shapes how the APIs we use work. And it's the same for React; we are at a React conference, after all. Check the source of the scheduler and you will see different thresholds as well, ranging from 250 milliseconds all the way to 10 seconds, defining how React schedules different tasks. Even in concurrent React, where we have the five-millisecond magic number for yielding back to the main thread, it's the same thing: in this case, it's what fits in a single frame, even on 120 FPS devices. It's all about finding the sweet spot for time perception and us humans. Because at the end of the day, we want to avoid frustration.
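As a concrete illustration (a minimal sketch, not code from the talk), this is roughly how you can observe those 50 ms long tasks yourself with the Long Tasks API; the console call stands in for reporting to your own RUM endpoint:

```ts
// Observe main-thread tasks that exceed 50 ms (the RAIL-derived threshold).
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Each entry is a task that blocked the main thread for 50 ms or more.
    console.warn(`Long task: ${entry.duration.toFixed(0)} ms`, entry);
  }
});
// `buffered: true` also delivers long tasks recorded before observation started.
observer.observe({ type: "longtask", buffered: true });
```

And the five-millisecond yielding idea can be sketched, under the same caveat, as cooperative time-slicing: do a chunk of work, and once the slice budget is spent, give the main thread back to the browser before continuing.

```ts
// A sketch of time-slicing with an assumed 5 ms budget per slice.
async function processAll<T>(items: T[], work: (item: T) => void): Promise<void> {
  let deadline = performance.now() + 5;
  for (const item of items) {
    work(item);
    if (performance.now() > deadline) {
      await new Promise((resolve) => setTimeout(resolve, 0)); // yield to the browser
      deadline = performance.now() + 5;
    }
  }
}
```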

And frustration maps to a lot of things, like stress: researchers found that on slow connections, study participants had to concentrate 50% more to use websites. Frustration can also lead to anxiety: using slow web pages (they were artificially delayed) produced a 38% rise in heart rate, comparable to watching a horror movie alone. And frustration can lead to rage: one study found that, at least once in their lives, 40% of British people had become physically violent toward their computers. In similar research about slow pages on mobile, 4% of people had even thrown their phones because they were frustrated with slow websites. So obviously frustration is a bad thing and we want to avoid it. But if we want to avoid it, we need to be able to measure it. We can measure it by asking users, of course. Wikipedia did very interesting research along these lines, asking users how they felt using Wikipedia. But overall, it is hard to design a survey or a pop-over in a way that makes people choose to answer what you're asking, so you end up requiring active engagement from users. Plus, you're prone to what's called selection bias and the Hawthorne effect, which are these two: selection bias basically happens when you fail to achieve proper randomization in sampling.

9. User Behavior Analysis Techniques

Short description:

Understanding affective computing, behavior analysis, rage clicks, scrolling behaviors, and cursor thrashing for user behavior analysis.

And the Hawthorne effect is my favorite one: people react differently to the same thing just because they know they're being observed. Speaking of observing, another thing we can do is what's called affective computing. Until recently, I had no idea there was even a specific area of computer science called affective computing, which is basically about computers understanding emotions. But there is, and it involves things like attaching heart rate measures and similar physiological signals. There are even libraries on GitHub to track your users' eyes as they navigate your page.

But let's be honest: we're not going to do that on a regular basis. So another thing we can do is what's called behavior analysis. There are a lot of DXA (digital experience analytics) solutions out there for this, and you've probably seen some of them, so you don't necessarily have to implement everything I'm about to describe from scratch. One of the signals is rage clicks. Rage clicks occur when users rapidly click or tap on something that is not working; it's better shown on this slide. We've all been in this place. Sometimes it's because the main thread is blocked, and sometimes it's just because a network request is in flight and there's no visual feedback. So we can track rage clicks.
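As a rough idea of what such tracking can look like (a minimal sketch with invented thresholds and a placeholder /rum endpoint, not any specific DXA vendor's logic), rage clicks can be approximated as several clicks landing close together within a short time window:

```ts
// Flag 3+ clicks within 1 second inside a 30 px radius as a rage click.
const MIN_CLICKS = 3;
const WINDOW_MS = 1000;
const RADIUS_PX = 30;

let recent: { x: number; y: number; t: number }[] = [];

document.addEventListener("click", (e) => {
  const now = performance.now();
  // Keep only clicks that are still inside the time window.
  recent = recent.filter((c) => now - c.t < WINDOW_MS);
  recent.push({ x: e.clientX, y: e.clientY, t: now });

  const clustered = recent.filter(
    (c) => Math.hypot(c.x - e.clientX, c.y - e.clientY) < RADIUS_PX
  );
  if (clustered.length >= MIN_CLICKS) {
    // Report to your RUM service; the endpoint is a placeholder.
    navigator.sendBeacon(
      "/rum",
      JSON.stringify({ type: "rage-click", x: e.clientX, y: e.clientY })
    );
    recent = [];
  }
});
```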

We can also track another signal called random scrolling, defined by rapid scrolling, often through large chunks of your page. It indicates that the user is on the hunt for whatever the next step is. This is also why we try so hard to optimize above-the-fold content, whether via SSR or otherwise. Anyway, there is a lot of interesting research showing how scrolling and attention are connected, and how you can optimize for that. Last thing: cursor thrashing. Again, there is a lot of research on how much you can read about users' emotions just by watching how they move their mouse. Cursor thrashing is defined by a high distance traveled in a short amount of time with no clear intentionality, and it indicates that the user is impatient, or has doubts, difficulty, or even anxiety. The possible reasons? First of all, yes, performance.
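In the same spirit, here's a minimal sketch of cursor-thrashing detection; the per-second distance threshold is an assumption that would need tuning against real sessions:

```ts
// Accumulate mouse travel distance and flag bursts of frantic movement.
let distance = 0;
let prev: { x: number; y: number } | null = null;

document.addEventListener("mousemove", (e) => {
  if (prev) {
    distance += Math.hypot(e.clientX - prev.x, e.clientY - prev.y);
  }
  prev = { x: e.clientX, y: e.clientY };
});

setInterval(() => {
  // 4000 px traveled within one second is an assumed threshold.
  if (distance > 4000) {
    console.log(`possible cursor thrashing: ${Math.round(distance)} px in 1 s`);
  }
  distance = 0;
}, 1000);
```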

10. Correlating Web Vitals and Business Outcomes

Short description:

Establishing correlations between core web vitals and user behavior metrics for better app performance and connecting technical metrics to business outcomes.

Basically, users do this when they're waiting for a video to buffer, or waiting for the next route of your SPA to load, or something like that. But it can also indicate high cognitive load, for example when they're engaged in a writing activity or filling out a form. It all depends on the context of your app.

A few last things. It is super important that we're able to establish correlations, and that includes all the Core Web Vitals we're measuring. It's good to get good Lighthouse scores and good RUM numbers for Core Web Vitals, but it's even better if you can correlate them with user behavior.

Establish connections with business outcomes. That's why I love websites like this one, Web Performance Optimization Stats, which is full of cases dating back decades on how those technical metrics actually impacted users and business metrics. And web.dev is also full of cases like: hey, we optimized this and reduced bounce rates, or we optimized that and increased total sales. This is what matters.
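One lightweight way to set up that correlation (a sketch assuming Google's web-vitals package; the /analytics endpoint is a placeholder) is to beacon each session's Core Web Vitals into the same pipeline as your business analytics, so bounce and conversion data can later be joined against them:

```ts
import { onCLS, onINP, onLCP, type Metric } from "web-vitals";

// Send each finalized vital to the same pipeline as your business events.
function report(metric: Metric): void {
  navigator.sendBeacon(
    "/analytics",
    JSON.stringify({ name: metric.name, value: metric.value, id: metric.id })
  );
}

onLCP(report);
onINP(report);
onCLS(report);
```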

QnA

Optimizing Metrics for App Success

Short description:

Connecting technical metrics to business outcomes is crucial for app optimization and success, emphasizing the importance of correlating business analytics with contextual metrics for a comprehensive app story.

Or, if compliance is a big thing for you, with GDPR and the like, your key metric could be the time until your cookie banner shows up. That's why I love this example from Vercel, where they mentioned that for v0, and chat apps in general, the key metric turned out to be the time to a focused input, because it's a chat app. And they optimize for it with partial prerendering.
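Such a custom key metric is usually just a few lines of instrumentation. A minimal sketch, where the selector and endpoint are placeholders for whatever your app uses:

```ts
// Record how long it takes until the chat input first receives focus.
const input = document.querySelector<HTMLInputElement>("#chat-input");

input?.addEventListener(
  "focus",
  () => {
    const ttfi = performance.now(); // ms since navigation started
    navigator.sendBeacon(
      "/rum",
      JSON.stringify({ metric: "time-to-focused-input", value: ttfi })
    );
  },
  { once: true } // only the first focus counts
);
```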


Wow. I know this was a lot. I hope I managed to surface how web performance connects to economics, psychology, neuroscience, business analytics, and all of that. The cliché is true: you cannot fix what you cannot measure. It's obvious to a lot of people, but we've all been in the place where we see a new technology at a conference like this, go back to work, and say we need to start using it ASAP. So: measure first. That's part of building the full picture. And that's why it's important that you always try to correlate business analytics with those contextual metrics, and with the business outcomes, to finally tell the full story of your app.

Strategic Insights on App Optimization

Short description:

It's crucial to correlate business analytics with contextual metrics for a comprehensive app story. Core Web Vitals should be seen as a starting point, not the end goal, to avoid gaming metrics. Acknowledging unmet user expectations and incorporating apologies in apps can improve performance perception.


The third thing, which I heard in a talk by Tim Kadlec and couldn't agree with more, is that things like Core Web Vitals are a better starting point than a finish line. If we make them our end goal, we tend to start gaming them, and that's not ideal.

The fourth thing: it's good when we as developers embed into our apps an acknowledgment that we didn't meet user expectations. Things like what Windows says: hey, this is taking longer than expected, but we're going to get there. There's interesting research on how computers and applications that apologize can alleviate the perception of bad performance. So think about that too. And the last point: there's probably a business case, as I tried to show throughout my talk, for making your app faster. But as we saw, especially with survivorship bias, a lot of the time performance is more than just business as usual.

Impact of Web Accessibility on User Experience

Short description:

Web performance is intertwined with web accessibility, and global disparities in bandwidth access persist. The slides and the speaker's other talks on tool internals are available via the QR code. In the Q&A: DXA libraries help automate measuring these metrics, offering frustration signals beyond performance.

That's why I love this part in MDN, where they say that, in a sense, web performance can be considered a subset of web accessibility. And one last screenshot, the conclusion of the post I showed 20 minutes ago, the YouTube case study from 2012: many of us are fortunate to live in high bandwidth regions, but there are still large portions of the world that do not. That was written in 2012, but it's still true. So thank you all for having me. You can find the slides and a lot of other things in this QR code, including all my other talks about the internals of tools. And find me at the Q&A or around the conference. It was a pleasure. Thank you so much.

Great talk, Matheus. My favourite part of it was the fact that Germany has the least patience of all of the places. Please wear your party hat; it is the 10 year anniversary of Git Nation. And then we can get into the questions.

I didn't say that. It was your conclusion about Germany.

Well, there were stats. Alright. Let's talk about this, because I love the fact that you didn't just talk about the metrics, but about the humanity behind said metrics. This first question comes from Nico JS: these are amazing measures, but are there any good libraries to help automate measuring them, rather than having to build everything yourself?

Yeah, so I went really, really fast through that slide, but all of those fall under what's mostly called DXA, or digital experience analytics. There's a bunch of companies out there, my company also does this, and you're going to see vendors like FullStory and others that will offer not only rage clicks but also dead clicks, and will help you detect all the mouse and scrolling things I talked about. They will also help you detect frustration indicators that are not necessarily connected to performance, like something called back clicking, which is when you navigate multiple times back to the previous page. So DXA libraries in general tend to help. I ended up trimming those slides, but I also had a part where I mentioned how you could implement these yourself.

Navigating Data Analysis Biases

Short description:

Not all companies allocate budgets for measuring, prompting DIY solutions. Biases in data analysis can skew results; identifying noise versus signals is crucial. Understanding biases, from developers to user experiences, is essential for accurate data analysis and decision-making.

And sometimes it's worth knowing how, because not all companies will approve budgets to hire third parties to measure this kind of thing. Sometimes you can pick the subset of signals you want, build them in a few hundred lines of JavaScript, and just send the data to some RUM service. That's nice. That's nice.

And the last one. When we think about getting stats, getting information, and I don't know if this is specifically about the rage clicking or the patience one, sometimes this information can be biased. Even with things like accelerations and tracking the mouse, there may be bias because of what happens on a page. How can you identify the bias that might be causing something, and tell noise from signal? Yeah, this one is super specific. Maybe the author of the question was also talking about our biases as developers, possible biases that influence the analysis. So first of all, if we're talking about developer bias, it's a self-awareness thing: ask yourself, could my data be cluttered by the way I'm analyzing it? That's why it's important that we know all these keywords. When it comes to our users, it's always tricky, and it's a huge it-depends. As I showed you, a signal could be about performance, but it could also be that your page is just confusing and there's high cognitive load. So for this one I'll stick with a huge it-depends: try to see where it makes sense, analyze over time, and try to clean up the data. Well, it depends. And you can find out the full answer to that question by meeting Matheus in the speakers' room.

I know you've got your headphones on, but folks, can we give Matheus a massive round of applause? Thank you so much. Thanks, man.
