What Is an AnimationFrame and What Can It Tell You?


Modern web applications need to be responsive and smooth, providing users with immediate feedback for their interactions. To understand how well our applications perform, we need to understand how browsers process and render content. This is why we now have animation frames, a new representation of a unit of work that powers the INP metric and the LoAF API!

This talk was presented at JSNation 2025; check out the latest edition of this JavaScript conference.

FAQ

Vinicius's talk focuses on animation frames and the Long Animation Frame API (LOAF), exploring how they can help improve web application performance and user experience.

Vinicius is a lead engineer at PlusOne and is involved in building PerfLab and PerfAgent.

The INP score consists of input delay, processing time, and presentation delay, which are measured within animation frames to assess user interaction performance.

Animation frames provide a framework for measuring and classifying work needed to present a frame, offering insights into user interactions and helping improve web performance.

The Long Task API's attribution model fails to provide detailed insights into which scripts or functions are causing long tasks, offering only timestamps and blocking duration.

Recommended tools include the Long Animation Frame API, Web Vitals for real user monitoring, and self-profiling APIs for capturing traces.

As of now, the Long Animation Frame API is supported in Chrome and other Chromium browsers like Edge, with Safari working on implementation.

The Long Animation Frame API (LOAF) is an API that helps developers identify bottlenecks in their web applications by exposing animation frame entries that exceed a 50-millisecond threshold.

Animation frames are crucial for understanding how browsers process and render content, helping developers create smooth and responsive web applications by measuring work needed to present new frames.

Developers can analyze performance issues by capturing and examining trace files, utilizing tools like the Long Animation Frame API and Web Vitals, and leveraging AI-assisted analysis for better insights.

Vinicius Dallacqua
29 min
12 Jun, 2025

Video Summary and Transcription
Vinicius discusses the importance of smooth and responsive web applications, focusing on animation frames and the Long Animation Frame API. Understanding work within frames is crucial for user experience and the INP metric. Categorizing animation frame work helps in performance analysis, specifically in identifying issues with layout and rendering. Attribution models and the INP metric play a significant role in performance analysis using animation frames. Leveraging animation frames for bottleneck detection and visualization of work execution on the main thread is essential for performance optimization.

1. Understanding Animation Frames and Long Tasks

Short description:

Vinicius, lead engineer at PlusOne, discusses animation frames, the Long Animation Frame API, and the importance of smooth and responsive web applications. Chrome's INP metric and the Long Animation Frame API help us understand user experience. The RAIL model's guidelines and the 50-millisecond task budget for main-thread execution are crucial for user interaction.

Today, I'm going to be talking about animation frames and what they can tell you, and also about the Long Animation Frame API, LoAF, or, as many nicely call it, "loaf". Why not? My name is Vinicius, and I'm a lead engineer at PlusOne, also building PerfLab and PerfAgent. Modern web applications need to be responsive and smooth, providing users with immediate feedback for their interactions. So to understand how our applications perform, we need to understand how browsers process and render content. We want to treat our users well and bring exciting new experiences without jank or frustration.

So building applications can be very hard, and shipping experiences that are smooth and responsive can be very difficult. This is why the Chrome team shipped INP as a metric and long animation frames as an API, to help us better understand and keep track of our users' experience. We'll be talking about some core concepts from browser internals and APIs. For a catch-up on how the event loop and some of those concepts work, there's a very good video series from Lydia Hallie called JavaScript Visualized that explains them in great depth and in a very approachable way. But let's just take a little bit of time to get to know the basics about long tasks and why we have a 50-millisecond budget for tasks executed on the main thread.

About 10 years ago, in 2015, the Chrome team popularized the RAIL model, a set of timings and guidelines to help us keep users' interactions smooth and responsive. The 100-millisecond window you see in the image represents the total amount of time the browser has to process any given input. But since processing also involves the event loop queuing the input and dispatching the event listeners, tasks are left with a 50-millisecond budget to execute on the main thread, allowing browsers to ship the next frame and serve a 60-frames-per-second experience. Any task that exceeds this budget may incur visual jank, delaying visual updates for the user.
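To make that budget concrete, here is a minimal sketch (not from the talk) of breaking long work into chunks and yielding back to the main thread, using scheduler.yield() where available (a recent Chromium addition) and falling back to setTimeout elsewhere:

```js
// Minimal sketch: keep each chunk of work under the ~50 ms main-thread budget
// by yielding between chunks. scheduler.yield() only exists in recent Chromium,
// so fall back to a macrotask via setTimeout elsewhere.
async function processInChunks(items, handleItem) {
  const yieldToMain = () =>
    globalThis.scheduler?.yield
      ? scheduler.yield()
      : new Promise((resolve) => setTimeout(resolve, 0));

  let lastYield = performance.now();
  for (const item of items) {
    handleItem(item);
    if (performance.now() - lastYield > 50) {
      await yieldToMain();
      lastYield = performance.now();
    }
  }
}
```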

2. Exploring Long Tasks and Animation Frames

Short description:

Long tasks API lacks insights on task origins, causing issues. Animation frames crucial for INP and user experience. Work within frames impacts INP score categorically. Frames track work pre-frame shipment, aiding performance perception.

For even more background on the history behind interactivity metrics, you can check another one of my talks, called Long Frames and INP: Understanding Post-load Performance. So now, back to long tasks. Though the RAIL model introduced us to the concept of a long task, the Long Task API has a major problem. Though it allows us to collect long tasks that happened over a span of time, its attribution model fails to give us proper insight into which scripts or functions those tasks may come from, giving us not much more than a timestamp and a blocking duration. So we don't really have a good way to understand why those long tasks happened and where those bottlenecks are coming from. This quote is actually taken from the Google article on the Long Animation Frame API, explaining exactly that problem with long tasks.
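For reference, a minimal Long Task observer looks roughly like the sketch below; notice how little the entries carry: a start time, a duration, and at best a coarse container-level attribution.

```js
// Observe long tasks (>50 ms) and log the little attribution they carry.
const longTaskObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('Long task', {
      startTime: entry.startTime, // when the task started
      duration: entry.duration,   // how long it blocked the main thread
      // attribution only points at a container (window/iframe), never at
      // the script or function that caused the work
      attribution: entry.attribution.map((a) => ({
        containerType: a.containerType,
        containerSrc: a.containerSrc,
      })),
    });
  }
});
longTaskObserver.observe({ type: 'longtask', buffered: true });
```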

Another important point is that if you only consider long tasks as a source of interactivity problems, you're eliminating an entire class of performance problems that has to do with styling and layout. Those can also prevent the browser from responding to interactions and slow down the production of new frames. This interaction here on the image has a long processing time, but it mostly comes from forced layouts and styling. So this can also incur visual jank. This is why we now have animation frames as the base model that powers INP and long animation frames. In this talk, we will be checking out how the browser processes different events and how those events affect different parts of the animation frame, impacting users' experience and INP score.

An animation frame represents the complete cycle the browser performs in order to process events and present new frames on the screen. This process may include work such as queuing and processing possible user input, executing code, tasks or microtasks, processing style and layout, and finally composing and rendering a new frame. So think of an animation frame as a snapshot of all the work needed to generate visual updates for your users. The work performed in an animation frame is used as attribution for the three parts of the INP score, which are input delay, processing time, and presentation delay. But from an animation frame perspective, this work can be measured and classified more granularly within its attribution model, and we will see more about that later. Animation frames are present session-wide and, although they are used as the base model for attribution towards INP, they do not require an interaction. They simply measure the different types of work that happen before the browser can ship the next frame, collecting timestamps and separating them into different types of work. Because of that, they can be considered a good candidate for an abstraction when it comes to how browsers process work and how users perceive their experience.
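As an illustration of that snapshot, a long animation frame entry exposes timestamps for the different phases. A minimal sketch of reading them (field names as currently shipped in Chromium's LoAF implementation):

```js
// Observe long animation frames and read the per-phase timestamps.
new PerformanceObserver((list) => {
  for (const frame of list.getEntries()) {
    console.log('Animation frame', {
      startTime: frame.startTime,
      duration: frame.duration,
      blockingDuration: frame.blockingDuration,           // time beyond the 50 ms budget
      firstUIEventTimestamp: frame.firstUIEventTimestamp, // first queued input, if any
      renderStart: frame.renderStart,                     // when rendering work began
      styleAndLayoutStart: frame.styleAndLayoutStart,     // when style/layout began
      scriptCount: frame.scripts.length,                  // script attribution entries
    });
  }
}).observe({ type: 'long-animation-frame', buffered: true });
```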

3. Analyzing Animation Frame Work Categories

Short description:

Animation frame work categorized into script compilation, execution, layout, and render work. Identifying issues in layout and rendering crucial for performance analysis. Limitations may lead to missing attribution in animation frames.

From an animation frame perspective, work is divided into different categories. When you have the timings, those categories are divided into script compilation, script execution, style and layout work, and render work. Each of them is aggregated into its own part of the attribution model of the animation frame.
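As a rough sketch of that categorization, the per-category durations can be approximated from a LoAF entry's timestamps; treat the arithmetic below as an approximation rather than the browser's exact internal model.

```js
// Approximate per-category breakdown of a long animation frame entry.
// This mirrors how the phases are commonly derived from LoAF timestamps,
// but it is an approximation, not the browser's internal model.
function categorizeFrameWork(frame) {
  const end = frame.startTime + frame.duration;
  const scriptTime = frame.scripts.reduce((sum, s) => sum + s.duration, 0);
  const renderTime = frame.renderStart > 0 ? end - frame.renderStart : 0;
  const styleAndLayoutTime =
    frame.styleAndLayoutStart > 0 ? end - frame.styleAndLayoutStart : 0;
  return {
    script: scriptTime,
    preLayoutRender: Math.max(0, renderTime - styleAndLayoutTime),
    styleAndLayout: styleAndLayoutTime,
    // main-thread work not attributed to any script (e.g. extensions)
    unattributed: Math.max(0, frame.duration - scriptTime - renderTime),
  };
}
```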

Let's come back to this interaction. You cannot immediately tell why we're spending so much time on layout work, or why so much of this processing time goes into rendering work. And since that's the main cause of the long processing time, we would have to dig deeper into the flame graph to find the source of the problem.

It is also important to note that parts of the attribution model may not be present in some animation frames, such as the missing attribution slots you see in the image there. That might be due to different limitations the platform currently faces in accurately or safely inferring the information, such as execution time coming from third-party scripts or Chrome extensions.

4. Decoding Animation Frame Attributes

Short description:

Highlighting different sections of work in an animation frame. Not limited to one event per interaction. Importance of the attribution model and INP metric in performance analysis.

Underneath it, it shows the same segment, but highlighting the different sections of work executed in that animation frame. First, you may notice that we have multiple events in one single animation frame. I'll get back to that further ahead. But keep in mind that an animation frame, and subsequently the INP metric and the long animation frame entries, are not limited to one event per interaction. So let me see if I can show this. There you go. Here is the same kind of segment within the performance tab. And you can see that for this particular trace, you have different animation frames here, right? So within that we can check, for instance, if I scroll to this particular animation frame, that I have different types of work executed within one animation frame. And it's not always just scripting.

Now back to the presentation. Let's take a look at the different attributes from the animation frame attribution model and how they are present in the INP metric. You might have seen one of these little guys within the interactions track on the performance panel, where the left and right whiskers are the input and presentation delays, and the solid bar in the middle is the processing time.

Let's start with the first UI event timestamp. This is part of the attribution model within animation frames, coming from the Long Animation Frame API, and it is also what's known as the input delay. This is the timestamp of the first interaction event to be processed within a given animation frame. It represents the interaction event's creation and might have happened at any point in time within the current or previous animation frame, depending on whether there was a delay between the event being created and the event being processed. This attribute provides the data used to calculate the input delay for animation frames that are connected to the INP event. The processing time encompasses work on the main thread related to tasks or microtasks that are executed during the animation frame. If inspected in the profiler, this group of work will be represented on the flame graph by work such as script compilation, different function calls and callbacks, promise execution, timer callback functions, and also forced style and layout.
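As a rough, hypothetical illustration of that first attribute, one could approximate the input delay of a frame from its firstUIEventTimestamp; the canonical per-interaction input delay comes from the Event Timing API, so this is only a sketch.

```js
// Rough approximation only: the gap between the first UI event being created
// and the first attributed script starting to run in the frame. The canonical
// input delay per interaction comes from Event Timing (processingStart - startTime).
function approximateInputDelay(frame) {
  if (!frame.firstUIEventTimestamp) return 0; // no input was queued for this frame
  const firstScriptStart = frame.scripts[0]?.startTime ?? frame.startTime;
  return Math.max(0, firstScriptStart - frame.firstUIEventTimestamp);
}
```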

The last section of work before the browser can produce and ship the next frame is the presentation delay. It consists of any work related to processing the visual updates needed after the main thread has finished executing all tasks of the animation frame. This type of work is composed of different parts, like style and layout. But style and layout are not the only types of work being processed at this stage. Different APIs are executed between a task's execution and the frame being presented on the screen: work such as CSS selector evaluation and style computation, ResizeObserver callbacks, view transition promises, and requestAnimationFrame callbacks. The work executed at this stage runs a lot closer to the parts of the animation frame that involve presenting a new frame, but it can also contribute to delays before the new frame can be presented, since some of those APIs may spawn their own microtasks. So work executed at this stage can also lead to INP problems.
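Putting the three phases together, they can be measured per interaction with the Event Timing API; a minimal sketch using standard PerformanceEventTiming fields:

```js
// Break an interaction event into the three INP phases using the Event Timing API.
new PerformanceObserver((list) => {
  for (const event of list.getEntries()) {
    if (!event.interactionId) continue; // skip non-interaction events
    console.log(event.name, {
      inputDelay: event.processingStart - event.startTime,
      processingTime: event.processingEnd - event.processingStart,
      // duration spans until the next paint (rounded to 8 ms granularity)
      presentationDelay: event.startTime + event.duration - event.processingEnd,
    });
  }
}).observe({ type: 'event', durationThreshold: 16, buffered: true });
```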

5. Examining Animation Frame Metrics

Short description:

Importance of animation frames and the INP metric. Long animation frames for bottleneck detection. Visualization of work execution on the main thread.

So, why should we think in terms of animation frames? Back in 2021, in order to empower the next interactivity metric and improve the attribution model for our field data, the Chrome team started research and development around a new standard way of measuring work around user interactions. A year later, INP became an experimental metric, and in March 2024 it replaced First Input Delay, FID. Alongside INP, we also got long animation frames, which is how the INP phases started being aggregated into a new attribution model. In the form of animation frames, entries were exposed as part of the performance APIs through the Long Animation Frame API, or LoAF.

Similar to the long task... let's see how the contrast is looking on this one. Okay, I think it's good enough. So, similar to how the Long Task API works, the Long Animation Frame API exposes animation frame entries that are above the 50-millisecond threshold, allowing developers to focus on collecting data that might point to bottlenecks in their applications. Using again the power of annotations from the performance panel, I'm showcasing here a segment of a trace and each animation frame accounted for in that segment. This showcases how we can better connect the work executed on the main thread to how visual updates are actually produced and shipped to the user, as we can better visualize how the browser processes work and produces frames.
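To connect a long frame back to the code responsible, each LoAF entry also carries script attribution. A small sketch of surfacing the heaviest scripts (the 100 ms filter is a hypothetical threshold used just for illustration):

```js
// Surface the heaviest scripts of every long animation frame over a
// hypothetical 100 ms filter (tune the threshold to your own needs).
new PerformanceObserver((list) => {
  for (const frame of list.getEntries()) {
    if (frame.duration < 100) continue;
    const worst = [...frame.scripts]
      .sort((a, b) => b.duration - a.duration)
      .slice(0, 3)
      .map((s) => ({
        invoker: s.invoker,                   // e.g. "IMG.onload" or "Window.setTimeout"
        sourceURL: s.sourceURL,               // file, when it can be attributed
        sourceFunctionName: s.sourceFunctionName,
        duration: s.duration,
        forcedStyleAndLayoutDuration: s.forcedStyleAndLayoutDuration,
      }));
    console.table(worst);
  }
}).observe({ type: 'long-animation-frame', buffered: true });
```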

Using such concepts, I've built a form of histogram within PerfLab to showcase the distribution of animation frames according to statistics taken from their attribution model, enabling some exploration of how to quickly expose possible bottlenecks within your application on a session-wide scope. So, instead of focusing on one singular point in time based on your INP interaction, you get a more holistic understanding of how your code executes and what kinds of problems might be happening throughout the entire session. From the API side, the team behind Web Vitals released an update recently that enriches the INP attribution model with buckets. These buckets include the total time spent on the different types of work performed during the INP interaction, so you get a better understanding of how much time is being spent in your animation frame coming straight from the Web Vitals API.
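On the library side, a sketch of reading that breakdown from the web-vitals attribution build (recent versions; exact field names may differ between releases, and '/analytics' is a placeholder endpoint):

```js
// Assumes a recent web-vitals version with the attribution build;
// '/analytics' is a placeholder endpoint.
import { onINP } from 'web-vitals/attribution';

onINP(({ value, attribution }) => {
  navigator.sendBeacon('/analytics', JSON.stringify({
    inp: value,
    inputDelay: attribution.inputDelay,
    processingDuration: attribution.processingDuration,
    presentationDelay: attribution.presentationDelay,
    // LoAF entries expose the script-level attribution shown earlier
    loafCount: attribution.longAnimationFrameEntries?.length ?? 0,
  }));
});
```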

6. Impact of User Interactions on Animation Frames

Short description:

Understanding the impact of user interactions on animation frames and the importance of a larger context in performance analysis.

So, let's come back here on this little demo. So, from this little histogram here, a form of histogram, I can see how much script and layout I'm having executed throughout different sections of time. So, I kind of split the entire timeline into different equal portions of work and I can quickly jump into different parts that might seem troublesome and get a better understanding of what's going on. So, this is not necessarily connected to any particular metric. It's just how my session is going overall. So, I can see here that I have a very long animation frame going on here and I can try to jump into the trace and get a better understanding of this.

Now, let's get back to the presentation. As we have established so far, animation frames happen session-wide and they are a great abstraction we can use to measure and classify the work needed to present a frame. So now let's take a look at how interactions may influence animation frames. User interactions may be initiated at any time during an animation frame, taking into consideration that animation frames are slots that happen session-wide. So, if the event is created during the processing-time section of an animation frame, the interaction event will be queued, and its event handler tasks will have to wait.

Those tasks won't run within the same animation frame, and that wait is the input delay you see on the image here. But it is also important to separate the concepts of interaction and event handlers, as one interaction might include multiple events, such as the keydown, keypress, and keyup events of a single keystroke. Each may have its own event handler tasks to be processed, and in some cases each event may even be processed in a different animation frame, which means that each event handler could be part of either the input delay or the processing time from an animation frame perspective. That's why an INP event might incur multiple long animation frame entries. It is also important to remember that each long animation frame entry might have multiple scripts attributed to its execution time, as application and framework code may be connected to the same task being executed. User interactions are only a part of a larger context when it comes to how our users perceive their experience. Performance problems may have any combination of factors behind why things might be slow. To understand our users' perspective, we need that larger context, and it is often lost if you only think about the individual pieces.
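To make the multiple-events-per-interaction point concrete, Event Timing entries that belong to the same interaction share an interactionId, even when their handlers end up in different animation frames; a minimal sketch of grouping them:

```js
// A single keystroke can emit keydown/keypress/keyup entries that share one
// interactionId, even if their handlers run in different animation frames.
const interactions = new Map();
new PerformanceObserver((list) => {
  for (const event of list.getEntries()) {
    if (!event.interactionId) continue;
    const group = interactions.get(event.interactionId) ?? [];
    group.push({ name: event.name, start: event.startTime, duration: event.duration });
    interactions.set(event.interactionId, group);
  }
}).observe({ type: 'event', durationThreshold: 16, buffered: true });
```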

7. Leveraging Animation Frames and Chatbot Analysis

Short description:

Using animation frames for performance discussions and leveraging a chatbot for identifying and analyzing performance problems.

That's why animation frames can be a great candidate to help create a more productive discussion around performance problems, by consolidating all the different types of work into a model that better represents users' perception, and helping development teams gain more confidence and a better workflow around their performance tasks. Animation frames, with their rich attribution model, are also really, really good to hand to AI models.

So that was the PerfLab part. I have also created this little guy, which is a form of chatbot, you could say. Let me take a look at the size of these. Let's see this one. It is a chatbot of sorts that can help you identify performance problems. Where PerfLab was more of a dashboard per se, this is the smallest subset of what a dev tool can look like. And it's built on the same internals as Google Chrome's developer tools.

So I extracted that and generated a standard library that helps me visualize traces and also extract data out of them. You still have INP here, and of course you have animation frames happening. It's the same kind of concept as the histogram: it gives me a distribution of what the frames look like and highlights different problem areas. But I can also ask... this is hard to type... "Can you help me understand my INP score on this trace?" So the rich attribution data you have in animation frames can be handed to a large language model to help you better understand your performance problems as well.

QnA

Extracting Performance Insights from Trace Files

Short description:

Analyzing performance data extracted from trace files using Chrome internals, generating detailed reports, and leveraging animation frame attribution models for insights.

So, this is based on the data that I extract from the trace file, using the same internals that Google Chrome uses within DevTools, where they also run trace files and flame graphs through different models to help you detect problems. It's the same concept here. I extract with the same internals and then generate a very in-depth report based on that one interaction that happened. You can extract the specific time it happened, it can help you visualize the interaction, and it can also visualize the flame graph, so you can better understand what's going on. It gives you an analysis based on that and helps you understand the problems that you're facing. And all of this is powered by the same animation frame attribution model data. The rich attribution model we have really, really helps both us and, of course, the machine gods to give us insights.

So this talk is based on a write-up that I did last year for the Perf Calendar, and this is the QR code for it. I'll give you a little moment to snap it. And that is about it for me. Thank you all. So, our first question. Yes. What kind of tools are there for analyzing task-related performance on web interfaces? Well, a really good one that you have... So if you're talking about APIs, you have, of course, the Long Animation Frame API. You have Web Vitals, so the Core Web Vitals and the web-vitals script. It really helps you gather information from your application from a RUM perspective, real user monitoring. It's really, really important to get that real user data. So either via your own scripts, reading straight from the performance entries, your long animation frame data, or via Web Vitals is a good way to do it. Or by capturing traces as well: we have the self-profiling API, where you can capture traces to do it. Nice. Yeah. It seems like quite a complicated kind of thing to break down. It is, yeah. Actually, so how would you... What advice would you give to somebody looking at a flame graph like that for the first time to try and work out where things are going wrong? That's a good question. Right.
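For the self-profiling route mentioned above, a minimal sketch of the JS Self-Profiling API (Chromium only, and it requires the Document-Policy: js-profiling response header on the page; '/profiles' is a placeholder endpoint):

```js
// Requires the `Document-Policy: js-profiling` response header and a
// Chromium-based browser. '/profiles' is a placeholder endpoint.
async function profileInteraction(run) {
  const profiler = new Profiler({ sampleInterval: 10, maxBufferSize: 10_000 });
  await run();                         // the work you want to profile
  const trace = await profiler.stop(); // { frames, stacks, samples, resources }
  navigator.sendBeacon('/profiles', JSON.stringify(trace));
}
```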

Enhancing Trace Analysis and Performance Metrics

Short description:

Chrome team aiding developers in understanding traces, utilizing AI for trace analysis, and motivating dev teams to enhance performance with key metrics.

So the Chrome team has been on an absolute roll when it comes to helping developers of all levels of expertise better understand traces. Within the trace file, within the performance tab, you have Ask AI. You can select a frame, have the AI panel pop up, and ask the AI built into Chrome different questions about the trace. I'm building one myself. I think that's very cool. Being able to ask questions about it is one of the superpowers of AI models and their ability to look at a big set of JSON and pick out the actually interesting parts... Yeah.

Flame graphs can be very scary at first. What is information for professionals in the performance field is, for many people, noise. And getting to break through that barrier is difficult. That's why I started creating the agent, and that's why Google Chrome is also imbuing the performance tab with AI. That's very cool.

How would you motivate a dev team to work with improving performance and utilizing tools like INP? And I'm going to keep calling it LOAF. Long animation frame. LOAF. It's a good way to call it. I have another article and another talk just about that: why your performance work is not seen. And I would say the TLDR is: work with metrics that are important for your product. Performance metrics are the general approach; they are how, in general, you should try to fix performance. And you have thresholds as well that will help you better understand different parts of performance, or, overall, help you shape a smoother experience.

Strategies for Performance Issue Identification

Short description:

TLDR: Work with important metrics for your product to improve performance. Understanding the nature of your product and focusing on relevant metrics is key. Identifying performance issues like long animation frames requires data analysis and using trace files for detailed insights.

But ultimately, it depends a lot on what matters to your product. So if you have a more static web page, INP is not necessarily the best metric to try to improve. So consider the nature of your product. What metrics matter the most? That's where you start. I was going to say, it's best to tie it to actual business metrics. Exactly. Right, right, right. Yeah. So if INP is important to your business, if people using and interacting with your application is important, that's the thing. Yeah. Sweet.

Given your insights into performance analysis, if you encounter a performance issue, one of those big old long animation frames, are there any typical steps you would take to figure out the cause? Yeah. Normally I just dig straight in. It depends on what kind of data you have. If it comes from your RUM tool... some RUM tools, Sentry, one of the sponsors, for instance, have actual traces. That's significantly helpful. But if you're limited to just metrics, then the first thing would be to profile. When you only have metrics, the numbers tell you there is a problem, but not where the problem is. So you need more data, and that can only come from a trace file. Nice. OK.

Cross-Browser Support for Animation Frames

Short description:

Cross-browser support for long animation frames includes Chrome and upcoming support in Safari. Chromium browsers also offer this feature, reaching a significant user base. The agent chatbot utilizes Gemini for handling different performance metrics and trace data with specialized agents.

This might be a quick or an easier one. What's the cross-browser support looking like for long animation frames? So long animation frames are currently available in Chrome. Barry's right in front of me there. I'm not sure if it is widely supported yet; I haven't checked, to be honest with you. I thought Barry might know that, but he's shaking his head, he doesn't know. I put him on the spot there. But yeah, it's available in Chrome, and I've read that Safari is working on shipping it as well. I assume that being in Chrome means it's in Edge as well. Yeah, exactly, it's in Edge as well. Any Chromium browser has it, so it reaches a pretty significant portion of users. Awesome.
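Given that support story, feature-detecting before observing is a sensible safeguard; a minimal sketch:

```js
// Only register the observer where the entry type is supported
// (Chromium-based browsers at the time of the talk).
if (PerformanceObserver.supportedEntryTypes?.includes('long-animation-frame')) {
  new PerformanceObserver((list) => {
    // forward list.getEntries() to your RUM endpoint here
  }).observe({ type: 'long-animation-frame', buffered: true });
}
```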

We do have one little interlude about your agent chatbot. Somebody would like to know which LLM is used for the agent. Gemini. Gemini. All right, there you go. It's different agents. I'm orchestrating different agents, because the data in a profile file is rich and very, very large. So there are different expert agents that I'm building: one just for INP, one just for LCP. And then from that, I hand different portions of the trace to each specialized agent. Yeah, very cool. Very cool. But it's Gemini.

Utilizing Gemini for Performance Questions

Short description:

The use of Gemini for the agent chatbot system was successful in tailoring prompts for performance-related questions. Long animation frames help understand WebGPU performance by visualizing interactions with the main thread and potential presentation delays. The Q&A session is starting with a focus on performance-related inquiries.

It's Gemini. It's Gemini all the way through. Nice. Did you try a few out to get the right one? Yeah, I did. But since I'm very deeply embedded in the DevTools source code, and they use Gemini and I could see how the prompts are worked out, I figured Gemini's training data probably would also have covered the metrics I care about. So my system prompt is tailored towards that, trying to focus the agent on answering performance-related questions. And it worked very well from the get-go.

Nice. And I'm going to do one last question, because we're just about running out of time here, but I think this is interesting. Does this long animation frame framework help you understand performance issues in WebGPU as well as the web in general? It would help a little bit. WebGPU code is not executed on the main thread; it's executed in its own separate thread. So it would help you understand when that code interacts with the main thread. And also, if it produced any kind of presentation delay during the render part, then you would be able to visualize the different timings. But when it comes to attribution within the animation frame, I would say not as much. Okay. Yeah. Fair enough. Yeah, the fact that it's handing off to something else, that makes sense. Yeah. Okay. Well, I'm going to cut it there. But I want everybody to give another thank you to Vinicius. Thank you.

There's a Q&A section now. You're heading downstairs to answer Q&A.
