Long Frames and INP – Understanding the Post Load Performance


Most developers spend a lot of time optimizing for load time performance metrics, but most of the user's experience happens after load. Let's understand what INP means for your React application, and how to utilize the new tooling around interactions, such as the new Long Animation Frame API alongside the INP metric, to optimize our runtime performance.

This talk has been presented at React Advanced 2024, check out the latest edition of this React Conference.

FAQ

Vinicius is a tech lead for Gillian and has worked for companies like Volvo, Spotify, and Klarna. He has six years of experience in performance optimization.

Long frames and INP are metrics that help assess the post-load performance of web applications, focusing on user interactions and responsiveness beyond the initial load time.

Post-load performance is crucial because users spend most of their time interacting with the application after it loads. Improving this can enhance user experience and reduce frustration.

Introduced in 2015, the RAIL model stands for Response, Animation, Idle, and Load. It provides a framework for understanding user interactions and improving responsiveness.

INP assesses overall page responsiveness by focusing on the longest interaction observed, replacing the first input delay as a core metric to consider post-load performance.

The Long Animation Frame API helps identify performance issues by providing detailed data on the execution time and delays in animation frames, improving attribution models for performance issues.

Developers traditionally focus on load time because slow loading can lead to high bounce rates and user frustration, often sidelining ongoing interaction performance.

The Long Task API lacks detailed attribution, failing to identify specific scripts or functions causing performance issues, and does not account for styling and layout tasks.

It provides a more comprehensive breakdown of work done during animation frames, offering better insights into delays caused by various tasks, including styling and layout.

Vinicius Dallacqua
23 min
28 Oct, 2024

Video Summary and Transcription
Welcome to my talk on long frames and INP, understanding the post-load performance. The rail model introduced in 2015 provides recommendations for responsiveness and organizing JavaScript work around user input. Metrics like FCP, FMP, and LCP measure visual completeness, while metrics like first interactive and CPU idle assess CPU idleness. Total blocking time measures the impact of long tasks on page responsiveness, and TTI assesses when the browser can respond to interactions reliably. INP is a user-centric responsiveness metric that encompasses all potential causes of slow interactions. The long animation frame API improves attribution and helps pinpoint performance problems. Tools like Vercel toolbar and trace viewer on PerfLab can visualize interaction to next paint. INP and long animation frames provide valuable insights for improving web experiences.

1. Introduction to Long Frames and INP

Short description:

Welcome to my talk on long frames and INP, understanding the post-load performance. Apps or websites that load slowly and have a bad loading experience can lead to frustration and high bounce rates. But forgetting the post-load part is something that can also happen, and that's where most of our users spend most of their time. In order to truly appreciate these amazing additions to our toolkit, we need a bit of a history lesson on how performance metrics got here, why they evolved the way they did, and how we got here on this long journey. Let's take a look at the rail model, introduced in 2015, which stands for response, animation, idle, and load. Each part of the rail model comes with a set of recommendations for responsiveness and organizing our JavaScript work around user input.

Hello, everyone. Welcome to my talk on long frames and INP, understanding the post-load performance. My name is Vinicius. I work as a tech lead for Gillian, and I have previously worked for Volvo, Spotify, and Klarna.

I have worked with performance for about six years, and you see some similarities in every company you work for when you work with performance for this long. We are all somewhat familiar with at least some form of performance debugging, be that Lighthouse or DevTools. We are also all somewhat familiar with performance metrics, sometimes with some metrics more than others. So we are familiar with Web Vitals. But most times when we talk about performance, we are biased towards load time, and it has been like this for a long time. We focus on delivering a good loading experience, and there is a good reason for that: apps or websites that load slowly and have a bad loading experience can lead to frustration and high bounce rates. But forgetting the post-load part is something that can also happen, and that's where most of our users spend most of their time.

And for a long time, even responsiveness metrics were somewhat bound to load time. How long does it take for the application or the page to be interactive? How long does it take for the browser to respond to the first user input? Or how long did JavaScript parsing block the main thread during load? All of those are very much focused on the loading experience. That's why we now have metrics such as INP and APIs such as long animation frame, and they help us decipher and improve interactiveness. But I'm getting a little bit ahead of myself there. In order to truly appreciate those amazing additions to our toolkit, we need a bit of a history lesson on how performance metrics got here, why they evolved the way they did, and how we got here on this long journey. You can only connect the dots looking backwards, and a bit of context can give great perspective on the reasons why we use metrics the way we do and why techniques got shaped the way they did. So let's take a look at those early days and see what the landscape looked like, why tools such as the rail model were shaped the way they did, and how they helped pave the way for the interactivity metrics to come, all the way to INP and long animation frames.

First, let's take a look at the rail model. It was introduced in 2015 and stands for response, animation, idle, and load. Response being responsiveness, not HTTP response. It was a significant step forward in how we think about users' perception and interactions. Each part of the rail model comes with a set of recommendations, so let's focus on what we have for response. It, again, stands for responsiveness, and it popularized a set of reference timings for how we divide work on the main thread, how we should think about and organize our JavaScript work around user input, and how long we should take to respond to user input and why.

2. Understanding Interactivity Metrics

Short description:

The rail model dictates a 100 millisecond window for processing user input and shipping the next frame. Metrics like FCP, FMP, and LCP helped measure visual completeness, while metrics like first interactive and CPU idle assessed CPU idleness. However, there was a lack of good ways to measure performance problems around user interactions. The total blocking time metric aimed to measure the impact of long tasks on page responsiveness. TTI, or time to interactive, assesses when the browser can respond to interactions reliably.

So the rail model dictates this 100 millisecond window as the total amount of time to process a user input and ship the next frame. But since processing also involves the event loop queuing the user input and dispatching the listeners, tasks were left with a 50 millisecond budget to respond within a frame window that could serve a 60 frames per second experience. Any task that exceeds that budget may incur visual jank and delay the visual update, resulting in a frustrating user interaction. Which leads us to the early days of interactivity metrics.
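To stay inside that 50 millisecond budget, long-running work can be split up so the browser gets a chance to handle input and paint between chunks. Here is a minimal sketch of that idea; the `yieldToMain` helper, the chunk size, and the `scheduler.yield` fallback chain are illustrative assumptions, not something from the talk:

```javascript
// Yield control back to the event loop: prefer scheduler.yield()
// where available (newer Chromium), fall back to a macrotask.
const yieldToMain = () =>
  typeof scheduler !== 'undefined' && scheduler.yield
    ? scheduler.yield()
    : new Promise((resolve) => setTimeout(resolve, 0));

// Process items in chunks, yielding between chunks so no single
// task hogs the main thread past the ~50 ms RAIL budget.
async function processInChunks(items, handler, chunkSize = 100) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(handler(item));
    }
    // Give the browser a chance to handle input and ship a frame.
    await yieldToMain();
  }
  return results;
}
```

The chunk size is a tuning knob: measure your handler's cost and pick a size that keeps each chunk comfortably under the budget.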

Although those metrics were almost entirely focused and biased towards the loading experience, they were a great starting point. Metrics such as FCP, first contentful paint, FMP, first meaningful paint, and LCP, largest contentful paint, helped us measure visual completeness, alongside metrics that gravitated around CPU idleness after load, such as first interactive and first consistently interactive, which later got renamed to first CPU idle and time to interactive. But we lacked good ways to measure and understand performance problems around user interactions. Our first metrics to assess interactivity were still measured around events that happened during or right after page load, assessing how much time it took for the browser to download and parse assets and be able to start responding to user inputs.

There was another metric that came around that time, after those, called total blocking time. And although it was a lab metric, and lab tools primarily assessed it in correlation to time to interactive, it was a step in the right direction in trying to create a metric around the impact of long tasks on page responsiveness. So let's take a closer look at how those metrics were computed and understand some of their shortcomings. Let's look into TTI, or time to interactive. It was first conceived as first consistently interactive, and it tries to assess when the browser is able to respond to interactions reliably. I think the original name conveys the concept really well, but it does not have a catchy acronym.
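The total blocking time idea can be sketched as a small function: each main-thread task over 50 ms contributes only its excess over the 50 ms budget, and TBT is the sum of those excesses. This is a simplified model of the metric, not Lighthouse's implementation:

```javascript
// Tasks under 50 ms are considered non-blocking under the RAIL budget.
const BLOCKING_THRESHOLD_MS = 50;

// Sum the portion of each long task that exceeds the budget.
function totalBlockingTime(taskDurations) {
  return taskDurations.reduce(
    (total, duration) => total + Math.max(0, duration - BLOCKING_THRESHOLD_MS),
    0
  );
}

// Three tasks: 70 ms contributes 20, 30 ms contributes 0,
// 120 ms contributes 70, so TBT = 90 ms.
totalBlockingTime([70, 30, 120]); // 90
```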

3. Responsive Metrics: INP

Short description:

The time to interactive and first input delay metrics were introduced to assess the browser's ability to respond reliably to user interactions. Total blocking time measures the impact of long tasks on the main thread, while the long task API allows for granular access to individual long tasks. However, the basic attribution model of the long task API lacks insight into specific scripts or functions. Considering only long tasks as a source of interactivity problems neglects performance issues related to styling and layout. To address these shortcomings, the Chrome team developed INP as a user-centric responsiveness metric that encompasses all potential causes of slow interactions.

I think that the original name conveys the concept really well, but it does not have a catchy acronym. The graph, taken from the original Google Docs case study for the metric, showcases the series of events the browser would take in order to compute time to interactive. And we can also use the official chart from the web.dev first input delay article to see how time to interactive and first input delay are connected.

Time to interactive measures, looking backwards, the total time the browser took from the start of the response all the way to when it reaches a five-second quiet window, in order to gauge when the browser is able to respond reliably to user interactions. First input delay measures the very first interaction and assesses any delay in handling that input. The first input delay metric was a good start in attempting to assess the disruption and jank in serving the first interaction, giving you a number in milliseconds for how much that interaction's response got delayed. The metric also came with some recommendations on how to avoid slow interactions. With that, we also got total blocking time as a lab metric to help us measure the total impact of long tasks.
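The first input delay measurement can be sketched from an event timing entry: the delay is the gap between when the input occurred (`startTime`) and when its handlers began running (`processingStart`). The entry below is a plain object standing in for a real `first-input` performance entry, with the browser wiring left as a comment:

```javascript
// FID is the queueing delay before the first input's handlers run.
function firstInputDelay(entry) {
  return entry.processingStart - entry.startTime;
}

// Browser wiring (sketch): observe the buffered 'first-input' entry.
// new PerformanceObserver((list) => {
//   const [entry] = list.getEntries();
//   console.log('FID:', firstInputDelay(entry), 'ms');
// }).observe({ type: 'first-input', buffered: true });

// Input at t=1000 ms, handlers started at t=1083 ms -> 83 ms delay.
firstInputDelay({ startTime: 1000, processingStart: 1083 }); // 83
```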

So this metric is not necessarily connected to user interactions; instead it measures how much work is being done on the main thread, which can impact the user's interactions and the visual updates that need to happen during that time. Though used by lab tools such as Lighthouse to measure the impact of long tasks during load time, it is not necessarily a load time metric, but rather a measurement of long tasks blocking the main thread over time. Since the rail model's inception, we have tried to understand the impact of a busy main thread on our users' experience, with fancy acronyms and mental models to help us divide work and better understand the impact of long tasks, with keeping a responsive and delightful user experience as a key result. Speaking of long tasks, the long task API is another important milestone. If total blocking time is the total blocking time over a period of time, the long task API provides us granular access to all of the long tasks that happened over such a period, giving you some form of enumeration and a basic attribution model over those entries. Although it was a step in the right direction, it did not solve our problems. The basic attribution model gives us more granularity and some information on each long task, but it fails to give us proper insight into which script or function it might be attributed to, giving you not much more than a timestamp and a total blocking duration. This quote actually comes from the article on long animation frames, on the shortcomings of the long task API. Another point is that if you only consider long tasks as a source of interactivity problems, you're eliminating an entire class of performance problems that has to do with styling and layout, which will also occupy the main thread, preventing the browser from responding to interactions and slowing down the production of new frames.
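A sketch of what the long task API actually hands you, illustrating the sparse attribution just described. The summarizing function takes plain entry-shaped objects; in a browser you would feed it real `longtask` entries from a `PerformanceObserver`, as sketched in the comment:

```javascript
// Summarize a long task entry. Note how little attribution there is:
// a coarse container type at best, never a script URL or function name.
function summarizeLongTask(entry) {
  const [attribution] = entry.attribution ?? [];
  return {
    startTime: entry.startTime,
    duration: entry.duration,
    // Only a container type (e.g. 'window', 'iframe') is exposed —
    // this is the key shortcoming the talk points out.
    container: attribution ? attribution.containerType : 'unknown',
  };
}

// Browser wiring (sketch):
// new PerformanceObserver((list) => {
//   list.getEntries().forEach((e) => console.log(summarizeLongTask(e)));
// }).observe({ type: 'longtask', buffered: true });
```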

And so with that knowledge, the Chrome team started investigating how they could create a better user-centric responsiveness metric, one that could observe not only load time but post-load time as well, as part of the user experience, and that also encompasses all of the parts that might be causing slow interactions. This image is taken from the article Towards a Better Responsiveness Metric that precedes the announcement of INP, and you can already see a lot of resemblance to how INP functions as a metric, and identify some of its parts. You can see the input delay section, the processing time section, and the next frame, shaped as a full account of the interaction duration. This is also represented in DevTools nowadays. On the interactions track in the performance panel, you see the input delay and the presentation delay as whiskers on each side of the input event, and the processing time as the solid bar, representing the totality of the interaction duration. So all three of those parts account for the interaction, with all the call frames within the stack displayed underneath on the main thread track. Which now gets us back to INP and long animation frames. So let's start with INP. INP, as a Core Web Vital, tries to assess the page's overall responsiveness by giving you a score based on the longest interaction observed. INP, similar to CLS, is a session-wide metric.

4. Responsive Metrics: INP and Animation Frames

Short description:

INP replaced first input delay as a Core Web Vital, accounting for all parts of an interaction. The long animation frame API improves attribution and provides richer data on work segments, including start time, duration, render and style/layout stages, and blocking duration. This helps pinpoint performance problems and identify scripts causing long execution times.

And it replaced first input delay as a Core Web Vital in March 2024. One important caveat: so as not to penalize highly interactive websites and applications, the highest interaction is dropped for every 50 interactions over the session duration. This is to avoid random glitches and one-off long tasks during the session being counted towards your Interaction to Next Paint. Here we have a great visualization of INP, coming from the article on web.dev, that showcases what is considered an interaction, showing all the different parts of the Interaction to Next Paint score as a timing. Every part of the interaction duration is accounted for in the attribution model for the INP score. So you can see the input delay, the processing time, and the presentation delay accounting for all parts of the interaction.
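The selection rule just described, dropping the highest interaction for every 50 interactions, can be sketched as a small function over a list of interaction durations. This is a simplified model of how the INP value is picked, not the full event-grouping logic that field tooling such as the web-vitals library performs for you:

```javascript
// Estimate INP from a session's interaction durations (ms):
// sort worst-first, skip one outlier per 50 interactions,
// and report the worst remaining interaction.
function estimateINP(interactionDurations) {
  if (interactionDurations.length === 0) return 0;
  const sorted = [...interactionDurations].sort((a, b) => b - a);
  const skip = Math.floor(interactionDurations.length / 50);
  return sorted[Math.min(skip, sorted.length - 1)];
}

// With only a few interactions, nothing is dropped:
estimateINP([40, 90, 210]); // 210
```

For a session of 60 interactions where one random 900 ms glitch occurred among otherwise fast 50 ms interactions, that single outlier is skipped and the reported INP stays at 50 ms, which is exactly the behavior the caveat above is protecting.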

Which brings us to the long animation frame API. It introduces the concept of an animation frame, which, similar to the INP metric, shifts our perception of how we section portions of work on the main thread. We can use the same model from the INP article to understand what an animation frame is. Like the INP metric, it goes from solely focusing on the JavaScript part to focusing on all of the work needed to ship a new frame: from any delay in execution caused by a previous task, to the task execution time during the animation frame, to any delay caused by styling and layout. This helps us better understand any problem that might lead to slow interactions or jank, as it focuses on how the user perceives performance, connecting visual updates and responsiveness to a single attribution model.

It is also important to notice that long frames can happen at any point in time, as script execution comes from many different sources and triggers different side effects, including visual updates without direct user input. So animation frames are really helpful for identifying all sorts of different problems connected to responsiveness. And as a model, the animation frame helps us better section portions of work on the main thread, accounting for everything that might be delaying the next frame. Meaning that the long animation frame API, and animation frames as a unit of work, is a much better attribution model to work with, and it provides us with much richer data. Similar to the long task API, we have a timestamp and a duration, but we also have a breakdown of all the segments of the animation frame as a unit of work. So you can see we have the start time and duration, the renderStart and styleAndLayoutStart timestamps, and the blocking duration as well, alongside any input that might have happened during the animation frame. With that data provided by the animation frame entry, we can section all of the units of work that happened during that interval, or animation frame, and understand where it spent the most time. You have the start time and end time, which can be easily calculated, but also the total work duration, that is, JavaScript execution time, the render duration, the pre-layout duration, and the style-and-layout duration. This will really help you pinpoint problems during that animation frame very consistently. It also provides better attribution when it comes to any script that could have resulted in a long execution time, giving you separate timing per source and details such as which script and function called it.
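The segment math just described can be sketched from a long-animation-frame entry's timestamps. The field names (`renderStart`, `styleAndLayoutStart`) follow the Long Animation Frame API, but the entry here is a plain object standing in for a real `PerformanceLongAnimationFrameTiming` entry:

```javascript
// Slice an animation frame entry into the segments described above.
function loafBreakdown(entry) {
  const endTime = entry.startTime + entry.duration;
  return {
    endTime,
    // Scripts and other work before rendering began.
    workDuration: entry.renderStart - entry.startTime,
    // Everything from the start of rendering to the end of the frame.
    renderDuration: endTime - entry.renderStart,
    // requestAnimationFrame callbacks etc. before style/layout.
    preLayoutDuration: entry.styleAndLayoutStart - entry.renderStart,
    // Style recalculation and layout itself.
    styleAndLayoutDuration: endTime - entry.styleAndLayoutStart,
  };
}

loafBreakdown({
  startTime: 1000,
  duration: 180,
  renderStart: 1120,
  styleAndLayoutStart: 1150,
});
// -> work 120 ms, render 60 ms, preLayout 30 ms, styleAndLayout 30 ms
```

In a browser, entries like this arrive via `new PerformanceObserver(cb).observe({ type: 'long-animation-frame', buffered: true })`.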

5. Understanding Animation Frame Attribution

Short description:

The attribution data on animation frames helps with better attribution and understanding of interaction to next paint. Visualizing interaction to next paint can be done with tools like the Vercel toolbar and the trace viewer on PerfLab. Long animation frames can help identify long tasks and frames, but they have their shortcomings in terms of script attribution. It's important to remember that script attribution is limited to scripts running in the main thread and can be affected by third-party scripts and missing source information. INP and long animation frames provide valuable insights for improving web experiences.

So you can see that we have the different sections for who called the script, separate timing data per source for better attribution, and also source details.

Here's a quick example of how the attribution data looks when you put it all together on an animation frame entry. It's important to note that the interaction to next paint entry might have more than one long animation frame entry, since the input delay in the attribution model can be connected to its own long animation frame entry. It is also important to remember that long animation frame entries might have multiple scripts attributed to their execution time.
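One way to connect the two, as a sketch: match an interaction's event timing entry to the long animation frame entries it overlaps in time, since, as noted above, an interaction can span more than one frame (one frame for the input delay, another for the processing). The entry shapes here are simplified stand-ins for real performance entries:

```javascript
// Find every animation frame whose time range overlaps the interaction.
function framesForInteraction(eventEntry, loafEntries) {
  const start = eventEntry.startTime;
  const end = eventEntry.startTime + eventEntry.duration;
  return loafEntries.filter(
    (frame) =>
      frame.startTime < end && frame.startTime + frame.duration > start
  );
}

const frames = [
  { startTime: 0, duration: 90 },
  { startTime: 100, duration: 220 },
  { startTime: 400, duration: 60 },
];
// Interaction spans 120–320 ms, so only the frame starting at 100 overlaps.
framesForInteraction({ startTime: 120, duration: 200 }, frames);
```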

Here we have two examples of how you can visualize interaction to next paint in the wild. On the left you have the Vercel toolbar showing a collection of INP entries in dev mode, showing all the different parts of each interaction as a breakdown with timings. And on the right you have the trace viewer in a tool that I'm creating called PerfLab. You can see the interaction to next paint entry highlighted on the trace displayed above, and alongside it you can see report cards with different timings for the same trace. This is another trace visualized on PerfLab, and I wanted to showcase that animation frames can happen at any given time. This long animation frame entry is not directly linked to any interaction, and thus does not account for any INP, as this trace was captured over load time, where I wanted to assess the impact of different third parties on the total loading duration for the trace session. But long animation frames can help you identify long tasks and, well, long frames that happen at any time, so we can better understand the impact of the different things that happen at any point.

But it's important to understand that long animation frames also have their shortcomings. As great an addition as it is, it is not free from some caveats that are very important to remember. Here we can see all the scripts that happened during the timeline, because this is a trace that I captured, and I am highlighting the entry itself as an animation frame entry within that trace. But if you're capturing animation frames or INP metrics in the wild, without a trace, it's important to remember a few things. Script attribution can only come from scripts running in the main thread of the page, which includes same-origin iframes. As long animation frames can also be caused by third-party scripts executed on your website, or even Chrome extensions, they can impact your animation frame attribution and INP attribution with a longer execution time but a missing script attribution. There is also a problem, at least as of the time of writing this presentation, where scripts without source information, such as event handler callbacks and inline scripts, will also lack attribution data within the attribution model.

We've come so far from the early days of performance tooling and metrics and now have such incredible tools at our disposal to help us better understand and improve experiences on the web. It is a truly incredible journey with INP and long animation frame as the latest entries to the toolkit and we can finally have a better understanding of user interactions and better deliver delightful post-load experiences to our users. Thank you very much. My name is Vinicius and I'm on X or Twitter as WebTwitter. Feel free to reach out and thank you. Have a great day.
