Core Web Vitals - What, Why and How?


Performance can make or break a website, but how can you quantify that? In this session we will look at the Core Web Vitals as a way to measure performance on the web. Specifically, we'll go through the history of web performance measurements, where the new metrics come from and how they are measured.

This talk was presented at TestJS Summit in January 2021. Check out the latest edition of this tech conference.

FAQ

Core Web Vitals are a set of metrics that Google uses to assess the user experience of a website in terms of its performance. These metrics include Largest Contentful Paint (LCP), First Input Delay (FID), and Cumulative Layout Shift (CLS). They are crucial because they help web developers, testers, and SEO specialists understand and improve the performance and interactivity of web pages, directly impacting SEO and user satisfaction.

Core Web Vitals are part of the Page Experience signal used by Google to measure the quality of user experience provided by a web page. From May 2021, this signal influences a website's search ranking. High Core Web Vitals scores can improve a website's SEO ranking, making it more likely to appear in top search results.

Largest Contentful Paint (LCP) measures the time it takes for the main content of a page to become visible to the user, ideally within 2.5 seconds. It is a critical metric for assessing visual loading performance and ensuring that users perceive a website as fast, which can significantly enhance user experience.
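
To make this concrete, here is a minimal sketch of observing LCP candidates with the standard PerformanceObserver API, assuming a browser that supports the largest-contentful-paint entry type; production code would more typically rely on Google's web-vitals library:

```ts
// Minimal sketch: log Largest Contentful Paint candidates.
// Assumes browser support for the 'largest-contentful-paint' entry type.
const lcpObserver = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // Each entry is a new, larger LCP candidate; the last candidate
    // reported before the first user interaction is the LCP value.
    console.log('LCP candidate (ms):', entry.startTime);
  }
});
lcpObserver.observe({ type: 'largest-contentful-paint', buffered: true });
```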

First Input Delay (FID) measures the time from when a user first interacts with a page (e.g., clicking a link, tapping on a button) to the time when the browser is able to respond to that interaction. This metric is crucial for understanding the interactivity and responsiveness of a page, aiming for an FID of less than 100 milliseconds.
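
A minimal sketch of deriving FID from the first-input entry type, again assuming browser support for the Event Timing API; the delay is the gap between the interaction itself and the moment the browser could begin processing its handlers:

```ts
// Minimal sketch: derive First Input Delay from the 'first-input' entry.
// Assumes browser support for the Event Timing API.
const fidObserver = new PerformanceObserver((entryList) => {
  const [entry] = entryList.getEntries() as PerformanceEventTiming[];
  if (entry) {
    // FID is the time between the user's first interaction (startTime)
    // and the moment the browser could begin running its event handlers
    // (processingStart).
    console.log('FID (ms):', entry.processingStart - entry.startTime);
  }
});
fidObserver.observe({ type: 'first-input', buffered: true });
```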

Cumulative Layout Shift (CLS) measures the sum of all individual layout shift scores for every unexpected layout shift that occurs during the entire lifespan of the page. A low CLS ensures that the page is stable as it loads, preventing elements from shifting unexpectedly and causing accidental clicks, thus enhancing user experience.
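
A minimal sketch of accumulating CLS from layout-shift entries, assuming browser support for the Layout Instability API; note that shifts immediately following user input are excluded by design:

```ts
// The 'layout-shift' entry type is not in TypeScript's standard DOM
// typings, so we declare the two fields we need ourselves.
interface LayoutShiftEntry extends PerformanceEntry {
  value: number;
  hadRecentInput: boolean;
}

// Minimal sketch: sum layout-shift scores over the page's lifespan.
let clsScore = 0;
const clsObserver = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries() as LayoutShiftEntry[]) {
    // Shifts within 500 ms of user input are expected and do not count.
    if (!entry.hadRecentInput) {
      clsScore += entry.value;
    }
  }
  console.log('CLS so far:', clsScore);
});
clsObserver.observe({ type: 'layout-shift', buffered: true });
```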

Improving Core Web Vitals scores can involve optimizing server response times, reducing JavaScript execution times, ensuring quick loading of major content, minimizing layout shifts through proper styling and layout practices, and enhancing the site's mobile-friendliness. Regular monitoring and testing with tools like Lighthouse and Google's PageSpeed Insights are also crucial.

Google updates the thresholds and metrics for Core Web Vitals approximately every year to align with evolving web technologies and user expectations. These updates ensure that the metrics remain relevant and accurately reflect real-world user experience.

Martin Splitt
27 min
15 Jun, 2021


Video Summary and Transcription

This Talk provides an introduction to Core Web Vitals and their contribution to Google Search. It discusses the evolution of website performance metrics and the need to consider factors beyond the time to first byte. The concept of Core Web Vitals is introduced, consisting of three metrics: Largest Contentful Paint, First Input Delay, and Cumulative Layout Shift. The upcoming Page Experience signal, launching in May 2021, will combine Core Web Vitals with existing ranking signals. The Talk also addresses challenges in measuring performance and provides insights on layout stability and visual completeness.

1. Introduction to Core Web Vitals

Short description:

Hello and welcome to my session about Core Web Vitals. We'll talk about web performance, the Core Web Vitals, and how they contribute to Google Search. Website performance is about quantifying whether a website is fast and delightful for users. It has evolved over time and continues to evolve as our understanding of web performance changes.

Hello and welcome to my session about Core Web Vitals, their what, why, and how, more specifically. So this is a testing conference, and I'm always a little humbled to speak at testing conferences because I'm not that much into the testing space anymore. I do write tests when I write my code, but you are all probably bigger experts here than I am. Nonetheless, testing your website performance is an important thing, and the Core Web Vitals are a tool to accomplish exactly that.

So I think it makes sense to discuss these things. I'll look with you at three different things tonight. First things first, we'll talk about web performance, or what website performance actually is. Then we'll talk about the Core Web Vitals, and finally how the Core Web Vitals will contribute to Google Search in the form of the Page Experience signal launching in May. So there are some SEO, or search engine optimization, implications from this as well.

So let's start with: what is website performance? Intuitively, we all know the answer to this question: is a website fast and delightful to use or not? But if you want to compare that between sites, and maybe even between different versions of the same site, it becomes a lot more tricky, because you want something that you can compare and track over time, and intuitive measurements don't really tick that box. The goal is to quantify it, to have some sort of number or metric that tells us whether a website is fast and delightful for a user to use or not. As we will see in this talk, this has evolved over time and continues to evolve even today, as our understanding of what makes a website fast, performant, and delightful changes, and as the web and the kinds of websites we build change. There won't be an easy answer. That's kind of like the spoiler alert. But let's have a look at this.

2. Quantifying Web Page Performance

Short description:

One of the earliest metrics to quantify web page performance is the time to first byte. However, this metric is no longer sufficient to determine if a website is fast and delightful. The website architecture has changed, and bandwidth and connection speeds are not the main bottleneck anymore. A better metric is the overall completeness of the response. For example, a slower website that delivers a more complete response is considered better than a faster website that delivers an incomplete response. Time to first byte is still useful in identifying connection issues, but other factors such as rendering speed should also be considered.

How could we quantify web page performance? One of the earliest metrics has probably been the time to first byte. We measure how long it takes for the first byte of the response from the server to come back to our computer or device, at which point the browser can start parsing and then eventually rendering the page.
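
As a minimal sketch, the time to first byte of the current page can be read from the standard Navigation Timing API, assuming a reasonably modern browser:

```ts
// Minimal sketch: read the time to first byte (TTFB) of the current page
// from the Navigation Timing API.
const [navEntry] = performance.getEntriesByType(
  'navigation',
) as PerformanceNavigationTiming[];

if (navEntry) {
  // responseStart marks the arrival of the first response byte,
  // in milliseconds relative to the start of the navigation.
  console.log('TTFB (ms):', navEntry.responseStart);
}
```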

And historically, this has made a lot of sense. With classical websites, like this example.com case, our browser would make a request, the web server would respond with the HTML, and then the content would be visible in the browser. There are a few factors that we can influence as website owners and developers to make sure that this stays fast. We make sure that our server is fast, has enough memory, enough capacity, and good network bandwidth. We can also make sure that the server is physically close enough, because it simply takes time for data, as electrical or light impulses, to travel. If I'm here in Switzerland and the server is in Australia, then it might take a while until the data has made its way to Australia and back. It might be lost on the way and then have to be retransmitted. So this can take significantly longer than when the server is, for instance, in my own city. I live near a data center, so if the server is located there, it takes basically no time at all. It's going to be really, really quick, and thus the time to first byte will be a lot shorter than it would be with a server in Australia.

But is this an exhaustive, good metric? Is this all we need to quantify whether a website is fast and delightful? No. And that's partially because website architecture has changed over time, but also because bandwidth and connection speeds are not necessarily the biggest bottleneck anymore. So let's look at two websites. I open both websites on the same machine, at the same physical location, at the same time. Maybe I have two machines next to each other on the same internet connection; it doesn't really matter. I go to A.example.com and B.example.com, and we assume that these are completely separate servers and completely separate web applications. So these requests go out, and A.example.com takes a while. Maybe it's a classical PHP or Java or Python or Ruby program that needs to run on the server. Maybe it is a server-side rendered application that needs to talk to a bunch of backends and APIs and databases to actually fetch the data and then compile the HTML before sending it over the wire. It doesn't really matter. The point is it takes a moment; it doesn't matter how long this moment is. B.example.com, on the other hand, has received the request and immediately responds, so the first byte has arrived, and it's HTML that says: load this piece of JavaScript. Now, the next second, A.example.com responds with the full HTML. It has done all the things it needs to do on the server, and my browser shows me the website. Whereas with B.example.com, we are only at the stage where we get the App.js, which then comes back and starts running. Once the JavaScript starts running, it discovers, oh, we need to make this bunch of API requests, and these come back, all while the browser still has nothing substantial to show to the user. Now, which of these two websites is better, more delightful, and faster according to a user looking at both browser windows? Very clearly A.example.com. But if you remember, according to the time to first byte metric, A.example.com was the slower one: it took longer until we received the first byte of the response, but when we received it, the response was more complete than the other one. So time to first byte is not good enough these days, and on its own it is not a really useful metric. It is still relatively useful, because if you see that your website is slow, but the rendering itself is really fast and you don't have to wait much until things are painted, then the connection time, the time it takes for data to go over the wire and come back, is the bottleneck you need to fix, and you can fix that with a CDN or something similar.
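
Paint-based metrics capture exactly this difference. Here is a minimal sketch, assuming browser support for the paint entry type, that logs First Contentful Paint; for the two sites above, A.example.com would report the later first byte but the earlier paint, matching what the user actually sees:

```ts
// Minimal sketch: log First Contentful Paint via the 'paint' entry type.
// A site can have a slow TTFB yet an early FCP (or the reverse), which
// is why TTFB alone cannot capture perceived speed.
const paintObserver = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    if (entry.name === 'first-contentful-paint') {
      console.log('FCP (ms):', entry.startTime);
    }
  }
});
paintObserver.observe({ type: 'paint', buffered: true });
```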


Check out more articles and videos

We constantly curate articles and videos that might spark your interest, skill you up, or help you build a stellar career.

A Guide to React Rendering Behavior
React Advanced Conference 2022
25 min
Top Content
This transcription provides a brief guide to React rendering behavior. It explains the process of rendering, comparing new and old elements, and the importance of pure rendering without side effects. It also covers topics such as batching and double rendering, optimizing rendering and using context and Redux in React. Overall, it offers valuable insights for developers looking to understand and optimize React rendering.
Speeding Up Your React App With Less JavaScript
React Summit 2023
32 min
Top Content
Miško Hevery, the creator of Angular and AngularJS, discusses the challenges of website performance and JavaScript hydration. He explains the differences between client-side and server-side rendering and introduces Qwik as a solution for efficient component hydration. He demonstrates examples of state management and intercommunication using Qwik. He highlights the performance benefits of using Qwik with React and emphasizes the importance of reducing JavaScript size for better performance. Finally, he mentions the use of Qwik in both MPA and SPA applications for improved startup performance.
React Concurrency, Explained
React Summit 2023
23 min
Top Content
React 18's concurrent rendering, specifically the useTransition hook, optimizes app performance by allowing non-urgent updates to be processed without freezing the UI. However, there are drawbacks, such as longer processing time for non-urgent updates and increased CPU usage. The useTransition hook works similarly to throttling or debouncing, making it useful for addressing performance issues caused by multiple small components. Libraries like React Query may require the use of alternative APIs to handle urgent and non-urgent updates effectively.
The Future of Performance Tooling
JSNation 2022
21 min
Top Content
Today's Talk discusses the future of performance tooling, focusing on user-centric, actionable, and contextual approaches. The introduction highlights Addy Osmani's expertise in performance tools and his passion for DevTools features. The Talk explores the integration of user flows into DevTools and Lighthouse, enabling performance measurement and optimization. It also showcases the import/export feature for user flows and the collaboration potential with Lighthouse. The Talk further delves into the use of flows with other tools like WebPageTest and Cypress, offering cross-browser testing capabilities. The actionable aspect emphasizes the importance of metrics like Interaction to Next Paint and Total Blocking Time, as well as the improvements in Lighthouse and performance debugging tools. Lastly, the Talk emphasizes the iterative nature of performance improvement and the user-centric, actionable, and contextual future of performance tooling.
Optimizing HTML5 Games: 10 Years of Learnings
JS GameDev Summit 2022
33 min
Top Content
PlayCanvas is an open-source game engine used by game developers worldwide. Optimization is crucial for HTML5 games, focusing on load times and frame rate. Texture and mesh optimization can significantly reduce download sizes. GLTF and GLB formats offer smaller file sizes and faster parsing times. Compressing game resources and using efficient file formats can improve load times. Framerate optimization and resolution scaling are important for better performance. Managing draw calls and using batching techniques can optimize performance. Browser DevTools, such as Chrome and Firefox, are useful for debugging and profiling. Detecting device performance and optimizing based on specific devices can improve game performance. Apple is making progress with WebGPU implementation. HTML5 games can be shipped to the App Store using Cordova.
Power Fixing React Performance Woes
React Advanced Conference 2023
22 min
Top Content
This Talk discusses various strategies to improve React performance, including lazy loading iframes, analyzing and optimizing bundles, fixing barrel exports and tree shaking, removing dead code, and caching expensive computations. The speaker shares their experience in identifying and addressing performance issues in a real-world application. They also highlight the importance of regularly auditing webpack and bundle analyzers, using tools like Knip to find unused code, and contributing improvements to open source libraries.

Workshops on related topic

React Performance Debugging Masterclass
React Summit 2023
170 min
Top Content
Featured Workshop, Free
Ivan Akulov
Ivan’s first attempts at performance debugging were chaotic. He would see a slow interaction, try a random optimization, see that it didn't help, and keep trying other optimizations until he found the right one (or gave up).
Back then, Ivan didn’t know how to use performance devtools well. He would do a recording in Chrome DevTools or React Profiler, poke around it, try clicking random things, and then close it in frustration a few minutes later. Now, Ivan knows exactly where and what to look for. And in this workshop, Ivan will teach you that too.
Here’s how this is going to work. We’ll take a slow app → debug it (using tools like Chrome DevTools, React Profiler, and why-did-you-render) → pinpoint the bottleneck → and then repeat, several times more. We won’t talk about the solutions (in 90% of the cases, it’s just the ol’ regular useMemo() or memo()). But we’ll talk about everything that comes before – and learn how to analyze any React performance problem, step by step.
(Note: This workshop is best suited for engineers who are already familiar with how useMemo() and memo() work – but want to get better at using the performance tools around React. Also, we’ll be covering interaction performance, not load speed, so you won’t hear a word about Lighthouse 🤐)
Building WebApps That Light Up the Internet with QwikCity
JSNation 2023
170 min
Featured Workshop, Free
Miško Hevery
Building instant-on web applications at scale has been elusive. Real-world sites need tracking, analytics, and complex user interfaces and interactions. We always start with the best intentions but end up with a less-than-ideal site.
QwikCity is a new meta-framework that allows you to build large-scale applications with constant startup performance. We will look at how to build a QwikCity application and what makes it unique. The workshop will show you how to set up a QwikCity project and how routing works with layouts. The demo application will fetch data and present it to the user in an editable form. And finally, we will see how you can use authentication: all of the basic parts for any large-scale application.
Along the way, we will also look at what makes Qwik unique, and how resumability enables constant startup performance no matter the application complexity.
Next.js 13: Data Fetching Strategies
React Day Berlin 2022
53 min
Top Content
Workshop, Free
Alice De Mauro
- Introduction
- Prerequisites for the workshop
- Fetching strategies: fundamentals
- Fetching strategies – hands-on: fetch API, cache (static VS dynamic), revalidate, suspense (parallel data fetching)
- Test your build and serve it on Vercel
- Future: Server components VS Client components
- Workshop easter egg (unrelated to the topic, calling out accessibility)
- Wrapping up
React Performance Debugging
React Advanced Conference 2023
148 min
Workshop
Ivan Akulov
Ivan’s first attempts at performance debugging were chaotic. He would see a slow interaction, try a random optimization, see that it didn't help, and keep trying other optimizations until he found the right one (or gave up).
Back then, Ivan didn’t know how to use performance devtools well. He would do a recording in Chrome DevTools or React Profiler, poke around it, try clicking random things, and then close it in frustration a few minutes later. Now, Ivan knows exactly where and what to look for. And in this workshop, Ivan will teach you that too.
Here’s how this is going to work. We’ll take a slow app → debug it (using tools like Chrome DevTools, React Profiler, and why-did-you-render) → pinpoint the bottleneck → and then repeat, several times more. We won’t talk about the solutions (in 90% of the cases, it’s just the ol’ regular useMemo() or memo()). But we’ll talk about everything that comes before – and learn how to analyze any React performance problem, step by step.
(Note: This workshop is best suited for engineers who are already familiar with how useMemo() and memo() work – but want to get better at using the performance tools around React. Also, we’ll be covering interaction performance, not load speed, so you won’t hear a word about Lighthouse 🤐)
High-performance Next.js
React Summit 2022
50 min
Workshop
Michele Riva
Next.js is a compelling framework that makes many tasks effortless by providing many out-of-the-box solutions. But as soon as our app needs to scale, it is essential to maintain high performance without compromising maintenance and server costs. In this workshop, we will see how to analyze Next.js performance and resource usage, how to scale the app, and how to make the right decisions while designing the application architecture.
Maximize App Performance by Optimizing Web Fonts
Vue.js London 2023
49 min
Workshop, Free
Lazar Nikolov
You've just landed on a web page and you try to click a certain element, but just before you do, an ad loads on top of it and you end up clicking that thing instead.
That…that’s a layout shift. Everyone, developers and users alike, know that layout shifts are bad. And the later they happen, the more disruptive they are to users. In this workshop we're going to look into how web fonts cause layout shifts and explore a few strategies of loading web fonts without causing big layout shifts.
Table of Contents:
- What is CLS and how is it calculated?
- How can fonts cause CLS?
- Font loading strategies for minimizing CLS
- Recap and conclusion