So let's start today's talk with a very deep dive into Google Lighthouse. I'm pretty sure most of you have used that tool at least once in your professional career, and it's hard to be surprised. This audit is available through browser extensions and external pages, and it can also be run directly from Chrome DevTools. Just open the Lighthouse panel (formerly the Audits tab), run the test on any website, and voila, it will give you very detailed information on how well your page performs against certain metrics, which aim to describe the real experience of your users as accurately as possible.
So Lighthouse basically boils a website's performance down to a single number between zero and one hundred. It can also be treated as a percentage, where a score below 50 is considered bad and a score above 90 is considered good. I will come back to this topic, because it could be surprising that only the top 10% of the scale counts as good. But okay, going back to the topic: where do these magical numbers come from? It turns out that each metric has its own weight, corresponding to its impact on the overall experience of the end user. And as you can see here, if we improve Total Blocking Time, Largest Contentful Paint, or Cumulative Layout Shift, we will improve our Lighthouse score much faster than by optimizing the other metrics. Of course, it is important to focus on all of them, as each describes a different piece that sums up to the overall experience of your users, but some metrics are identified by Google as more important than others, in the sense that their impact on the perceived user experience is bigger. Of course, everything is subjective, but this is what the data says, so we have to believe it.
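To make the weighting concrete, here is a minimal sketch of how the overall performance score is assembled from the individual metric scores. The weights below are the ones used in Lighthouse v8; they change between Lighthouse versions, so treat them as illustrative rather than definitive.

```javascript
// Lighthouse v8 performance weights (illustrative -- they change
// between versions). Each metric score is a value between 0 and 1.
const WEIGHTS = {
  firstContentfulPaint: 0.10,
  speedIndex: 0.10,
  largestContentfulPaint: 0.25,
  timeToInteractive: 0.10,
  totalBlockingTime: 0.30,
  cumulativeLayoutShift: 0.15,
};

// Weighted average of the metric scores, reported on a 0-100 scale.
function performanceScore(metricScores) {
  let total = 0;
  for (const [metric, weight] of Object.entries(WEIGHTS)) {
    total += weight * metricScores[metric];
  }
  return Math.round(total * 100);
}
```

Note how TBT alone accounts for 30% of the total: cutting your Total Blocking Time in half moves the needle far more than the same relative improvement in, say, First Contentful Paint.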
OK, so we know how the overall Google Lighthouse score is calculated, but what we don't know is how the score of each individual metric is calculated. What do I mean by this? Let's take Total Blocking Time: how do we know that one second is a good or a bad result? Where do we know this from? Google uses real-world data collected by HTTP Archive. And if you don't know what HTTP Archive is, I really, really encourage you to check it out, because it's just awesome. It contains a lot of useful information about the web in general and how it's being used, gathered from both real users and automated crawls, and it can help you understand what is important and how your website performs against others in your field or a different one. And I'm warning you, some of the information you'll find there can be really depressing, or at least surprising. My favorite example: you can learn that it's 2021 and the average amount of compressed, I repeat, compressed JavaScript shipped by a typical website is almost 450 kilobytes on mobile. 450 kilobytes of compressed data on a mobile device. That's a few megabytes of uncompressed JavaScript. On mid- or low-end devices, that results in a really bad experience and loading times long enough to make a coffee, come back, and still wait until the website is ready. Really, there are plenty of examples where a page takes 13, 14, even 15 seconds to load on a mobile device. Would you wait that long? I don't think so. OK, going back to the main topic: based on the HTTP Archive data for each metric, Google sets the ranges for a good, medium, or bad score.