Building a Fast Website for Every Single Visitor


Learn how to build fast, adaptive web applications that dynamically respond to user conditions and context in this informative talk. Discover the principles and techniques behind adaptive design, including responsive layouts and dynamic interactions, optimized for different browsers, device strengths, internet speeds, screen sizes, and user preferences. Explore the role of data-driven decision-making and user analytics in tailoring content and features quickly and efficiently based on these variables. Gain practical insights into implementing fast, adaptive web apps using various technologies and frameworks. Understand the significance of user testing and feedback loops in refining the application for a seamless and speedy user experience. Walk away with actionable strategies to create personalized, high-performance web experiences that drive engagement and success.

This talk has been presented at React Advanced 2024; check out the latest edition of this React conference.

FAQ

The talk focuses on building a fast and accessible website for every user, emphasizing the importance of considering user perception and adaptive implementation.

User perception is crucial because it varies depending on the user's context, such as their device and connection speed, which affects how fast or slow a website feels to them.

The two methods are lab testing, which simulates user experience, and field testing (or real user monitoring), which collects real data from users.

Developers can use responsive images with the `srcset` attribute to provide different image sizes for different devices, ensuring better performance and user experience.

Browserslist helps bundlers like webpack and Babel know which features to polyfill or transpile, ensuring compatibility with the specified browsers and reducing unnecessary code.

Developers can use the 'window.navigator.deviceMemory' API to get an approximate number of gigabytes of RAM on the user's device, allowing them to tailor the user experience accordingly.

The `navigator.hardwareConcurrency` API provides the number of logical processor cores, helping developers optimize performance by tailoring resource-intensive tasks to the user's device capabilities.

A user's internet connection speed can drastically impact how quickly a website loads, affecting their overall experience. Developers can use APIs to detect connection speed and adjust content delivery accordingly.

The `effectiveType` property of the Network Information API gives an estimate of the user's network conditions (like 4G or 3G). It is controversial due to privacy concerns, as it can potentially be used to infer a user's location and movements.

The Aurora project collaborates with frameworks to optimize image delivery, providing an image component that is optimized out of the box for better performance.

Medhat Dawoud
31 min
25 Oct, 2024

Video Summary and Transcription
This talk focuses on building a fast and accessible website for all users, highlighting the importance of performance and user experience optimization. It emphasizes the need for adaptive implementation to cater to different devices and user conditions. The talk also discusses the factors beyond the developer's control, such as screen size, browsers, devices, internet connection, and sitting position. It highlights the significance of optimizing image components for various devices and the role of browser support and rendering engines. The speaker discusses the use of future APIs and the challenges of browser compatibility, as well as optimizing image formats and bundler compatibility. The talk provides insights on controlling bundler and device compatibility, optimizing CPU usage, internet connection, and JavaScript form submission. It concludes with a proposal to respond to `saveData` instead of `effectiveType` for limited internet connections, and recommends using React with adaptive hooks for better user experiences. Overall, the talk covers essential aspects of building a fast and accessible website.

1. Building a Fast and Accessible Website

Short description:

Hello, everyone. This talk is about building a fast and accessible website for every visitor. We cannot control various aspects of the user's browsing experience, such as screen size, internet connection, browser version, and device strength. As developers, we have the privilege of using the latest technology, but there is one thing we should not simulate: the user's sitting position. Building a fast website is essential, but accessibility is equally important. Let me introduce myself as Medhat Dawoud, a senior software engineer at Miro and a Google Developer Expert in performance.

Hello, everyone. It's so good to be here, speaking in the UK for the first time. So hi, London, and hi to the people watching us online and to the people who will watch the recording later as well.

As a disclaimer, this talk is a bit long. So if you'd like to ask me questions I can't pick up on stage, please find me, and not only afterwards; I'll be available during the day.

This talk is about building a fast website for every single visitor, and "every single visitor" is the part I want to emphasize today. I will start with a quick question: what makes a web user so special, so challenging? I will answer it myself. There is a lot we cannot control as web developers or engineers: where users are going to browse your website, on which screen size, over which internet connection (sometimes with no internet connection at all), in which browser, and not even just which browser, but which browser version. We also cannot control what device they are using, or how strong or weak that device is. We cannot even control how they are sitting while browsing your website, right?

On the other hand, we developers are very privileged, and we should be aware of that, because we always use the latest technology and the latest devices. There is a reason for that: we run a lot of heavy tooling, we usually have the best internet, and we tend to use the latest version of any browser so we can make use of any new feature. However, there is only one thing where we do simulate the user, and we should not: the sitting position. It is the only thing we don't need, but there you go.

So we need not only to build a fast website for every single visitor, we need it also to be more accessible. It's all about accessibility here. It's not only about being fast. Fast is only one factor of that.

So allow me to introduce myself. Thanks, Matim, for the introduction. My name is Medhat Dawoud. I'm a senior software engineer working for Miro, and I'm also a Google Developer Expert in performance. If you like what you see today, you can follow me on Twitter or find some articles I'm writing on my blog. So let's jump back to what we do.

2. Optimizing Performance and User Experience

Short description:

Performance depends on user perception. Different modes of transport may appear faster or slower depending on the observer's perspective. To measure performance, there are two ways: lab testing, which simulates the user experience but is not accurate, and field testing or real user monitoring, which provides detailed information about user behavior and device distribution. It is important to personalize the website experience based on user needs and optimize for accessibility. Choose a target and aim to build a great user experience.

Performance depends entirely on user perception. Nothing you perceive is quite the same as what I perceive. Imagine yourself on a skateboard and you see someone on a bike: he feels faster than you, right? If you're on the bike and you see someone in a car, the car is still faster than you. And if you're in the car and you see someone in a Ferrari, you'll feel he is still getting faster. So it really depends on where you are. Every time you are on the right and looking to the left, you feel the other one is faster, and it works the other way around: from the car, the cyclist feels slow. So, just to make sure you understand this one: would a motorcycle be fast or slow? It really depends on where you are. For some people it will feel relatively slow, and for others it will feel fast. So we need to learn how and where the user stands.

To measure performance, we simply have two basic ways. The first one is lab testing, where you simulate how the user is going to experience the site, and you have to gather enough data to decide whether it is fast or slow. This is not accurate; again, it really depends on the great devices you're using. You have an M1 and are looking at an M2 or M3, and lab-testing behavior on an M3 is different from an Intel machine. I'm talking from experience. The other way, which I'm more interested in today, is field testing, also known as RUM, real user monitoring. With this one you get tons of information, including Core Web Vitals and different metrics. You also get the device distribution and the connection distribution, and you will be surprised if you run this on your website or any application you're building: you'll find a lot of people still using 3G or 2G, so it's not very accessible for them. Based on this information, you will recognize that there is no one-size-fits-all. You have to make it very personalized to the user and how they experience your website. It's all about perception again. So you need to pick a target. Either you build for the best user conditions and create the glory user experience.

3. Adaptive Implementation for Best User Experience

Short description:

Poor accessibility for low-end devices and a great experience for high-end devices are not acceptable. We should aim for an adaptive implementation that provides the best user experience and accessibility for everyone. Progressive enhancements and graceful degradation allow for a wider spectrum of experiences based on user conditions.

That gives poor accessibility to people with low-end devices, even though it's a great experience for people with high-end devices like iPhones. Or you go with a guaranteed condition: imagine running something like Windows XP on all the PCs in the world. For everyone, including the people with an M3 processor, it will be blazing fast, right? But it will most probably be a horrible user experience. This one gives you more accessibility but a poor user experience. So neither of these is acceptable. We need to avoid making the user feel they got either a bad experience on the high-end device they bought with, you know, tons of money, or, just as important, an inaccessible site on a weak device. You need to make it accessible for them as well. So build it for everyone and make everyone happy. This is exactly what we need to talk about today: an adaptive implementation for everyone, which gives you the best user experience and, at the same time, the best accessibility for everyone. This is well known as progressive enhancement or graceful degradation; both do more or less the same thing. We don't have one experience that we can ship everywhere. If we ship the glory user experience everywhere, which also happens to be the best developer experience for us, we fall behind for the people with weak devices or a bad internet connection. And if we start from the bottom, an MVP-like baseline, people looking for a great experience won't be so happy. Both approaches do the same thing, but they raise the bar from one experience to a bigger spectrum of experiences, depending on what the user has and their conditions.

4. Factors Beyond Our Control and Responsive Images

Short description:

User-related factors beyond our control include screen sizes, browsers, devices, internet connection, and sitting position. When building reusable components, it is important to make them responsive. Images often contribute to a significant portion of a web page's total size, so it is crucial to optimize their delivery based on the device and container size. This can be achieved through responsive images and the use of source set and browser hints.

So let's jump into the things. Which user-related factors are beyond our control? As I said earlier, there are several: screen sizes, browsers, devices, internet connection, sitting position. Unfortunately, that last one is not covered, but for the other four we're going to get some tips as we go. And every time you see something that you're already applying in your application, give yourself a point, and we will count at the end.

Starting with screen sizes. I'm not going to discuss responsive design here; that has been around for like a decade now. I'm interested today in talking about reusable components. If you are building reusable components, and usually you do, you need to build them to be responsive like water: you don't know where a component is going to be reused, on which screen size, and not even screen size, in which container. Some people would argue not water, just responsive like cats, because cats are liquid. That's an interesting one, by the way; just look it up. And I'm interested not only in responsive components, I'm interested in images, because images are 60 to 70% of a web page's total size. So if you have a big component with some images inside it, again, there is no one-size-fits-all. We need to do something. The regular way is creating an HTML image tag: you have an img tag, you give it a source, and that one source is shipped everywhere. Assume you have three sizes.

So for the big size, you're serving this image at something like 500 kilobytes; it's a decent quality you're serving, and that's okay. However, if you serve it to a tablet, you are shipping bytes where the user might not notice the difference. And it gets even worse on something like mobile: it's a very small device and you're sending it a relatively big image. That's why we have something called responsive images, which have been around for a while. You provide something called `srcset`, giving the browser hints that tell it when to load which size.
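To make the selection concrete, here is a small sketch (with hypothetical file names) of roughly how the browser chooses among width descriptors; the real algorithm also takes the `sizes` attribute and device pixel ratio into account:

```javascript
// Simplified model of srcset selection for markup like:
//   <img src="hero-1920.jpg"
//        srcset="hero-480.jpg 480w, hero-1024.jpg 1024w, hero-1920.jpg 1920w">
// This sketch assumes the image fills the viewport at 1x pixel density.
function pickCandidate(srcset, viewportWidth) {
  const candidates = srcset.split(",").map((entry) => {
    const [url, descriptor] = entry.trim().split(/\s+/);
    return { url, width: parseInt(descriptor, 10) }; // "480w" -> 480
  });
  candidates.sort((a, b) => a.width - b.width);
  // Smallest candidate at least as wide as the viewport, else the largest.
  return (candidates.find((c) => c.width >= viewportWidth) ?? candidates.at(-1)).url;
}
```

So a 390-pixel-wide phone gets the 480w file while a desktop gets the full-size one, which is exactly the bytes-saved behavior described above.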

5. Optimizing Image Components for Different Devices

Short description:

You can optimize image components for different devices using width descriptors and source sets. It is recommended to use the image component provided by your framework, as it is often optimized out of the box. Next.js, Astro, and Gatsby have built-in support for responsive images. If your framework does not provide such a component, you can use libraries like Sharp or ImageMagick, or online services like Cloudinary or Vercel.

Beside each image candidate there is a width. This is called a width descriptor: you tell the browser that at this width or beyond, it can use this image. And there is a `sizes` attribute down there as well.

The good thing about this is that on desktop you are still shipping the 500-kilobyte image, which is okay, but you can provide a better experience for people on a tablet and an even better experience for people on mobile. And the good thing is that this is fully supported; you can use it today.

Why do I bring this to a React conference? Because I would like to give you a high recommendation to use something like the image component provided by the framework you're using. This exists because of a collaboration with a team in Chrome called Aurora, which helps frameworks optimize image components out of the box. For example, with Next.js you only need to provide a source and maybe a width and height, and it will be translated into exactly what you need for optimization. So why wouldn't you do that? It does something similar with pixel density, detecting the display and shipping the right size based on that. The same can be achieved with the image component from Astro or Gatsby. If you're using something that does not provide such a component, you can easily do it yourself. The latest version of Next.js uses a library called Sharp, where it previously used Squoosh. Sharp can be used for dynamic images that are uploaded by users and served everywhere. Alternatively, you can use ImageMagick, or online services like Cloudinary or Vercel.
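If you do roll your own, the `srcset` string itself is easy to produce. A hypothetical helper, assuming you have already generated the resized files with Sharp, ImageMagick, or an image CDN:

```javascript
// Build a srcset attribute value from a base file name and the widths
// you have generated, e.g. buildSrcset("hero", "webp", [480, 1024]).
// The naming scheme "name-width.ext" is an assumption for this sketch.
function buildSrcset(baseName, extension, widths) {
  return widths
    .map((w) => `${baseName}-${w}.${extension} ${w}w`)
    .join(", ");
}
```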

6. Optimizing Image Components and Browser Support

Short description:

You can optimize image components for different devices using the image component from Next.js, Astro, or Gatsby. If your framework does not provide such a component, you can use libraries like Sharp or ImageMagick, or online services like Cloudinary or Vercel.

Not only Next.js. You can have the same thing with the image component from Astro or Gatsby and other places.

What if I'm using something that does not provide such a component? It's easy: do it yourself. Even the latest version of Next.js is now using a library called Sharp; it used to use something called Squoosh. Who's familiar with Squoosh here? Awesome, some people. So Sharp is now being used under the hood even by Next.js. You can do the same, especially if you're building something with dynamic images, where the user uploads an image and you serve it everywhere; you cannot control that one. But you can use Sharp or ImageMagick, or use something online that works out of the box like Cloudinary or Vercel. These also provide CDNs, so you can serve it everywhere.

7. Understanding Browser Support and Engines

Short description:

Let's discuss browser support and why there are so many different browsers. The top five browsers are Chrome, Safari, Edge, Firefox, and Opera. Each browser has its own rendering engine, such as Blink for Chrome, Gecko for Firefox, and WebKit for Safari. Apple only allows WebKit on iPhones. Additionally, the JavaScript engines used by the top three browsers based on Chromium are V8, which is also used by Node.js. Firefox uses Spider Monkey.

Cool, let's move on to another thing that we cannot control: browser support. The latest statistics show that we have around 20 to 25 different browsers in the world that are active right now. But the top five are up here: Chrome, of course, is the biggest, then Safari, Edge, Firefox, and Opera, followed by the other browsers. Based on that, we're going to talk about the top five, for a reason I'm going to share in a bit.

But why do we have all these browsers in the first place? This is something that might come to your mind. And what could cause a feature to be unsupported by a specific browser? That comes down to two important things, which most probably you know: two engines. The first one is called the rendering engine, and the browser provides it to take your HTML and CSS through the flow of rendering the UI to you. Of the top five we have, these three, for example, are Chromium-based, and they use a rendering engine called Blink. Firefox uses something called Gecko, and Safari uses WebKit. Funny enough, if you have an iPhone and you're running any browser on it, that browser is using WebKit. Why? Because Apple doesn't permit any other engine on their phone. There is a case in the EU now to change that, but until then, this explains why, when you are using Chrome on an iPhone and you find something not supported yet, that's because of WebKit, not because of Chrome.

The other thing that makes a difference is the JavaScript engine. Again, the three browsers based on Chromium use something called V8, and this JavaScript engine is also used by Node.js, as I believe everyone knows. And Firefox uses something called SpiderMonkey.

8. Determining Browser Support and Using Future APIs

Short description:

Every browser has its own rendering engine and implements standards at its own pace. To determine if a feature is unsupported, we can use tools like 'Can I use?' and MDN. Google's Baseline provides icons to indicate support by the top five browsers. This enables us to use future APIs, like a one-line API for sharing content. However, Firefox is falling behind in support. To handle this, we can wrap the API with an if-else statement and use a third-party library as a fallback. Using the API offers a performance boost, but Firefox users may experience slower loading times. Another topic to discuss is images.

I believe everyone knows that. Firefox is using something called SpiderMonkey, and Safari is using JavaScriptCore, also known as Nitro. So we have one standard, but every engine implements that standard in its own time and at its own pace.

So how can we determine whether a feature is supported? Super easy question. We used to check something called Can I Use, right? And we check MDN as well; it gives you an idea. Recently Google introduced something called Baseline, and with that you just get one icon meaning that the top five browsers support this feature, so it is okay to use it. Of course, this doesn't guarantee all the browsers in the world, but there's a big chance it will be supported for most of your visitors.

Well, what does this enable us to do? It enables us to use some APIs from the future. Here is a quick, very small, one-line API, the Web Share API (`navigator.share`), that you can use today to share content from your website. Only one button: click on it and it will automatically open the sharing UI on your device, either desktop or mobile. Only one line. Super easy, right? However, it's not fully supported; Firefox is falling behind. So what can I do? Just wrap it in an if/else and say: if it's supported, use it; otherwise, use a third-party library, as we used to do before this API existed.
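The if/else wrapper described above can be sketched like this; the `fallback` callback stands in for whatever third-party share library you would otherwise load, and the navigator object is injectable so the branching can be exercised outside a browser:

```javascript
// Feature-detect the Web Share API (navigator.share) and fall back to a
// third-party share widget where it is unavailable (e.g. Firefox desktop).
async function shareContent(data, nav = globalThis.navigator, fallback = () => "fallback") {
  if (nav && typeof nav.share === "function") {
    await nav.share(data); // native share sheet: zero extra bytes shipped
    return "native";
  }
  // Penalty path: this is where you would load the third-party library.
  return fallback(data);
}
```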

So how can we benefit from that? What's important here is that for the browsers that support this API, you're not going to ship any third-party code. It's a performance boost out of the box: you just use the API already shipped in the user's browser. But for the people who are still using Firefox, sorry, we have to send some more bytes, and that's a penalty you have to pay. Cool. Another thing: images again. Yes.

9. Optimizing Image Formats and Browser Support

Short description:

Images are a crucial aspect of web development. By using the picture element, you can provide different image formats with varying compression techniques. Although JPEG XL is not yet covered by Baseline, it can still be used with a fallback. Browsers check the image sources in order and use the first one that works. WebP and JPEG XL offer significantly reduced file sizes compared to JPEG, resulting in improved performance.

Images are a crucial aspect to consider in web development. One approach is to use JPEG images everywhere, which can work fine. However, there are other options available. By using the picture element, you can provide different image formats with varying compression techniques, such as WebP or JPEG XL. JPEG XL is a new format that offers compression at up to a 50-to-1 ratio, which competes favorably with WebP. Unfortunately, Baseline does not currently include JPEG XL, although it is supported by Safari. Nonetheless, you can still use it with a fallback. Browsers check the first image source and, if it works, use it; if not, the next source is checked. In the example shown, the final fallback is a JPEG image, ensuring no loss of functionality. The other formats mentioned are fully supported per Baseline. Even AVIF was not supported by Edge until recently, but now Edge uses Blink and supports it. From a visual perspective, the difference between the formats may not be noticeable to the naked eye. However, the main advantage of WebP or JPEG XL is their significantly reduced file size compared to JPEG, resulting in improved performance and faster loading times.
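The fallback behavior of the picture element can be modeled in a few lines; `supportedTypes` is injected here to stand in for the browser's own format support, and the file names are hypothetical:

```javascript
// Model of how a browser walks <picture>'s <source> list: take the first
// source whose MIME type is supported, otherwise fall back to <img src>.
function pickSource(sources, fallbackSrc, supportedTypes) {
  const match = sources.find((s) => supportedTypes.includes(s.type));
  return match ? match.src : fallbackSrc;
}
```

Order the sources from best compression to worst, so capable browsers get the smallest file and everyone else still gets the JPEG.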

10. Optimizing Image Formats and Bundler Compatibility

Short description:

I pick it up; if not, skip it. And as I showed you here, we fall back to the JPEG. The other formats are fully supported; Baseline now includes AVIF. The AVIF version of the image is less than half the size of the JPEG. Provide different versions of the image; if you have a lot of images on your website, that will make a difference. If you are using any of these bundlers, like Babel, Autoprefixer, webpack, Next.js, etc., they depend on a Browserslist configuration. You give hints to the bundlers about what to provide and what to polyfill. In the past, we used polyfills and core-js.

I pick it up; if not, skip it and move to the next one. And as I showed you here, we also fall back to the JPEG, so you're not losing anything; you're just providing a better experience, a progressive enhancement, for the users on Safari or anything else that supports it. On the other side, the three other formats are fully supported in Baseline right now. Even AVIF was not supported until early this year; Edge was falling behind. Now Edge uses Blink as well, so AVIF is supported.

I would like to ask you: would you notice anything with your bare eyes between the three of them? I will answer this one to save time: you will not, because AVIF compression is pretty good. What you can notice is that the AVIF file is less than half the size of the JPEG. So it's worth the work: you need to provide different versions of the image instead of only JPEG. Combined, if you have a lot of images on your website, that will make a difference.

Moving on to a quick tip that most people are using. If you are using any of these bundlers, and I believe you do (Babel, Autoprefixer, webpack, Next.js, and others), they depend pretty much on a configuration called Browserslist. Browserslist can be provided in two ways: either in your package.json, or in a separate `.browserslistrc` file. That way you are giving hints, but this time not to the browser; you're giving hints to the bundlers about what to provide and what to polyfill. In the past, we used to have polyfills and core-js. I'm not sure if anyone is familiar with that. Awesome, we have some, you know, old people here.

11. Controlling Bundler and Device Compatibility

Short description:

So the hints here are important for bundlers to only ship necessary features. We don't provide polyfills for supported features. We don't support IE or Opera Mini. Don't ship unsupported features. If used by 0.25% of the population, support it. Bundlers will only ship what's needed. Device memory API provides memory information. Serve lighter versions based on device strength.

So I mean, old in career, nothing else. These hints are very important so the bundlers don't ship anything that is not needed. For example, the second line here is `not dead`, so we make sure we're not shipping any polyfill for a browser that is dead. IE is dead, right? Opera Mini is dead. So we don't support them, and no polyfills are shipped for them. `last 2 versions`: is something unsupported only in versions from three or more releases ago in any browser? Then we don't support it, so we don't send those polyfills either. If a browser is used by more than 0.25% of the population of the internet, which is a lot, by the way, then support it; or you can give a specific version. And that way, out of the box, with only a small configuration, these bundlers pick it up and ship only the things that need to be shipped.
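The queries just described can be expressed in a minimal `.browserslistrc` (a sketch; tune the queries to your own audience):

```
# .browserslistrc — consumed by Babel, Autoprefixer, webpack, etc.
last 2 versions
> 0.25%
not dead
```

The same list can live under a `"browserslist"` key in package.json instead; both forms are equivalent hints to the bundlers.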

Cool, let's move on to another thing we cannot control for the user: the device. We used to have this idea that Chrome is eating your RAM, right? But it's not true, because last year Chrome shipped an update that developers think is a bug, but it's a feature: when you hover over any tab, you get an idea of how much memory it takes. So it's not Chrome; it's you, developers, doing that. We used to blame Chrome; now we have to blame ourselves. So I'm introducing one experimental API that you can use today. It's called `window.navigator.deviceMemory`, and it gives you the approximate number of gigabytes of memory available on the user's device. Based on that, you can get an idea of how strong or weak the device is, and maybe serve a lighter version. As an example, I built this one: it's just a React context that checks only once how strong or weak the device is and puts that on the context. Then, in any component in your tree that does a lot of computation, maybe loading a big image or video, you can make this small check on device strength. Is it weak? All right.

12. Optimizing User Experience and Device Performance

Short description:

Just remove unnecessary elements for a better user experience. Use Chromium-based devices for better performance. Use the hardware concurrency API to detect the number of cores and optimize accordingly. Progressive enhancement or graceful degradation can be implemented based on the device's capabilities.

Then just remove some things that might create a very bad experience for the user; otherwise, serve everything as usual. That's a quick win you can use. This API is only provided in Chromium-based browsers, which is why it costs nothing: people using Chromium-based browsers get a better experience, and for everyone else you just serve the default, the same thing you used to do.
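A minimal sketch of that check, assuming a simple weak/strong split at 4 GB (the threshold is a made-up example, and the API only exists in Chromium-based browsers):

```javascript
// navigator.deviceMemory reports an approximate amount of RAM in GB
// (experimental, Chromium-only). Classify the device so components can
// decide whether to skip heavy work; the value is injectable for testing.
function deviceTier(memoryGB = globalThis.navigator?.deviceMemory) {
  if (memoryGB === undefined) return "unknown"; // non-Chromium: keep the default experience
  return memoryGB <= 4 ? "weak" : "strong";
}
```

The result of one call like this is what you would put on a React context, so every component reads the same answer without re-checking.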

Another one is called `hardwareConcurrency`, and this gives you the number of concurrent logical cores the user's device is running. Based on that, you can again get an idea of how fast or slow the device is, and maybe act on it. This morning there was a great talk by Dara in this same room about concurrency, and I was so happy she mentioned this part: that it may add CPU strain on low-end devices. How can you detect that? With this API.

You can detect whether the device can handle this concurrency and, although it's a lot of work, build two versions of the component and render one of them conditionally, because you cannot, of course, conditionally call hooks. Here is a quick snippet as well: you can detect how many cores there are, and if it's more than four cores, for example, you can load more things: run more web workers, load a 3D model, or something like that. I have an experience from the past: I was visiting some website showing 3D models, and I didn't have an M1 back then, so you can guess what happened. I just felt my laptop was taking off; the fans became too loud, because the site was using a lot of CPU and a lot of memory and doing a lot of things without detecting whether my laptop could handle it or not. You can do that now and give a better experience to people. You may not think about it from the beginning, but it's a progressive enhancement again, or graceful degradation: if you're serving something very heavy, you can degrade it later.
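A sketch of budgeting web workers from the core count; the exact numbers here are hypothetical policy, not part of the API:

```javascript
// navigator.hardwareConcurrency exposes the number of logical cores.
// Scale the number of web workers, or skip heavy features entirely,
// based on what the device can actually handle.
function workerBudget(cores = globalThis.navigator?.hardwareConcurrency ?? 1) {
  if (cores <= 2) return 0;      // low-end device: no extra workers, skip the 3D model
  return Math.min(cores - 1, 4); // leave one core for the main thread, cap at 4
}
```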

13. Optimizing CPU Usage and Internet Connection

Short description:

hardwareConcurrency is fully supported today, so use it before shipping heavy CPU work. Consider internet connection limitations for 2G and 3G users. Progressive enhancement examples: YouTube's automatic quality adjustment and using a Form component with SSR.

The good news is that this is fully supported, so do it today. If you are building something that is very heavy CPU-wise, just do it today, and I think you will be happier.

Last part, and I'll be quick with this one; sorry, it's taking a long time. Internet connection. You can detect, but you cannot predict, which internet connection a user is joining from. There's a forecast saying that by 2030, 5G will take over. However, what I noticed as a performance guy is that people at the bottom, on 2G and 3G combined, will still be around 9% of the internet population in 2030. That's a lot. And we're still in 2024, where people on 2G and 3G combined are more than 25% of the internet population. That's quite a lot, and a heavy site is simply not accessible for them.

So here's an example of how we can progressively enhance for this. On YouTube, by default, if you have a bad internet connection, the quality is dropped automatically. Anyone experienced that? Yes, exactly. Why do they do that? Because it's better for the user to watch a lower-quality video than to wait a long time for a high-quality one. This is exactly what we need to do as well.

Another example: if you are running SSR, you are sending pure HTML first, and then JavaScript rehydrates it. So what happens if you load a form as HTML, maybe just one input and a submit button, and the user fills it in quickly and submits, but the JavaScript hasn't loaded yet? That button will do nothing until the JavaScript loads and can respond. My recommendation is to use something like the Form component that comes out of the box with Remix, and not only Remix; Next.js and other frameworks ship similar components. This is important because the Form component falls back to the native HTML form element. The experience will be a little different: if the user submits while only the HTML has shown up, the browser will do a classic POST and the page will reload. Anyone here with PHP experience? It's only me? Nice.
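That fallback behavior can be sketched without any framework. This is a hedged, framework-free illustration of the idea behind components like Remix's Form, not its actual implementation; `enhanceForm` and `clientSubmit` are hypothetical names.

```javascript
// Hedged sketch: a server-rendered <form method="post"> works before any
// JavaScript loads (classic full-page POST). Once this script runs, the same
// form is upgraded to an in-page submission.
function enhanceForm(form, clientSubmit) {
  form.addEventListener('submit', (event) => {
    event.preventDefault(); // skip the full-page reload
    clientSubmit(form);     // e.g. send the data with fetch() instead
  });
}
```

Until `enhanceForm` runs, the browser's native behavior still submits the form, which is exactly the accessible-on-bad-connections property described above.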

14. JavaScript Form Submission and Effective Type API

Short description:

A no-JavaScript form submission reloads the page but stays accessible for users on bad connections. The experimental effectiveType API reports network strength and lets you offer different experiences based on the connection. Always consider the number of bytes shipped to the user. Chromium-based browsers support effectiveType, while Firefox removed its implementation over privacy concerns. A proposal would respond with saveData instead of effectiveType.

So what happens in this case is that you have no JavaScript: you just submit, the form posts back, and the whole page reloads. Not the best experience, but still accessible for people on a bad connection, because the JavaScript will take some time to load.

Next, there's an experimental API, and it's controversial; I'm going to explain why. It gives you some idea of how strong the network is on the user's device, through something called effectiveType. If you read it in your browser today, it will give you 4g, 3g, or 2g; usually it's 4g, even on Wi-Fi. You can read it and give a different experience: for example, load heavy assets such as high-quality images or videos on 4g, and otherwise serve low-quality images.

Why might this be a big thing? Because if the user already has a bad network, you can limit what you ship: serve a one-megabyte degraded video on 4g, but on 2g or 3g serve only an image, which is maybe 30 kilobytes. Always think about the number of bytes you are going to ship to the user.
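A hedged sketch of that byte-budget decision. `navigator.connection` and `effectiveType` are real but Chromium-only; `pickMediaSource`, the URLs, and the size figures are illustrative assumptions.

```javascript
// Hedged sketch: choose the media payload by effective connection type.
// '4g' (or unknown, i.e. non-Chromium) gets the already-degraded video;
// slower types get a small poster image instead.
function pickMediaSource(effectiveType) {
  if (effectiveType === 'slow-2g' || effectiveType === '2g' || effectiveType === '3g') {
    return { kind: 'image', url: '/poster-30kb.jpg' }; // ~30 kB
  }
  return { kind: 'video', url: '/clip-1mb.mp4' };      // ~1 MB
}

const connType = globalThis.navigator?.connection?.effectiveType; // '4g' | '3g' | ...
console.log(pickMediaSource(connType));
```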

Unfortunately, this is supported only by Chromium-based browsers. And here's the funny thing: Firefox supported it and then removed support. After shipping it, Firefox for mobile hit a big bug caused by enabling it, and then they removed it again, saying it's bad for privacy. It gives you a fingerprint of the user: by watching their connection you can guess where they are. If it's 2G, maybe they're walking; when it becomes 5G, they're somewhere with Wi-Fi, maybe at work, maybe at home; moving back to 3G means they're out walking in the street again. So it carries a footprint that can be bad for privacy. That's why they are working together with the Chrome team on a proposal, which is unofficial and not done yet: instead of responding with effectiveType, with all its problems, respond with saveData.

15. Improving Internet Connection and Summary

Short description:

Instead of effectiveType, the proposal is to respond with saveData. Detecting limited (metered) connections lets you provide different experiences. If you use React, the adaptive hooks published by the Chrome team are worth trying. In summary, we covered screen sizes, browser support, device CPU, and internet connection. Please be safe, and follow me on Twitter for more resources.

So instead of responding with effectiveType, with all its problems, the proposal responds with saveData, plus the notions of a metered connection and sustained speed. It's not out there yet; they have been discussing it for two years, so they are taking their time. The idea behind "metered" is detecting whether you are on a very limited connection, for example a hotel that gives you a metered amount of data, maybe here at the conference, or at the airport where you get a limited allowance as well. They can detect that, and based on it, you can give a different experience.
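The saveData flag is already shipping on `navigator.connection` in Chromium today, so you don't have to wait for the full proposal. A hedged sketch; using it to gate prefetching is an illustrative choice, and `shouldPrefetch` is a hypothetical helper:

```javascript
// Hedged sketch: respect the user's data-saving preference before doing
// optional work like prefetching or autoplaying video.
function shouldPrefetch(saveData) {
  return saveData !== true; // skip optional bytes when Save-Data is on
}

const saveDataOn = globalThis.navigator?.connection?.saveData; // Chromium-only
if (shouldPrefetch(saveDataOn)) {
  console.log('prefetch next-page assets');
} else {
  console.log('skip optional downloads');
}
```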

If you are using React — is that a legitimate question here? Who is not using React? Okay, there are some people; that was a sanity check that I'm in the right place. You can use the adaptive hooks that the Chrome team published a long time ago. Again, they might have changed along with the APIs themselves, but you can use them today, because this is fully supported in Chromium-based browsers. Just try them out. They give you everything we talked about out of the box, as custom hooks that can be useful and clean.
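If you'd rather not take a dependency, the core idea behind those hooks fits in a few lines. This is a hedged, framework-free sketch of the same signals; `getAdaptiveProfile` and its fallback values are assumptions of mine, not the library's API.

```javascript
// Hedged sketch: gather the adaptive signals from this talk in one place,
// with safe defaults wherever an API is missing (e.g. non-Chromium browsers).
function getAdaptiveProfile(nav) {
  return {
    effectiveType: nav?.connection?.effectiveType ?? '4g', // assume fast if unknown
    saveData: nav?.connection?.saveData ?? false,
    cores: nav?.hardwareConcurrency ?? 2,
    deviceMemory: nav?.deviceMemory ?? 4, // GB, Chromium-only
  };
}

console.log(getAdaptiveProfile(globalThis.navigator));
```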

So, as a quick recap: we talked about screen sizes and how we can provide different versions of our images at different sizes. We talked about browser support and how we can use APIs from the future without affecting anything, making them easy to remove and clean up later without shipping any unnecessary bytes. We talked about device CPU, and how we can detect device memory and CPU strain and give a different experience based on them. And finally, internet connection. The only thing we didn't talk about is sitting position, but I'll leave that one for another time. Please be safe. There are quite a lot of resources; you don't have to take a picture of this slide, I'll share it later on my Twitter, so come follow me. That was it. Thank you very much.

16. Detecting Slow Cores and Aurora Image Optimization

Short description:

To check for slow cores on low-end Android phones, you need to find patterns in user experience. Some models have already been flagged as slow by the Chrome team. The Aurora project does not handle WebP, AVIF, or JPEG XL generation. That's all the time we have. Thank you.

One question from anonymous: several low-end Android phones actually have several gigabytes of RAM and several CPU cores, but all the cores are very slow. Any way to check for that? Oh yeah, that's pretty hard. Without an API that gives you this out of the box, it will be trial and error. You have to find a pattern in how users experience your site, maybe tied to a specific device make or model. There are models the Chrome team has already identified as pretty slow; I think some Motorola G-series device is already classified as mid-range. So if a device has good hardware and good memory and is still slow, I think it's something else. You could opt your users into something like the Chrome User Experience Report, or collect your own RUM field data, and with all that data combined you can detect which factor is the culprit.

All right, then one small question, and I just want a yes-or-no answer. The Aurora project that you mentioned does image optimization. Does it only do sizes, or does it also generate different formats like WebP, AVIF, and JPEG XL, or do you need to provide those yourself? Yes or no? No.

Okay. That's all the time we have. Thanks a lot. Thank you.
