Power Fixing React Performance Woes

Next.js and other wrapping React frameworks provide great power in building larger applications. But with great power comes great performance responsibility - and if you don’t pay attention, it’s easy to add multiple seconds of loading penalty on all of your pages. Eek! Let’s walk through a case study of how a few hours of performance debugging improved both load and parse times for the Centered app by several hundred percent each. We’ll learn not just why those performance problems happen, but how to diagnose and fix them. Hooray, performance! ⚡️

Video Summary and Transcription
The video delves into enhancing React performance by implementing lazy loading, which is particularly effective in reducing initial load times by only rendering necessary content initially, such as images and iframes. Lazy loading improves web performance significantly, especially on pages with numerous elements. The talk emphasizes the importance of using tools like Webpack Bundle Analyzer to identify and remove large base64 images and unused code, which can lead to significant performance gains. Tree shaking is another key strategy discussed, as it helps eliminate unused code from JavaScript bundles, improving load times. The video also highlights the benefits of using Knip, a tool that finds and removes dead code, thereby enhancing build times and code readability. Properly configuring ESLint rules can prevent performance issues by enforcing coding standards. The speaker also shares insights on optimizing barrel exports, which, if not handled correctly, can lead to larger bundle sizes. By focusing on these strategies, developers can achieve better performance scores, as demonstrated by improvements in metrics like Largest Contentful Paint and total blocking time.

This talk was presented at React Advanced 2023; check out the latest edition of this React conference.

FAQ

What problem did the large base64-encoded images cause?

Using large base64-encoded images significantly increased the page size and load time, as these images were embedded directly into the JavaScript bundles. Removing these images from the codebase reduced the total page size from 11 megabytes to less than 7.5 megabytes, improving the Largest Contentful Paint metric.

How can an ESLint rule help maintain performance?

An ESLint rule can help maintain code performance by automatically enforcing coding standards and practices that prevent performance issues, such as improperly used imports that could lead to larger bundle sizes. It's a proactive measure to ensure that future code modifications adhere to performance best practices.

What is Knip and how does it help?

Knip is a tool used to detect and remove unused code from a project. By eliminating dead code, it helps reduce build times, improve readability of the source code, and potentially decrease the size of the deployed application, thereby enhancing overall performance.

What is tree shaking and why does it matter?

Tree shaking is a process that removes unused code from the final build of a web application. It helps reduce the size of JavaScript bundles, thus improving load times and overall performance. Properly configuring tree shaking can prevent large, unnecessary files from being loaded.

Do modern frameworks prevent slow pages?

Modern frameworks like SvelteKit, Nuxt, Next.js, Remix, and Astro are designed to make good performance choices by default, which makes it difficult to create slow web pages, although not impossible.

What is lazy loading?

Lazy loading is a strategy where only a necessary subset of content, such as images or iframes, is loaded initially, with more loading as needed. This reduces the initial load time and resources used, significantly improving performance, particularly on pages with many elements like iframes.

Why does web performance matter?

Web performance is crucial for retaining users, as they are more likely to stay on faster sites; for improving conversions, which is beneficial financially; and for enhancing user experience, since users generally dislike slow web pages. It also addresses accessibility by making websites usable for people with limited hardware or bandwidth.

1. Introduction to React Performance#

Short description:

Hello, and welcome to Power Fixing React Performance Woes. Web performance is awesome. Modern frameworks like SvelteKit, Nuxt, Next.js, Remix, and Astro make good choices for performance. I'm going to walk you through a series of five improvements I made to the popular centered.app website. The first improvement is addressing 81 iframe embeds.

Hello, and welcome to Power Fixing React Performance Woes, with me, Josh Goldberg. I'm an open source maintainer. I work in the TypeScript ecosystem and I wrote a book, Learning TypeScript, published by O'Reilly, but we're not here to talk about all that.

We're here to talk about web performance and power fixing things. Web performance is awesome. If you're not convinced, I highly recommend web.dev's "Why Speed Matters." Summarizing its points: speed is important for retaining your users, because they're more likely to stay. It improves conversions, which is good for the money. It's good for your user experience, because people don't like slow web pages, fun fact. And it's an accessibility point, because people with limited hardware and/or bandwidth often can't use, or have trouble using, really bloated, slow web pages. Do not want.

Modern frameworks like SvelteKit and Nuxt and Next.js and Remix and Astro do make a lot of good choices for you. So if you're using something like, say, Next.js, which we'll see later, it's often set up to make good performance the default, which actually makes it harder to write slow web pages. But not impossible. They don't prevent you from introducing performance regressions. Even if you're doing everything right, it's still possible for things to creep in over time. I'm going to walk you through a series of five improvements I made to the popular centered.app website, only some of which actually touch React code.

Now, this is from a perfectly good, respectable team. They did nothing wrong, except they just didn't have the time to focus on performance, which meant then that some performance problems did creep into the app, which I was able to help with. Normally, when I tackle a performance issue, it's in four phases. Identification, seeing what's wrong, ideally with something I can measure. Investigation, looking into what the root cause is. Implementation, ideally of a fix. And confirmation that the fix actually fixed the thing that we wanted it to.

The first of these is a real quick one: 81 iframe embeds. I've seen this very rarely, so it was really cool to run into it here. When you looked at the centered.app/quotes page prior to the fixes, it would take forever. Look at how slow this was. The root cause, as we'll see soon, was that it had a lot of iframes. But the effect, the symptom, was that it took forever and felt slow.

2. Identifying the Issue with Iframes#

Short description:

And I had a clue because I had seen a lot of tweets show up on a page and take a while before. So just looking through the dev tools, we see a recording here of me confirming that, yes, it is what I suspected: there are a lot of iframes on this page. And fun fact about iframes: we can see here that there are quite a few of them.

And I had a clue because I had seen a lot of tweets show up on a page and take a while before. So just looking through the dev tools, we see a recording here of me confirming that, yes, it is what I suspected: there are a lot of iframes on this page. And fun fact about iframes: each iframe is like a page within a page. We can see here that there are quite a few of them, and when you have 81 or 84 of them, that's quite a few pages. When this recording was taken yesterday, there were actually more iframes than when I'd initially done the investigation, a total of 94. So that's quite the slowdown. And they're all showing up at the same time, meaning they're all loading at the same time, which is why the page froze up and took a while to load. Boom.

3. Fixing Iframe Rendering with Lazy Loading#

Short description:

In implementing a fix for rendering multiple iframes, lazy loading was used. Only a subset of the iframes is initially rendered, and more are loaded as the user interacts with the page. Lazy loading improved performance by reducing the initial load time. This approach is not specific to React and can be applied to other web development projects. It's important to optimize apps for performance, even if they were initially well-crafted. Lazy loading is a recommended strategy for rendering large amounts of content, especially when only a portion of it is initially visible to users.

In implementing a fix, I first found where the iframes are rendered, which is this general-use cards component. It's simplified here, but in essence, it loads in card data using a hook, and then for each piece of that data, stored in an array, it maps to this card component, rendered as a child component. And that card component calls React Twitter Embed, which is a perfectly good, popular npm package that embeds a tweet as an iframe.

That's the standard way to use Twitter's external tweet embedding features, especially since they went private-only or paid-only for their APIs. So: large numbers, dozens, almost 100 iframes all rendering at once. The strategy I often take in a situation like this is lazy loading. This is a simplification of the fix we implemented with lazy loading. First, we .slice() the cards so that only cards 0 through, in this case, 6 render at once. Then, every time a card loads, its onLoad handler increments a counter, saying we can additionally render this many more cards. So after the first few cards load, we can keep loading more and more.
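As a rough illustration, here is a minimal sketch of that slicing-plus-counter approach; the component, prop, and type names are hypothetical stand-ins rather than the actual Centered source.

```tsx
import { useState } from 'react';

type CardData = { id: string; tweetId: string };

// Assumed stand-in: renders one tweet iframe and calls onLoad when it finishes.
declare function Card(props: { data: CardData; onLoad: () => void }): JSX.Element;

const INITIAL_VISIBLE_CARDS = 6;

export function QuoteCards({ cards }: { cards: CardData[] }) {
  const [visibleCount, setVisibleCount] = useState(INITIAL_VISIBLE_CARDS);

  return (
    <>
      {/* Only render the first `visibleCount` cards instead of all ~100. */}
      {cards.slice(0, visibleCount).map((card) => (
        <Card
          key={card.id}
          data={card}
          // Each time an embed finishes loading, allow one more card to
          // render, so the iframes trickle in instead of all mounting at once.
          onLoad={() => setVisibleCount((count) => count + 1)}
        />
      ))}
    </>
  );
}
```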

Now, 6 is an arbitrary number, but it worked well here because that's roughly the maximum number of iframes anyone would see upon first loading the page. In theory we could have based it on the page's viewport or some such, but I didn't have the time; I was just doing this for fun. And, just confirming, much faster for the recording. It's still loading the same number of iframes, it's just waiting to load them. It's being lazy in loading all but the first 6. So, yay, that felt nice. And as we'll see in the remaining four investigations, there's not much React-specific stuff here. But it is general good web practice. So, a few takeaways. One, unoptimized apps are, in my experience, the most fun to investigate, because they might be totally well-crafted, they just haven't had the time for those low-hanging fruit, those much more straightforward performance wins. Two, this code was probably totally fine when it was first written. I imagine when the page was first implemented it probably only had 6 or 12 quotes at most: not ideal, but nowhere near 100 iframes. And lastly, lazy loading is awesome, highly recommended as a strategy. If you have a whole bunch of stuff you want to show and only some of it is initially visible to users, maybe wait a second or two to render the rest of it. Let's move on.

Hidden embedded images. This was fun. So, I ran a performance score, the standard DevTools "hey, how's the performance?" check within the general Lighthouse family of checks for a page. And it came back with a score of 36, which is not ideal; it's in the red. Going down the suggested opportunities for improvement, which I'd highly recommend looking into if you ever get a performance score in the red or yellow, the one that first stood out to me was: total size 26,500 KiB, or roughly two dozen megabytes.

4. Analyzing the Bundles and Identifying the Issue#

Short description:

Wow, that's a lot of code being loaded by the page. I used the Webpack Bundle Analyzer to analyze the bundles and chunks of JavaScript in the app. This helped me identify the issue with the Gcal features illustration.js file, which was the biggest part of any chunk by an order of magnitude. It contained base64-encoded images and unused code.

Wow, that's a lot of code, a lot of stuff being sent over the wire. Why was so much being loaded by the page? Well, I popped open this great tool called the Webpack Bundle Analyzer. Because the Centered app is written in Next.js, we could use the really nice, straightforward @next/bundle-analyzer integration, which made it relatively straightforward: require @next/bundle-analyzer and enable it when process.env.ANALYZE is true. In other words, I followed the docs and then ran npm run build with ANALYZE set to true. This created a local production build of the app, analyzing the bundles, the generated chunks of JavaScript.
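For reference, the wiring looks roughly like this: a minimal next.config.js sketch following the @next/bundle-analyzer docs, with your existing Next.js configuration going inside the wrapped object.

```js
// next.config.js
const withBundleAnalyzer = require('@next/bundle-analyzer')({
  // Only generate the analyzer report when explicitly requested.
  enabled: process.env.ANALYZE === 'true',
});

module.exports = withBundleAnalyzer({
  // ...your existing Next.js configuration
});
```

Running `ANALYZE=true npm run build` then produces the report alongside a local production build.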

5. Analyzing the Issue with Gcal Features#

Short description:

And my favorite part of the tool is it comes with a nice visualization. Gcal features illustration.js was the biggest thing on the page, turning the app chunk into a nine megabyte monstrosity. The file contained base64-encoded images, which is not ideal for performance. I deleted the file, resulting in a much improved page chunk and a decrease in Largest Contentful Paint from 17.6 seconds to 13.2 seconds.

And my favorite part of the tool is it comes with a nice visualization. And this visualization showed Gcal features illustration.js being by far the biggest thing on the page, the biggest part of any chunk by an order of magnitude, multiple megabytes. It turned the biggest, most important chunk, the app chunk, into a nine megabyte monstrosity. Huge. I'd never seen anything this big from something checked into the repository. I love it.

This is really cool to me. So I looked at the file and saw that it had a whole bunch of images embedded in it as base64. Now, base64 is a way of encoding an image, or some other piece of data, as a string. And it's totally reasonable to use it for small images. But if you have one that encodes to millions of characters, if it's a multi-megabyte image, base64-encoding it into SVGs inside your React components is not generally a good idea for performance. It might have been a nice, quick way to prototype a feature, but it's not great for production, because it requires the user to download and parse megabytes upon megabytes of JavaScript containing that base64 encoding to run your page. Not great.
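To make the shape of the problem concrete, here is a hypothetical illustration (not the actual Centered file): an image inlined as a base64 data URI inside a component ships every one of those characters as part of the JavaScript bundle.

```tsx
// Illustrative only: a multi-megabyte image inlined as base64 means megabytes
// of extra JavaScript that must be downloaded and parsed before render.
export function FeatureIllustration() {
  return (
    <img
      alt="Calendar feature illustration"
      src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUg..." // millions of characters
    />
  );
}
// Preferable: reference a static asset (e.g. a file in /public, or next/image)
// so the browser can fetch, cache, and optimize it separately from the JS.
```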

Furthermore, this code was unused. Nowhere in the app actually rendered Gcal features illustration. So I just deleted the file. No problems. Rerunning npm run build with ANALYZE=true, we saw a much improved page chunk. Now, I'll go into more improvements on this later, but for now I was pretty pleased with this. Going down from 11 megabytes to less than 7.5 is a pretty good improvement in the total size. Yay. And just confirming, I reran Lighthouse in the dev tools and saw that, well, Largest Contentful Paint improved from 17.6 seconds, give or take across a few runs, to 13.2. But interestingly, the overall performance score didn't actually improve. I believe this is because the overall performance score is a factor of several metrics including LCP, and LCP can only weigh in so much; beyond a certain point, LCP already scores as badly as it can. Later on we will see it improve, I promise. But yeah, still, a roughly four-and-a-half-second improvement in Largest Contentful Paint, which is how long it takes to paint the biggest visual thing on the page, is, I think, a nice user improvement. So: bittersweet.

6. Takeaways from Performance Investigation#

Short description:

My takeaways here were one, it's still really fun to performance investigate unoptimized apps. Two, regularly audit your webpack analyzers and bundle analyzers. Three, some metrics may require multiple fixes before improvement. LCP was improved, but the overall performance score was not and that's okay.

My takeaways here were: one, it's still really fun to performance-investigate unoptimized apps. You get weird, wacky chunks like this. Two, similar to how you might regularly click through all the pages in your site to see if they're working well, even if slowly, regularly audit your bundle analyzer output; maybe there's some humorously large chunk or bundle in there. And three, some metrics will take multiple fixes before there's an improvement. LCP was improved, but the overall performance score was not, and that's okay. As long as users are benefiting, I'm happy.

7. Issue with Barrel Exports and Tree Shaking#

Short description:

Three giant index.js files are a symptom of barrel exports not being tree-shaken. Barrel exports are a common pattern in JavaScript where an index file re-exports multiple files. The theory behind tree shaking is that it removes unused code from dependencies before the build. However, in this case, the unused parts of the barrel were not removed.

But okay, let's take another look at that bundle analyzer output. Three giant index.js files. What's going on there? Now, this is a symptom of barrel exports not being tree-shaken, two terms we should go into. A barrel export is a common pattern in JavaScript where some file, like an index file, re-exports a whole bunch of other files. It's convenient: whoever wants to import those other things can just take them from one place, that one barrel. And in theory, tree shaking, which is the process of removing unused code from your dependencies before they go into the build, should remove the parts of the barrel that are unused. In this case, it doesn't look like they were.
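As a generic illustration (the file and icon names are made up), a barrel looks something like this, and the cost shows up when the bundler can't prove the unused re-exports are safe to drop.

```ts
// icons/index.ts -- the "barrel": one file that re-exports everything.
export { faAbacus } from './faAbacus';
export { faBaby } from './faBaby';
// ...hundreds more re-exports

// elsewhere in the app: convenient, but routed through the barrel.
import { faAbacus, faBaby } from './icons';
// If tree shaking can't eliminate the other re-exports, every module the
// barrel touches ends up in the chunk, even though only two icons are used.
```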

8. Improving Bundle Performance and ESLint Rule#

Short description:

And just to confirm, only 34 imports were found for the @fortawesome/pro-light-svg-icons package. Importing directly from the individual files instead of the barrel export dramatically improved the bundle. The issue was not with Next.js or barrel imports/exports but with the tooling at the time, which has since been patched. An ESLint rule was written to prevent accidental usage of the barrel exports. Performance improved across Largest Contentful Paint (LCP), total blocking time, and speed index. Performance is now in the average area.

And just to confirm this, I ran a search: how many times is this @fortawesome/pro-light-svg-icons package imported from? Only 34 times. Now, I've actually used this package before. It's really nice: a well-constructed collection of SVG icons in different weights. And they're all pretty fine-tuned for performance; none of them are huge. Which is why only 34 imports from it kind of raised my alarm bells that something funky, something fishy, was happening here. 34 is a pretty low number.

So, I tried something out. Instead of importing from the barrel export, because I've seen barrel exports not get tree-shaken before, I tried importing directly from the files that contain the assets. Instead of importing, say, both the abacus and baby icons from the root, the barrel, I imported them from their individual files. And voila! Applying that fix across the 34 pro-light-svg-icons imports dramatically improved the bundle. It reduced my number of giant index.js barrels from three to two. Which meant that that proof of concept, showing what would happen if I no longer used the barrel export, was in fact a significant improvement for the app! Now, I will note here that Next.js 13.1, which was released after I did this investigation, improved barrel import detection quite a bit for tree shaking, and I believe subsequent versions of Next.js did further work to improve the situation. So the issue here is not Next.js anymore. The issue is certainly not barrel imports or barrel exports. The issue is just that the tooling at the time did not support this use case, and it has since been patched. But anyway, I wrote an ESLint rule, because it's a good idea to write ESLint rules, or general pieces of automation, that prevent people from doing things you don't want them to do in the future. Although I'd fixed all these imports now, we wanted to make sure that someone wouldn't accidentally introduce a new usage of the barrel exports. This ESLint rule says that any import declaration whose source value is a @fortawesome ...-svg-icons package root, with nothing after it, gets a context report telling you to use the individual icon path instead. You can see in the blog post on my blog, which I'll link later, that I also wrote a fixer to auto-fix any such imports, which was really useful for applying the change automatically across the whole code base.
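Here's a minimal sketch of such a rule, assuming the package names above; it is not the exact rule from the blog post (and it omits the auto-fixer), but it shows the shape: report any import whose source is the package root rather than an individual icon file.

```js
// eslint-plugin-local/rules/no-font-awesome-barrel-import.js (hypothetical path)
module.exports = {
  meta: {
    type: 'suggestion',
    messages: {
      useIndividualPath:
        "Import from '@fortawesome/<package>-svg-icons/<iconName>' instead of the barrel export.",
    },
    schema: [],
  },
  create(context) {
    return {
      ImportDeclaration(node) {
        // Matches '@fortawesome/pro-light-svg-icons' (the barrel) but not
        // '@fortawesome/pro-light-svg-icons/faAbacus' (an individual icon).
        if (/^@fortawesome\/[\w-]+-svg-icons$/.test(node.source.value)) {
          context.report({ node, messageId: 'useIndividualPath' });
        }
      },
    };
  },
};
```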

Rerunning performance: yep, we finally saw an improvement, from 36 to 51. We improved Largest Contentful Paint (LCP) from 13 to 12, give or take. We also significantly improved total blocking time, which makes me think script parsing was an issue here, and we improved the speed index. So I'm pretty pleased about this. At long last, performance was no longer in the red. It was at least yellow, what they call the average area.

9. Improving Performance and Removing Dead Code#

Short description:

13.2 was pretty, pretty slow. So I'm glad we improved it. Some takeaways: 1. Make sure your tooling supports tree shaking and barrel exports/imports. 2. Proof of concept larger fixes before investing too much time. 3. Automate good practices to save time and avoid manual enforcement. Unused code is detrimental to readability and build times. An awesome tool called Knip helps identify and remove dead code.

13.2 was pretty, pretty slow. So I'm glad we improved it. Some takeaways. One, make sure you're tree-shaking large dependencies. Again, barrel exports and barrel imports are totally fine; it's a very valid pattern in many cases. Just make sure your tooling supports them well. Two, it's a good idea, if you're going to make a larger fix such as writing a custom ESLint rule, to proof-of-concept it first. Make sure you're not spending too much time on something that won't actually yield a lot of benefit. And three, lean on automation. Whenever you can automatically enforce a good practice, do so, that way you don't have to manually enforce it or clean up mistakes or bad uses of it later on.

Cool. Speaking of unused code, this one wasn't so much a performance investigation as a general good practice. Let's say you have a function that's never called. It would be nice to have a tool that tells you: this is dead code, you should delete it. Or let's say a type, an interface, that maybe was previously associated with a function but isn't used anymore. Or maybe you even have a dependency which used to be used and no longer is. It would be nice to have something that tells you: this is dead, please remove it. And unused code is bad. I want that "tell me it's dead" feature because unused code has two major drawbacks. For one, it makes your source files less readable; there's more stuff to parse through when you're trying to understand them. And two, it often causes longer builds. At the very least, unused dependencies take up time in your installs, your npm ci or equivalent. And if you're doing some kind of linting and/or building on your source code, unused files take up time to be linted, built, and so on. All of that comes together to make development slower, to slow your devs down, which is bad because you want your devs working as quickly and efficiently as possible. Fortunately, there's this awesome tool, look at this ridiculous cow they made, called Knip.

10. Using Knip to Find Unused Code#

Short description:

Knip is a tool that finds unused code in your project. It can be installed as a dev dependency and run with default settings. Configurations are available to analyze specific files. Although it may not find much in every project, it is still beneficial for developer enablement and preventing future issues. Ensure your developers are happy and consider adding useful tooling. Known preventative fixes are worthwhile, and Knip can uncover significant amounts of unused code. Remember, 'Knip it before you ship it!'

Knip does what I want: it finds unused code. So, without getting too salesy about it, you can install it as a dev dependency, optionally, and then just run npx knip; it'll run with some defaults and find unused code for you. Now, every project is different, so you can configure it; it's got some nice config settings. For example, this one takes your project's index file as the entry point and then also analyzes all your project source files ending in .ts. But we ran it and didn't really find that much. We did check it in as a CI step, though, because not all performance fixes directly follow investigations with conclusions and Lighthouse scores. Sometimes you're just doing dev enablement, which is a good goal on its own. You want your devs to be great, and future work avoided is still work avoided. So, a few takeaways here. One, make sure your devs are happy. If there's tooling you want to add that would be useful, see if you can find time to do that. Two, known preventative fixes are definitely worthwhile. I have seen Knip find megabytes upon megabytes in other code bases, so I knew this project would likely eventually hit that issue if we didn't add Knip. And three, as the Knip readme says: Knip it before you ship it. Love it. But okay, back to the investigations.
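For reference, a minimal knip.json along those lines might look like this; the globs are illustrative, not Centered's actual configuration.

```json
{
  "entry": ["src/index.ts"],
  "project": ["src/**/*.ts"]
}
```

Running `npx knip` then reports unused files, exports, and dependencies, which is what makes it easy to wire in as a CI step.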

11. Investigating Performance with Emojis#

Short description:

My favorite one of them all, because it involves emojis and open source; this was a fun one. It still takes a few seconds to run, which is a little unusual. A full second, slightly more, is spent in the emoji plugin for Draft.js, the text editor. toShort appears to be building some rather fancy-looking regular expressions. NS.toShort took almost 700 milliseconds. Creating a huge regular expression tends to be slow. We ended up creating a cache: where there was a large amount of work being done, we stored the result in a variable so that the work only needs to be done the first time the function is called. The pause was mostly resolved, and we went up from 51 to 65, almost 15 overall points better. The largest contentful paint dropped from 12 to seven. Total blocking time became green. Speed index improved. We sent this caching improvement to the upstream dependency, the open source library.

My favorite one of them all, because it involves emojis and open source; this was a fun one. Take a look at this recording. In the dev tools, in the performance tab, we reload, trace, and measure. We see that even after the page has loaded its title, meaning scripts have loaded in, it still takes a few seconds to run, which is a little unusual even for local dev servers.

And if we look at the profile that was processed we can see that there are, while the page is running its scripts, a couple of seconds of white blank before we get to the homepage. And if we zoom in on there and just look, we see there's a long task, that's that red striping indicating too much is happening. And within that, a full second, slightly more, is spent in the emoji plugin for Draft.js, the text editor. And much of that is spent repeatedly in this NS.toShort function.

What's going on here? toShort appears to be building some rather fancy-looking regular expressions. Now, in the dev tools, if you click on the blue link where the function name is given, aha! You get taken to where that function is in source code, right in your dev tools. And look at this: it's annotated that NS.toShort took almost 700 milliseconds. That's time spent inside that function. 700 milliseconds just in this one function. That's a performance bottleneck if I've ever seen one.

So what's going on here? Now, I actually had a really fun time investigating this with another open source person, a very nice guy named Marvin H. He's been writing a great series of blog posts called Speeding Up the JavaScript Ecosystem; I've linked to them later on and would highly recommend them. Marvin and I hopped on a Zoom call and looked at this NS.toShort. Here's a simplification of its implementation. In essence, it takes in a string and runs a replaceAll utility on the string with a huge regular expression containing all sorts of emojis. Now, creating a huge regular expression tends to be slow if you're dynamically creating it based on a lot of stuff, which this function was. Again, I'm oversimplifying the investigation (read the blog post if you want more), but what we ended up doing in one or two places was creating a cache: where there was a large amount of work being done, say, creating a huge regular expression, we stored the result in a variable so that the work only needs to be done the first time the function is called. Ooh, great. Made me happy. And just confirming, rerunning performance, the pause was mostly resolved, and wow, look at that: we went up from 51 to 65, almost 15 overall points better. The largest contentful paint dropped from 12 to seven. Total blocking time became green. Speed index improved. This was a happy change for me. So we actually sent this caching improvement upstream to the dependency, the open source library.
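Conceptually, the fix looks like the following sketch; the names and the emoji table are illustrative stand-ins, not the actual Draft.js emoji plugin internals.

```ts
// Illustrative shortname table; the real library has thousands of entries.
const shortnamesByEmoji: Record<string, string> = {
  '🙂': ':slightly_smiling_face:',
  '🚀': ':rocket:',
};

let cachedEmojiPattern: RegExp | undefined;

function getEmojiPattern(): RegExp {
  if (!cachedEmojiPattern) {
    // Building this huge regular expression used to happen on every call;
    // caching it in a module-level variable means it happens only once.
    cachedEmojiPattern = new RegExp(Object.keys(shortnamesByEmoji).join('|'), 'gu');
  }
  return cachedEmojiPattern;
}

export function toShort(text: string): string {
  return text.replace(getEmojiPattern(), (emoji) => shortnamesByEmoji[emoji] ?? emoji);
}
```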

12. Conclusion and Key Takeaways#

Short description:

Got merged a few months later. In the meantime, I used patch-package to apply it locally so that I could get the changes before they were merged into the upstream repo. Performance good. We found iframes and used lazy loading. We deleted unused files and fixed tree shaking with an ESLint rule. Knip prevented unused code. Cached the result of an expensive computation. Resources available online. Thank you very much y'all. Cheers.

It got merged a few months later. In the meantime, I used patch-package to apply the change locally so that I could get it before it was merged into the upstream repo. Marvin and I were very pleased about this.

And that's all I wanted to show with perf investigations. There's a lot more we could dive into. We could dive into React profiling; there's a great set of dev tools for it. We could go into React loops and hooks and all of those things. But this talk is remote and half the length it would take to cover those. So let's recap the stuff we were able to go into.

Performance: good. There are a lot of reasons why users care, and you should care, about performance. We looked at quite a few different investigations. We found a lot of iframes, where lazy loading was the solution. We found hidden embedded images by going into the dev tools to see where the large chunks were, and then just deleting the unused files. We saw tree shaking not working for barrel exports, which was fixed with an ESLint rule and, later, an updated Next.js version. We saw unused code being prevented in the future with Knip. And we saw my favorite one, the emojis, where we cached the result of an expensive computation.

All these resources are available online. web.dev's Why Speed Matters is a great post. Each of these five investigations has its own post on my blog, and Marvin's blog includes Speeding Up the JavaScript Ecosystem, part six; parts one through five are quite entertaining, as are parts seven onward. That's all I've got for you. Thank you very much, y'all. Cheers.

Josh Goldberg
22 min
23 Oct, 2023

