TypeScript Performance: Going Beyond the Surface

Do you ever find yourself pondering how to identify and address performance issues in TypeScript to maximize the effectiveness of your code? If so, join us for a talk on the performance of TypeScript and the techniques you can use to get the most out of your code. We'll delve into various ways to debug performance, explore how to leverage the power of the TypeScript compiler to detect potential performance issues and use the profiling tools available to track down the underlying bottlenecks.

Video Summary and Transcription
The video delves into TypeScript performance, discussing how keeping TypeScript updated is crucial for optimizing compilation times and enhancing performance. It explores how tools like Dependabot and Renovate can automate updates, ensuring you benefit from the latest improvements. The talk also highlights the importance of debugging TypeScript performance issues using the diagnostics flags and the generateTrace flag, allowing developers to identify performance bottlenecks effectively. It emphasizes the impact of complex types on performance, suggesting that simplifying or naming them can lead to faster type checking. Additionally, the incremental compilation flag is mentioned as a way to improve build times by caching previous compilation data. The video is a valuable resource for developers looking to improve TypeScript performance in large codebases, offering insights into various tools and strategies for optimization.

FAQ

The talk on TypeScript performance consists of three main parts: an introduction to performance issues, ways to debug performance problems, and strategies to improve performance.

The TypeScript compilation process involves several steps: scanning the source code, parsing the tokens to create an abstract syntax tree (AST), binding to gather context, checking types and inferring missing types, and finally transforming the AST into JavaScript code.

Keeping TypeScript updated is crucial because newer versions often include performance improvements that can significantly speed up both compilation times and overall development efficiency.

Tools like Dependabot and Renovate can automate the process of keeping TypeScript updated by managing dependencies and applying updates as they become available.

Improving TypeScript performance in large codebases can involve optimizing configurations, reducing complexity of types, and splitting complex types into simpler ones to allow for better caching and quicker compilation times.

Complex types can significantly impact TypeScript's performance by increasing the time and computational power needed for type checking and inference. Simplifying or breaking down complex types can lead to better performance.

The 'generate trace' command in TypeScript generates a trace file that can be analyzed to pinpoint performance issues. This command helps developers understand what parts of their code are causing slowdowns during the compilation process.

The 'incremental' compilation flag in TypeScript enables the compiler to cache information from previous compilations, allowing it to skip unchanged parts of the codebase during subsequent builds, thus improving build times.

To debug performance issues in TypeScript, you can use the TypeScript compiler's diagnostics or extended diagnostics to get detailed information about compiler steps, or use the 'generate trace' command to gain insights into performance bottlenecks.

1. Introduction to TypeScript Performance#

Short description:

Today we're going to talk about TypeScript performance. It will be an overview of the existing tools and how you can use them to help you deal with different performance issues in TypeScript. We will focus on the developer's experience and tooling. Slow compilation time and a lagging editor can be quite annoying and time-consuming. Keep your TypeScript up to date for better performance.

Hey, everyone. Thanks, everyone who's joining. Today we're going to talk about TypeScript performance. So a few words about me before we begin. I work on open source at The Guild, primarily in the GraphQL ecosystem, and I also organize a TypeScript meetup in Poland. You can find me on Twitter as aleksandrasays, or beerose on GitHub. And also feel free to check out my personal website, aleksandra.codes.

So this talk will have three main parts, an introduction to the topic of performance, ways to debug performance, and what to do to improve it. It will be like an overview of the existing tools and how you can use them to help you deal with different performance issues in TypeScript.

So I will start with the introduction. At this conference, we all probably know already what TypeScript is and why we are using it, so I will go straight to the point. So what do I mean by performance today? Usually, when we talk about performance in computing, we talk about the runtime performance, like how fast things are to our users. But today we're going to focus on the developer's experience and on our tooling. And I wanted to talk about it because whenever we are building a feature or fixing a bug in production, we would like to be like this Formula One driver. Our tooling should get us up to speed and it shouldn't slow us down because the better our process, the more value we can deliver to the end users.

And I think that this is an important topic, because slow compilation times and a lagging editor can be quite annoying. And what's also important, they can be quite time-consuming. So we'd like to avoid that. And because the TypeScript team is doing a lot, really a lot, of performance improvements, before I even go further with my presentation, I want the first takeaway from this talk to be: keep your TypeScript up to date. I'm going to show you a quick example. A few months ago, or maybe even half a year ago, I wanted to see whether there were any performance issues in the Hasura console. That was a familiar code base for me, because I used to work there. So I was like, okay, let's check it out. It's a fairly big application. I ran the TypeScript compiler and it took almost 35 seconds. I think we can all agree that this is a lot. And then, before doing any debugging and looking into the types, I upgraded TypeScript. Back then, the latest version was 4.9.5. And you can see that it's about three times faster. That's a really, really huge difference.

2. Understanding TypeScript Compiler and Performance#

Short description:

And you can compare the check time, it went down from 31.5 seconds to less than nine. That was huge. So, firstly, thank you TypeScript team for all the performance improvements. And secondly, remember to keep your TypeScript up to date. We can think logically that if we have bigger code base, if we have more code, then TypeScript will work slower considering the amount of code we have. But that's not always the case. I want to show you one example. This is more or less what we are going to talk about today, those kinds of issues, and how to spot them and maybe what to do to avoid them. But first, before going further, I wanted to go over how a compiler works.

And you can compare the check time, it went down from 31.5 seconds to less than nine. That was huge. So, firstly, thank you TypeScript team for all the performance improvements. And secondly, remember to keep your TypeScript up to date. You can use tools like Dependabot or Renovate to make it even easier. But the thing is, that's not always enough.

We can think logically that if we have a bigger code base, if we have more code, then TypeScript will work slower, in proportion to the amount of code we have. But that's not always the case. And I want to show you one example. So what I have here is a simple page that renders... this is the whole code. It renders four buttons. I have a medium button, a small button, and a big button that is being created from the... this is not the file I wanted to show you. Here's my big button. This is created using the styled function from styled-components. I also have a big button anchor, also created with the styled function; the only difference is that this one is an anchor. So this code renders four buttons, and you can see here that the compilation time was almost 14 seconds. I think we can all agree that that's not ideal. We have only four buttons, imagine if we had eight or something. This is more or less what we are going to talk about today: those kinds of issues, how to spot them, and maybe what to do to avoid them. The demo code is roughly sketched below.
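For context, here is a hypothetical reconstruction of that kind of demo file, not the exact code from the talk: the names and styles are made up, and only the pattern matters (buttons derived from other styled components, which forces a lot of prop inference). This would live in a .tsx file:

```tsx
import * as React from "react";
import styled from "styled-components";

// Plain styled components are comparatively cheap to type.
const SmallButton = styled.button`
  padding: 4px 8px;
`;

const MediumButton = styled.button`
  padding: 8px 16px;
`;

// Wrapping one styled component in another is where TypeScript has to do a lot of
// inference to work out the resulting props.
const BigButton = styled(MediumButton)`
  padding: 16px 32px;
`;

// Same look, but rendered as an anchor instead of a button.
const BigButtonAnchor = styled.a`
  padding: 16px 32px;
  text-decoration: none;
`;

export const Page = () => (
  <>
    <SmallButton>Small</SmallButton>
    <MediumButton>Medium</MediumButton>
    <BigButton>Big</BigButton>
    <BigButtonAnchor href="/">Big link</BigButtonAnchor>
  </>
);
```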

But first, before going further, I wanted to go over how the compiler works, what the steps in the TypeScript compiler are, because I think that this knowledge will help us better understand how it affects performance. It will also allow us to pinpoint which step is responsible for our performance issues, and in the long term it will make us better TypeScript programmers. So the first thing, the most important part, is the program. It's an object that has all the compilation context. The two things that are needed are, obviously, some TypeScript files and a tsconfig that describes how the compiler should behave. Then we have the scanner step, which scans the source code, character by character, and converts it into a list of tokens. If there's an invalid token, it will throw an error. So here's an example for this simple one line of code.
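As a small sketch of that step (not from the talk), the compiler API exposes the scanner directly, so you can print the token list it produces for a one-liner like the example:

```ts
import * as ts from "typescript";

const source = `const answer: number = 42;`;

// skipTrivia = false keeps whitespace tokens, matching the "it also recognises
// whitespaces" part of the example.
const scanner = ts.createScanner(
  ts.ScriptTarget.Latest,
  /* skipTrivia */ false,
  ts.LanguageVariant.Standard,
  source
);

let kind = scanner.scan();
while (kind !== ts.SyntaxKind.EndOfFileToken) {
  console.log(ts.SyntaxKind[kind], JSON.stringify(scanner.getTokenText()));
  kind = scanner.scan();
}
// ConstKeyword, WhitespaceTrivia, Identifier ("answer"), ColonToken, ...
// EqualsToken, NumericLiteral ("42"), SemicolonToken
```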

3. Understanding TypeScript Parser and AST#

Short description:

We have the const keyword, it also recognises whitespaces, we have identifiers, a colon token, and so on. The next step is the parser, which constructs an abstract syntax tree by analyzing the tokens. This tree brings context to the scanner's output and allows for the identification of constructs like variable declarations. Tools like TypeScript AST Viewer and AST Explorer can help visualize the code as an abstract syntax tree.

We have the const keyword, it also recognises whitespaces, we have identifiers, a colon token, and so on. Then the next step is the parser. It takes this whole list of tokens and constructs an abstract syntax tree. So you can also say that it brings context to the scanner's output. For example, you can see the AST here. When it goes over the tokens and it sees that there's a const keyword and later on a colon token or an equals token, it knows that this will be a variable declaration, so it can construct this variable declaration node. You can use tools like TypeScript AST Viewer, or there's also AST Explorer, to visualise your code as an abstract syntax tree.
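A minimal, text-only version of what those viewers do, again just a sketch using the compiler API rather than anything from the talk:

```ts
import * as ts from "typescript";

const sourceFile = ts.createSourceFile(
  "example.ts",
  `const answer: number = 42;`,
  ts.ScriptTarget.Latest,
  /* setParentNodes */ true
);

// Walk the tree and print each node's kind, indented by depth.
const printTree = (node: ts.Node, depth = 0) => {
  console.log(`${"  ".repeat(depth)}${ts.SyntaxKind[node.kind]}`);
  ts.forEachChild(node, (child) => printTree(child, depth + 1));
};

printTree(sourceFile);
// SourceFile > variable statement > VariableDeclarationList > VariableDeclaration
// > Identifier, NumberKeyword, NumericLiteral
```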

4. Understanding the Binder Step#

Short description:

The next step is Binder, which gathers information about the context, including metadata for each node of the tree, scopes, and parent nodes. This information is used by the checker to determine the type of a node. It's an important and expensive step in the compilation process.

OK, so the next step is the binder, and this is a very important step. It gathers information about the whole context. It's quite expensive: it's a single pass over the entire AST, and it picks up information that will be used in the later steps. So here's an example. One thing it does is store metadata for each node of the tree. It also keeps track of the scopes, so when we are inside a function, what variables can I use, what's the function scope. Another thing is that it sets up parent nodes on each node, so that later, when the checker has to figure out the type of a particular node, it can easily go up the tree. I will explain that in a second.

5. Understanding the Checker Step#

Short description:

The checker step in the TypeScript compiler is responsible for checking if types are assignable to each other and performing type inference. It fills gaps in type information by traversing the abstract syntax tree and using parent information to determine the type of a node. This step is crucial for ensuring type safety in TypeScript code.

Okay, so now we have the checker step, and that includes most of the diagnostics. It has two main responsibilities. One is that it checks whether types are assignable to each other. And the other one is that it does type inference. So if there are any gaps, if there's no explicit typing and we have a node whose type we don't yet know, it's the checker's responsibility to figure it out and fill those gaps. And this is why the information about parents is important. Let's say the checker is on one node and there's no type information there. Maybe it needs to go up the tree, because maybe somewhere higher there was an explicit type declaration, or maybe somewhere higher it already figured out the type, and that lookup needs to be fast. And then, once it figures out that, for example, there was an explicit type declaration for the function and we are somewhere inside that function, it can pick up the type and go back down the tree to fill the gap.
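You can poke at this step yourself through the compiler API. A minimal sketch (the file path is just a placeholder) that asks the checker what type it settled on for each top-level variable:

```ts
import * as ts from "typescript";

// "src/index.ts" is a placeholder path for this sketch; the options are arbitrary.
const program = ts.createProgram(["src/index.ts"], { strict: true, noEmit: true });
const checker = program.getTypeChecker();

const sourceFile = program.getSourceFile("src/index.ts");
if (sourceFile) {
  ts.forEachChild(sourceFile, (node) => {
    // For every top-level variable, ask the checker what type it inferred or checked.
    if (ts.isVariableStatement(node)) {
      for (const decl of node.declarationList.declarations) {
        const type = checker.getTypeAtLocation(decl.name);
        console.log(decl.name.getText(), "->", checker.typeToString(type));
      }
    }
  });
}
```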

6. Understanding Transformers Step#

Short description:

Transformers strip information about types for JavaScript output and leave only the types for declaration files. TypeScript Playground offers scanner, AST, and transformer plugins to visualize these steps.

Okay, and then we have the transformers. To put it shortly, this step takes the AST and, if we want the JavaScript code, it strips all the information about the types, and if we want the declaration files, it strips all the things related to JavaScript and leaves only the types. And finally we are getting the files that we requested. That is, if we requested any, because you can also run tsc with the noEmit flag to only do the type-checking bit. So this is more or less what happens. In reality it's not that straightforward, there can be a lot of back and forth between the steps, but I just wanted to give you an overview of the important bits. You can also try it out in TypeScript Playground. There's a scanner plugin, an AST viewer, and transformer plugins, so you can visualize some of those steps.
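A quick way to see that type-stripping step in isolation is ts.transpileModule, which runs the emit pipeline on a string without type checking. A minimal sketch:

```ts
import * as ts from "typescript";

const source = `
export const greet = (name: string): string => \`Hello, \${name}\`;
`;

// transpileModule runs scanning, parsing, and the transform/emit steps on a single
// string, without type checking, so it simply strips the annotations.
const { outputText } = ts.transpileModule(source, {
  compilerOptions: { target: ts.ScriptTarget.ES2020, module: ts.ModuleKind.ESNext },
});

console.log(outputText);
// export const greet = (name) => `Hello, ${name}`;
```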

7. Optimizing TypeScript Compilation and Debugging#

Short description:

With a larger codebase and more complex code, TypeScript compilation can be slow. To improve performance, it's important to debug and understand what needs improvement. Running diagnostics or extended diagnostics can provide basic information about the codebase. The diagnostics output includes parse time, bind time, check time, and emit time, which can help identify areas for optimization. Additionally, tools like the TSC Diagnostics Action and the webtreemap CLI can automate and visualize the diagnostics process. Checking the number of files and lines in the output can help determine whether the TypeScript configuration is correct. The showConfig flag can be used to view the final configuration. For a high check time, the generateTrace flag and a demo project can be helpful.

Okay, so we saw what's going on, and obviously with a larger codebase and more complex code that can be slow. All that traversing through the AST and all the tokenizing of the code can take a lot of time. And what's the easiest way to be faster? You can think about this question not only in terms of the TypeScript compiler: in general, if you have a long to-do list and you want to be done for the day and enjoy your evening, what can you do to finish faster? Well, the answer could be: do less stuff. The thing with the TypeScript compiler is that we can't eliminate any of those steps, but we can make it do less work, and we're going to see how later.

So the first part is debugging, because without knowing what we should improve, we can't really improve anything. So the question is: my build takes forever, my IDE is lagging, what can I do now? The first thing that I usually do is run diagnostics or extended diagnostics, and this will give us some basic information about what's going on in the code base. This is an output from the diagnostics flag. You can see how it maps to the different compiler steps: we have the parse time, bind time, check time, and emit time. And if you want something that makes it easier, something that automates it so you don't have to run it on your code base every now and then, I created a very, very minimalistic GitHub Action. It's called TSC Diagnostics Action, and basically, for every PR, it gives you this diagnostics comparison. You can configure it to have different outputs.

But going back to the output of the diagnostics flag. The first question you can ask yourself is: do these numbers, the number of files, lines of TypeScript, JavaScript, JSON, and so on, roughly correspond to the number of files in your project? Because if not, then maybe TypeScript is picking up too many files, and maybe that means that your tsconfig is not correct. One thing that you can do is run tsc with the listFiles flag and see what exactly are all the files that TypeScript is picking up. The output from this flag is not very user-friendly and not very easy to read, but you can use the webtreemap CLI tool to visualize it. It will open an HTML page in your browser with the important things visualized, so you can see which files are significantly huge and so on. It will help you spot that maybe you are compiling something that you don't really want to. A small compiler-API equivalent of that check is sketched below.

Okay, another thing is a high program time or I/O read/write time. That can also indicate that the configuration is not correct. And I wanted to show you another flag: there is showConfig. It's especially useful when you use extends in your config and you extend one config after another, because sometimes it's quite difficult to know what the final configuration actually is. The showConfig flag can help you with that, because it will print the actual configuration that TypeScript is using for your project.

Okay, and now the most important part, and I think the most interesting part: a high check time. For that, I'm usually using the generateTrace flag, and I'm going to show you a quick demo. I have one project here. I just picked a random project from the GitHub Explore page, basically. I just wanted to play with different projects.
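The listFiles check mentioned above can also be done programmatically. Here's a sketch, under the assumption that a tsconfig.json sits at or above the current working directory: it loads the config and prints every file the program resolves, the same list that tsc --listFiles reports.

```ts
import * as ts from "typescript";

// Find and parse the nearest tsconfig.json, then build a program from it.
const configPath = ts.findConfigFile(process.cwd(), ts.sys.fileExists, "tsconfig.json");
if (!configPath) throw new Error("tsconfig.json not found");

const configFile = ts.readConfigFile(configPath, ts.sys.readFile);
const parsed = ts.parseJsonConfigFileContent(configFile.config, ts.sys, process.cwd());

// Every file the program actually pulls in, including lib.*.d.ts files and anything
// reachable through node_modules.
const program = ts.createProgram(parsed.fileNames, parsed.options);
for (const file of program.getSourceFiles()) {
  console.log(file.fileName);
}
console.log(`Total: ${program.getSourceFiles().length} files`);
```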

8. Analyzing TypeScript Build Performance#

Short description:

I run the generate trace command using yarn tsc --generateTrace. The trace.json file can be loaded in the browser, allowing you to analyze the performance. By examining the trace file, you can identify problematic files and pinpoint slow type checking. In this example, the trpc.ts file and one function in it seem to be causing the slowdown. TypeScript suggests adding an explicit type annotation to resolve the issue. This is valuable when working with a large codebase or when trying to identify changes that may have caused a slowdown.

And I run the generate trace command. So basically, to do this, you run yarn tsc --generateTrace, and you also have to provide an output directory. I already did this before, so as not to spend time on that, I have it here. It generates a trace.json file and a types.json file.

Now, what can I do with this trace.json? Because, as you can probably see, it's not very user-friendly to read. I'm loading it in the browser. So I have it here. If you're using Chrome, you can open chrome://tracing, or you can open the developer tools, go to the Performance tab, and load it there. There is a button, you click it, and then you pick the trace file that you just generated. And you're going to see something like this.

So now, when I open this file, I can see that maybe there's something going wrong. In this example, one thing I notice is that around here there seems to be a lot going on. And if I click on this checkSourceFile box, I can also learn which file is being problematic. That already gives me some information: I know where to look for my issues. And then if I go down to these checkExpression boxes, you can see that the metadata here is slightly different. I also have the path, but I also have the position and end. The checkSourceFile box only had the path, so basically you can pinpoint where the type checking is slow. So I can go even lower. I see now that I'm in the trpc.ts file, inside one particular function, and I guess this is something I can look into. So if I open this file... I will copy it, go back to the project. I pasted the error in case it doesn't work during this presentation, but I can already see that TypeScript tells me that the inferred type of this node exceeds the maximum length the compiler will serialize, and that an explicit type annotation is needed. So, okay, I know where the problem is, what's causing my TypeScript build time to be slower, and I know that this is the place that I probably want to refactor. This is quite useful, because imagine you're working on a large code base, or maybe you went on vacation and other people were working on the code base. You come back and you notice that the build time is much longer. How do you proceed? Where do you look? Do you go over all the PRs, all the commits that people added, to see what was possibly changed that caused the issue? Well, that's one way.
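If you prefer to skim the trace outside the browser, the output is in the Chrome Trace Event format, so a small script can pull out the slowest checkSourceFile events. A sketch, assuming the output directory was called "trace" and that the file parses as a single JSON array:

```ts
import { readFileSync } from "node:fs";

// Each event has a name, a duration in microseconds, and args such as the checked
// file's path. The path "trace/trace.json" is an assumption; point it at your own
// --generateTrace output directory.
type TraceEvent = {
  name: string;
  dur?: number;
  args?: { path?: string; pos?: number; end?: number };
};

const events: TraceEvent[] = JSON.parse(readFileSync("trace/trace.json", "utf8"));

const slowest = events
  .filter((e) => e.name === "checkSourceFile" && typeof e.dur === "number")
  .sort((a, b) => (b.dur ?? 0) - (a.dur ?? 0))
  .slice(0, 10);

for (const e of slowest) {
  console.log(`${((e.dur ?? 0) / 1000).toFixed(1)} ms  ${e.args?.path ?? "unknown"}`);
}
```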

9. Analyzing Type Comparison with Generate Trace#

Short description:

Using the generate trace command allows for faster identification of problematic files and slow type checking. By copying the ID and navigating to the code base, you can locate the type being compared. The source and target types, along with additional information such as type arguments, can be found in the types.json file.

But using this generate trace will get you there much, much faster. And I also wanted to show you another thing. Here is a second place that looks like it takes quite long, this particular file. It's this handle-children-event-types test file. So here we have other boxes. We have this checkDeferredNode with this kind of metadata, we have checkVariableDeclaration and checkExpression, and we also have something called structuredTypeRelatedTo, and here we have a source ID and a target ID. Those are the IDs of the types that are being compared.

And now, what to do with that? Well, there's no type information in the trace itself, so you don't really know what the type is. But we can copy this ID and go to the code base, and go to this types.json file that we didn't look at before. Let me make more space. There's a go-to-line command in VS Code, and you can go to this particular line. We have this ID, and now we know what the type that is being compared is. We know that the first declaration is in the jest-mock-extended package, and we also know the start and the end. And sometimes there's also additional information. So we know that this is our source that is being compared, and the target, I will go to line again, and the target is PrismaClient. We also know what the type arguments are, this is a generic type. We have this first declaration, it points to node_modules. And sometimes we also have display information. I'm not sure if I can show it, but for example, here we have this display, it shows you exactly the code in this place.

10. Working with Trace JSON and Types JSON#

Short description:

And now, if we go to this file that we are now debugging, we can see that there is in fact something like a Prisma mock. This is more or less how you work with the trace.json and the types.json file that was generated. You can also use a tool called analyze-trace to get information about hotspots in your code. We're also working on an extension, working name TS-Perf, to make the experience easier.

And now, if we go to this file that we are now debugging, we can see that there is in fact something like a Prisma mock, and this is where this jest-mock-extended is being used. So this is more or less how you work with the trace.json and the types.json file that was generated; a small lookup script along these lines is sketched below.
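For example, something like this can resolve the source and target IDs from a structuredTypeRelatedTo event. This is a sketch only: the exact field names in types.json vary between TypeScript versions, and the IDs here are placeholders you would copy from the trace viewer.

```ts
import { readFileSync } from "node:fs";

// types.json is an array of type descriptions keyed by a numeric id. Field names
// below are assumptions based on what the trace viewer shows.
type TypeEntry = {
  id: number;
  symbolName?: string;
  display?: string;
  firstDeclaration?: { path?: string };
};

const types: TypeEntry[] = JSON.parse(readFileSync("trace/types.json", "utf8"));
const byId = new Map(types.map((t) => [t.id, t]));

const describe = (id: number): string => {
  const t = byId.get(id);
  if (!t) return `unknown type id ${id}`;
  const name = t.symbolName ?? t.display ?? "(anonymous)";
  return `${name} (declared in ${t.firstDeclaration?.path ?? "unknown file"})`;
};

// Placeholder ids, copied from a structuredTypeRelatedTo event in the trace viewer.
console.log("source:", describe(12345));
console.log("target:", describe(67890));
```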

Okay. Now let's go back to the slides. This one. Okay. You can also use a tool called analyze-trace: you point it at the output directory where the trace.json and types.json were generated, and it will give you information about hotspots in your code out of the box.

Now, there's also something that Daniel, who is also speaking at this conference, and I are thinking about and slowly working on. We wanted to make this experience a bit easier, so we're thinking about something called, and this is a working name, TS-Perf, and for now this is going to be a VS Code extension. I made a recording in case something goes wrong, because this is a work in progress. So I'm going to show you: basically, this extension generates the trace for you, and you run one command to display it. So it saves you some back and forth between the browser and your project, and we also have some other ideas, so stay tuned.

11. Improving TypeScript Performance#

Short description:

There are a few ways to encounter problems with TypeScript performance. Firstly, if it doesn't work as intended, check your configuration settings. The TypeScript team has created a performance wiki with useful tips, such as naming complex types and extracting them to separate type aliases. Additionally, simplifying types can improve performance, although it may require code refactoring.

Okay, now back to the slides and the improving part. So there are a few ways in which you can have problems, basically. The first one is that it doesn't work the way it's intended to. That probably means that you have to check your configuration, maybe your exclude, include, or paths are not configured correctly. You can reference the tsconfig reference page, where you will learn about all the configuration options and how to set them to what you actually want. And now let's say we have that covered: it does work the way it's intended to, but it's still doing too much work. Now we have a few improvements that we can make. The TypeScript team created this performance wiki with a lot of very, very useful tips; I'm going to show you just a few, with some examples.

So here I have two examples, and I think I only have time to show you one. This is a change from Zod, the runtime validation library. Basically, what happened here, you can see that this ZodFormattedError, I'll make it slightly bigger, okay, you can see that the ZodFormattedError type was quite complex. There was a lot going on: we have conditional types, and we can guess that the checker probably has a lot of work to do there. So what the author of the PR did was extract the complex part, the one with the conditional types, into a separate type alias and use it inside ZodFormattedError. And in the PR description, we can see what the improvement was: the check time went from 31 to 20 seconds. I would say that that's a lot. And why did that happen? It's because TypeScript can now cache this type alias, so it doesn't have to recalculate it every time ZodFormattedError is used. That's what caused the improvement. So this is the first tip: name complex types, extract them to separate type aliases. A minimal sketch of the pattern follows at the end of this section.

Another thing is to make your types simpler. I showed you this demo with styled-components, right? The reason it's taking that much time is that I'm using complex higher-order functions, memo from React and styled from styled-components, so TypeScript has to do a lot of inference to figure out the props. In that case there's no easy fix, no one-line fix for the typings; what I needed to do here was actually refactor my code. And I have it here.
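Before moving on, here is a minimal sketch of that extract-to-an-alias tip. It is not the actual Zod diff, just the shape of the change, with made-up type names:

```ts
// Before: the recursive, conditional part is written inline, so the checker expands
// the whole expression wherever the type is used.
type FormattedErrorBefore<T> = { _errors: string[] } & (T extends [unknown, ...unknown[]]
  ? { [K in keyof T]?: FormattedErrorBefore<T[K]> }
  : T extends object
  ? { [K in keyof T]?: FormattedErrorBefore<T[K]> }
  : {});

// After: the conditional part gets its own name. Instantiations of a named alias can
// be cached and reused, so the checker repeats the expensive expansion far less often.
type RecursiveFormattedError<T> = T extends [unknown, ...unknown[]]
  ? { [K in keyof T]?: FormattedErrorAfter<T[K]> }
  : T extends object
  ? { [K in keyof T]?: FormattedErrorAfter<T[K]> }
  : {};

type FormattedErrorAfter<T> = { _errors: string[] } & RecursiveFormattedError<T>;
```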

12. Optimizing TypeScript with Explicit Typing#

Short description:

I created new components without using this type, I provided the props, I typed them explicitly. Running extended diagnostics showed a significant improvement in performance. Simplicity can lead to better performance. In another example, I found that explicitly providing a generic parameter in a GraphQL code generator helped improve TypeScript's inference. Debugging a slow file revealed that inferring the first parameter caused the slowdown. Providing the generic parameter explicitly eliminated hotspots in the codebase.

So I just basically created new components without using this type: I provided the props and typed them explicitly. I have it here, I have those types, and here my basic button looks like this.

So now, if I run the extended diagnostics to see the difference, I think it will be pnpm here. Good, yes. You can see that this is much faster, it's only two seconds. Okay, so that was another example, and the takeaway from here is not to show off: sometimes simpler means better, and simpler can also mean more performant. A sketch of that explicitly typed approach is below.
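Here is a hypothetical version of such a refactor, again not the exact code from the demo. The point is that the props are written out by hand instead of being inferred through styled-components' higher-order types:

```tsx
import * as React from "react";

type ButtonSize = "small" | "medium" | "big";

// Explicit props: the checker reads this directly instead of inferring it from a
// chain of styled() wrappers.
type ButtonProps = React.ButtonHTMLAttributes<HTMLButtonElement> & {
  size?: ButtonSize;
};

const paddings: Record<ButtonSize, string> = {
  small: "4px 8px",
  medium: "8px 16px",
  big: "16px 32px",
};

export const Button = ({ size = "medium", style, ...rest }: ButtonProps) => (
  <button style={{ padding: paddings[size], ...style }} {...rest} />
);
```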

And another quite interesting example, something I found in one of the projects I was working on, GraphQL Code Generator: you can sometimes help TypeScript. But, and I added this, only if you really need to. You can rely on TypeScript's inference most of the time, but I wanted to share this particular example because I found it interesting.

So I started by showing you the trace. Yeah, here we go. This is the generated trace from this project, and you can see that this file, this babel.ts, is taking a lot of time. So I debugged it. It turned out to be around this declare function, and particularly about inferring its first parameter, the options here. And what I did was provide this generic parameter explicitly: I typed the client Babel preset options as what we actually use. And that helped quite a lot. I will show you the trace after the change, so you can see that there are no more hotspots in the code base. And why did that happen? You can see that the second generic parameter has a default type, this Babel plugin object. But because the first one doesn't, TypeScript always needed to run the inference for it. A small sketch of this pattern follows below.
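This is not the actual GraphQL Code Generator code, just a minimal sketch of the general idea: a generic where only the second parameter has a default, so passing the first type argument explicitly lets the default kick in and spares the checker the inference work. All names here are hypothetical.

```ts
// Hypothetical types; only the shape of the problem matters here.
type BabelPresetOptions = { artifactDirectory?: string; gqlTagName?: string };

declare function declarePreset<Options, Plugin = { name: string }>(
  setup: (options: Options) => Plugin[]
): Plugin[];

// Relying on inference: TypeScript has to infer Options from the callback before it
// can settle the result type.
const inferred = declarePreset((options: BabelPresetOptions) => []);

// Providing the first type argument explicitly: the second one takes its default, and
// the checker skips the inference step entirely.
const explicit = declarePreset<BabelPresetOptions>((options) => []);
```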

13. Enhancing TypeScript Performance: Practical Tips#

Short description:

TypeScript performance can be improved by being reasonable and not overdoing it. Avoid declaring a large number of elements in a union type, as it can significantly slow down compilation. If improvements and configuration fixes have been made but the performance is still slow, the incremental flag can be used to cache compilation information. When opening an issue, ensure you're using the latest TypeScript version and include the extended diagnostics output and the trace from the generateTrace usage. Debugging performance issues with the generateTrace output can help identify areas for improvement. For more information, visit aleksandra.codes and check the TypeScript Congress entry.

So, because I provided the first one, I basically did something here, and TypeScript is able to default to this one for the second parameter. That was another example, and the difference was that the check time went from one and a half seconds to 0.88. And I think the last example I wanted to show you is to be reasonable. TypeScript has a lot of really, really amazing features. One of them is, I will slowly close this, template literal types, and you can do a lot with them. But sometimes maybe you want to ask yourself how much you want to do, because this is an example from one of the issues reported in TypeScript. Basically, we have this type, FullDateString, and the year, the month, and the day are all being declared with template literals. And you can see, if I hover it, that there are more than 74 thousand elements in this union. That's quite a lot. So even though I have really short code here, it's 30 lines of code, 29 if I count correctly, this takes, what is that, almost two minutes to compile. That's a lot. And the check time is slightly less than the total time, so it's also almost two minutes. The takeaway here is to be reasonable: don't overdo it and don't skateboard on a rake. A sketch of that kind of date type is below.
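Here is a sketch of that kind of type, reconstructed from the numbers in the talk rather than copied from the actual issue. Every template position multiplies the union, so this one lands at 200 × 12 × 31 = 74,400 members:

```ts
type Digit = "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9";

type Year = `19${Digit}${Digit}` | `20${Digit}${Digit}`;     // 200 members
type Month = `0${Exclude<Digit, "0">}` | "10" | "11" | "12"; // 12 members
type Day =
  | `0${Exclude<Digit, "0">}`
  | `1${Digit}`
  | `2${Digit}`
  | "30"
  | "31";                                                    // 31 members

// The cross product of the three unions: 74,400 string literals for the checker to
// build, compare against, and display on hover.
type FullDateString = `${Year}-${Month}-${Day}`;

// A much cheaper alternative: keep the type loose and validate the value at runtime.
type DateString = `${number}-${number}-${number}`;
```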

So now let's say that we did some improvements, and we also fixed the config if there were any issues. So it works the way it's intended to, it's doing the minimum required work, but it's still slow. Now what you can do is use the incremental flag. You can set it in the compiler options, and that will let TypeScript cache some information about the compilation. Whenever you recompile the project, TypeScript will calculate the least possible effort to do so, so it won't recompile files that weren't changed, for example. And if it's still bad, you can open a new issue. Some tips for doing so: firstly, make sure that you're using the latest TypeScript, maybe your issues are already solved. You should also include the extended diagnostics output, the trace from the generateTrace usage, and a minimal reproduction or maybe a link to the repository, if that's possible. So, to summarize, we saw some tools and we saw how to debug performance issues with the generateTrace output. I hope that it will be useful to you, and I really, really hope that you won't have to deal with any performance issues. But if you do, here you have an overview of the steps that you can take to improve the code base. Okay, so that's all from me. You can find all of this on my website: if you go to aleksandra.codes and to my speaking page, and you find the TypeScript Congress entry, you'll find the notes from this presentation.

Aleksandra Sikora
34 min
21 Sep, 2023

