High-Speed Web Applications: Beyond the Basics


Knowing how to run performance tests on your web application properly is one thing; putting those metrics to good use is another. Both aspects are crucial to the overall success of your performance optimization efforts, but they can be quite an endeavor, because they require a precise understanding of the ins and outs of both performance data and performance tooling. This talk sheds light on how to overcome this challenge and walks you through the pitfalls and tricks of the trade of Chrome DevTools, providing you with a complete roadmap for performance analysis and optimization.

This talk was presented at JSNation 2022. Check out the latest edition of this JavaScript conference.

FAQ

The speaker is Michael Hladky, who specializes in performance optimization, Angular, and reactive programming.

The talk focuses on the latest and greatest features in Chrome DevTools for optimizing web application performance.

Michael Hladky runs a company named Push-Based.

The 'Network tab' in Chrome DevTools provides information about all the HTTP requests made by your application, including connection time, data size, and request timings.

'Fetch Priority' is a browser feature, visible in Chrome DevTools, that allows developers to prioritize certain HTTP requests, for example to optimize the largest contentful paint of an image or other resources.

The 'preconnect' resource hint can improve network performance by setting up connections to API endpoints at the start of the application, thereby saving connection time for future requests.

The 'Performance tab' in Chrome DevTools is used to analyze the performance of web applications, including identifying long tasks and frame drops that affect user interactions.

'Content visibility' is a cutting-edge CSS feature that optimizes rendering by only painting visible content, thus reducing layout and paint times.

'User Flows' is a new tool in Chrome DevTools, currently only available in Chrome Canary, that allows developers to measure runtime performance in the browser through different measurement modes: navigation, time span, and snapshot.

The 'preload' and 'prefetch' resource hints can be used to load resources early: preload for resources needed later during the current page load, prefetch for resources needed after a navigation. Both improve overall page performance by fetching important resources earlier.

Michael Hladky
30 min
20 Jun, 2022

Video Summary and Transcription
This talk covers the latest features in Chrome DevTools, including network tab analysis, performance tab optimization, and user flows. It discusses optimizing HTTP requests with fetch priority to improve loading time. The performance tab provides insights on frame drops, long tasks, and the importance of minimizing total blocking time. The talk also highlights the optimization of page rendering and introduces user flows in Chrome DevTools.

1. Introduction to Chrome DevTools

Short description:

Hello and welcome to my talk, High Speed Web Applications Beyond the Basics. I will cover the latest features in Chrome DevTools, including network tab analysis, performance tab optimization, and user flows. Let's start with the network tab, where you can analyze HTTP requests and use fetch priority to optimize content. In the performance tab, I'll demonstrate optimizations using content visibility and scheduling. Lastly, I'll introduce user flows and pitch the latest tools for measuring runtime performance.

Hello and welcome to my talk, High Speed Web Applications Beyond the Basics, a talk about the latest and greatest features in Chrome DevTools. Let me quickly introduce myself. My name is Michael, Michael Hladky. Very hard to read, write and pronounce, so let's stick with Michael. I do consulting, trainings and workshops in the field of performance optimization, Angular and reactive programming. I also run a company named Push-Based. You can visit it, just click on the link in my slides.

But now let's see what is on the agenda. First of all, I will talk about the Network tab. I will show you what you can see there, and then we will look at some of the latest features. One of the cool features shipped in Chrome is fetch priority, and I will use it to optimize the largest contentful paint both for an image and for HTTP requests. Later on, I will show you how to read the Performance tab. This is not easy, because there is a lot of information there, but I promise that after the talk you will have at least a little more understanding of what you see and what to look at. To demonstrate some optimizations in the Performance tab, I will use content visibility, a very nice cutting-edge CSS feature, and I will also introduce you to scheduling and chunking of work on the main thread.

At the very end of my talk, some really exciting stuff: I want to talk about user flows. User Flows is basically a new tool that is at the moment only available in Chrome Canary, and it enables completely new ways to measure runtime performance in the browser. At the end, I will pitch to you the latest and coolest tools around user flows, how to use them, and also how to integrate that stuff into your CI.

With no further ado, I will jump right into network analysis and the Network tab. What you see here is the Network tab with a lot of information about all the HTTP requests made by your application. If you have a closer look at the right part of this slide, you will see the waterfall diagram. The waterfall is basically a time-based bar chart that displays all our HTTP requests: their start, their end, and what phases they are made up of. If you hover over one of those bars, you will see the request timing, which shows information about connection time, how big the amount of data was, and all the other durations that make up the whole process of receiving that data. In this slide you also see a column that tells us the priority of the HTTP requests. Some of those requests are more important, have a higher priority, than others, and I want to leverage one of the latest features, Fetch Priority, to demonstrate what you can achieve with priority in your application.

Without going into more detail on the Network tab, I will go straight into practice and show you how we can change the requests that are made and how we can improve them. One of the first things I want to improve, also visible in the Network tab of course, is the connection time. In this slide you see at the very top an unoptimized version of two HTTP requests to two different domains: an orange block that connects and a blue block that downloads, then another orange block that connects and another blue block that downloads. If we leverage the preconnect hint on our link elements, we can tell the browser: we will fire requests to those two API endpoints in the future, so why not set up the connections right at the start of the application and save the connection time later on.
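As a minimal sketch of that idea, the preconnect hints in the document head could look like this (the origins are placeholders for your own API endpoints):

```html
<!-- In the document <head>: set up the connection (DNS lookup, TCP handshake,
     TLS negotiation) before the first request is actually fired. -->
<link rel="preconnect" href="https://api.example.com" crossorigin>
<link rel="preconnect" href="https://cdn.example.com">
```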

2. Optimizing HTTP Requests with Fetch Priority

Short description:

This section discusses the parallelization of connection locks, the priority of HTTP requests, and the use of fetch priority to optimize the largest contentful paint of an image. The example demonstrates the improvement in load time and the importance of having the largest contentful paint at the beginning. The next optimization involves leveraging fetch priority in HTTP requests.

This is demonstrated in the lower part of the picture: you can see that both connection blocks are now parallelized at the very beginning and the whole chart is a lot shorter.

The next thing, and this is the fancy new cool stuff, is the priority of those HTTP requests. Again, in this chart you see an unoptimized version at the top: some script execution, the fetching of resource A, the fetching of resource B, and then some rendering.

Of course, rendering an image is more important than executing some script or fetching resources that are only used later on. So the first thing we should do is make all the yellow scripting blocks asynchronous and non-blocking. This can be achieved with the defer attribute and the preload or prefetch resource hints. Deferring a script just means: move that script to the very end of the queue and go on parsing the HTML. Preloading and prefetching mean fetching data early, before it is visible or needed on the page: preload fetches resources that are accessed at a later point in time on the current page, and prefetch fetches resources that are used after a navigation. A sketch of all three follows below.
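A minimal sketch of those three hints in HTML (the file names are placeholders):

```html
<!-- defer: download in parallel, execute only after the HTML is parsed -->
<script defer src="/js/analytics.js"></script>

<!-- preload: fetch a resource early that this page will need later on -->
<link rel="preload" href="/fonts/heading.woff2" as="font" type="font/woff2" crossorigin>

<!-- prefetch: fetch a resource that will likely be needed after a navigation -->
<link rel="prefetch" href="/js/checkout-page.js" as="script">
```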

With those three things we can already get far, but there is another really fancy and very helpful feature: fetch priority. With fetch priority we can tell the browser which of our HTTP requests have higher priority than others, and I want to use it to optimize the largest contentful paint of an image. If we look at this code snippet, we see two links that fetch hero images, and one of those two images is more important than the other. Normally, just by the order of the HTML content, we would first fetch hero image 1 and only later hero image 2. With fetch priority we can tell the browser that the second image, even though it appears later in the document, has higher priority than the first one, and the browser will switch the order of those two HTTP requests and fetch the second one earlier.
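The snippet described above might look roughly like this (the image URLs are placeholders); the fetchpriority attribute also works directly on img elements:

```html
<!-- By document order, hero-1 would normally be fetched first; fetchpriority
     tells the browser that hero-2 (the LCP image) should win. -->
<link rel="preload" as="image" href="/img/hero-1.jpg" fetchpriority="low">
<link rel="preload" as="image" href="/img/hero-2.jpg" fetchpriority="high">

<!-- The same hint directly on the image element: -->
<img src="/img/hero-2.jpg" fetchpriority="high" alt="Hero">
```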

How would that look in practice? I took ObservableHQ as a dummy website, and what we see here is a poster image of a video that will start to play later on. This is definitely the largest contentful paint, the most important part the user should see at the beginning. By applying some tweaks to the HTML and using these hints, we end up with the following improvement. The first line of this filmstrip shows the default page, and the second line shows the outcome of my optimization. Two things are different. First of all, the whole chart is way shorter: I went from a total of 7 seconds down to 4.5 seconds. But the really important and interesting part is that the largest contentful paint now happens at the very beginning: I went from 7 seconds for the largest contentful paint, which you can see at the top, down to 2.5 seconds. This is also visible in the detailed diagram at the bottom. The image is really the first thing visible, and only after that comes some fetching. The image is always visible and gives a very nice user experience for users who want to consume this video, or at least see a first sneak peek.

The next optimization I want to do is leveraging fetch priority in HTTP requests made with the fetch API. When you use the fetch API, you can now also give the request a priority, and this is done by simply passing another option in the configuration, as you see here. With this technique, let's see what I did in practice. If we have a look at the page, we see two different pieces of dynamic content on the page.
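A minimal sketch of that fetch configuration, using the priority option from the Fetch Priority API (the endpoint paths are placeholders):

```js
// (top-level await assumes an ES module context)

// Critical, above-the-fold data: ask the browser to schedule it with high priority.
const movies = await fetch('/api/movies?page=1', { priority: 'high' });

// Less urgent content, e.g. the dynamic side-menu items: let it queue behind.
const menuItems = await fetch('/api/menu-items', { priority: 'low' });
```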

3. Optimizing Load with Fetch Priority

Short description:

We can optimize the order of HTTP requests using fetch priority, ensuring that critical content is fetched first. This feature allows us to improve the loading time of our web applications.

We see a list of movies and a side menu with a section that is made up of dynamically fetched menu items. As you can see, those HTTP requests are fired quite late. When I apply fetch priority, as you can see on the next slide, this work moves to the very beginning, and I was also able to shift the order of those two HTTP requests so that the movie list is fetched first and the dynamic list in the side menu after that. Pretty cool, pretty exciting stuff. Everything you saw is possible with this new cool feature, Fetch Priority.

4. Performance Tab and Long Tasks

Short description:

Next, we'll dive into the performance tab, which provides valuable insights but can be complex to interpret. We'll explore frame drops, long tasks, and the importance of allowing the browser to process user interactions quickly. Long tasks are identified by red areas or red triangles, and the overview at the top shows the frames per second rate. We'll aim to minimize long tasks and total blocking time to improve performance.

What's next? Next is the Performance tab. The Performance tab is one of the most insightful, but also most complicated, charts to read when it comes to performance tooling. In the next slides, I want to give you a sneak peek at what you can look at and also how to improve it. Let's start with what a frame drop or a long task is. First of all, a user always wants to interact with your page. Interaction means clicking, scrolling, or anything else that could happen via mouse or keyboard. One of the most important things is to give the browser the chance to process those interactions as fast as possible. If you look at the chart, you see grey boxes, and those grey boxes are so-called tasks. A task is basically a unit of work that the browser needs to process before it can do anything else, for example reacting to a user input. We can spot long tasks, tasks that take so long that they block the processing of user input, by the red area or by the small red triangle that you can see at the top right here. Another place to spot them is the overview at the very top: there you see red bars and green squiggles, which tell us, A, where our long tasks are and, B, what the frames-per-second rate is. If the frames per second are consistent, you can assume that our tasks are not blocking too much. At the very bottom, number 3, you see an overview, in this case the total time of our long tasks and their blocking time. Total blocking time is one of the most heavily weighted metrics in, for example, the Lighthouse score, and we should always try to reduce long tasks, or total blocking time, to a minimum.
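Long tasks can also be surfaced programmatically from the page itself; here is a minimal sketch using the Long Tasks API (supported in Chromium-based browsers):

```js
// Log every main-thread task that ran longer than 50 ms — exactly the tasks
// that DevTools flags with the red triangle.
const longTaskObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.warn(`Long task: ${Math.round(entry.duration)} ms`, entry.attribution);
  }
});
longTaskObserver.observe({ entryTypes: ['longtask'] });
```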

5. Understanding Single Tasks and Long Tasks

Short description:

Let's now zoom in and understand how a single task looks. The grey box marks the task, and the color indicates the type of work. We see the details of what was scripted, layouted, or painted. Long tasks are marked with a red triangle and indicate overtime. We aim to eliminate tasks longer than 50 milliseconds.

Let's now zoom in a little more and understand what one single task can look like. In this picture we see one task in detail. At the very top is a grey box; the grey box marks the task itself, but we also see the type of work: yellow, purple, or green, that is, scripting, layouting, or painting. Below all of that you can see the details: what exactly got scripted, or what exactly got layouted or painted. In this slide you see that this task is marked as a long task: we see the overtime area in these red hatched lines, and we also see the long-task flag, the red triangle in the top right corner of every grey box of every task that is marked as a long task. This information is very important for us, because this is what we need to get rid of: everything that is longer than 50 milliseconds. As you can see here, a task of up to 50 milliseconds is okay-ish, and everything over 50 milliseconds is the overtime of that task.

6. Optimizing Page Re-Layout and Paint

Short description:

With the latest browser features available in Edge and Chrome, we can optimize page re-layout and paint. Lab measurements show significant improvements in paint and layouting times. Field data demonstrates the impact of optimizing rendering time and introduces scheduling and the frame budget to reduce total blocking time. Optimizing total blocking time and input delay is also showcased, along with the exciting new feature of User Flows in Chrome DevTools.

With all that information covered, we jump to page re-layout and paint. This is the purple and green stuff that I want to show you how to optimize, and I want to use a recent browser feature, content visibility, available in Edge and also in Chrome. In this slide, taken from CanIUse, you can see where it is supported; as I said, it is unfortunately only supported in Edge and Chrome, but all other browsers are working hard to ship it.

Now that we understand where we can use it, let's see what the potential impact could be. This is a lab measurement taking one page in an unoptimized state, one page optimized with content visibility with all nodes on screen, meaning all content is visible within the viewport, and one with all content off screen, meaning it is somewhere below your screen size and not visible to the user at the moment. If we look at the numbers, the top numbers, in green, are for paint: we go from 6 milliseconds of paint unoptimized, to 1 millisecond optimized on screen, and, really nicely, to 0.1 milliseconds off screen. That is a tremendous impact. It is even better for layouting, the lower part of this slide: 11 milliseconds of update layer tree and paint unoptimized, 0.5 milliseconds with the optimization and everything on screen, and only 61 microseconds with everything off screen, which is a really interesting number and a dramatic impact.
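A minimal sketch of the CSS behind these numbers, assuming a hypothetical class for below-the-fold sections:

```css
/* Skip layout and paint for sections that are currently off screen. */
.below-the-fold-section {
  content-visibility: auto;
  /* Reserve an estimated size so the scrollbar doesn't jump
     while the section's content is being skipped. */
  contain-intrinsic-size: auto 600px;
}
```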

Lab measurements are nice to learn from and understand, but what we are really interested in is field data. So let's see what I achieved in the wild. Optimizing rendering time is the first thing I want to show you. In the next slide we see, again from ObservableHQ, some animation and layouting work at the top. The longest task in this layouting work took 260 milliseconds, of course a long task because it is longer than 50 milliseconds. With my application of content visibility I was able to go down to 15 milliseconds for the same work on the same website. This is a tremendous improvement, and it is really nice to see what is possible with just one or two slight changes in your application.

The next thing I want to introduce to you is scheduling and the frame budget. This is mostly important to reduce scripting, or at least to reduce the total blocking time of scripting. What we see in this slide is the scheduling of work and how it can improve Interaction to Next Paint, Time to Interactive, and total blocking time. Imagine there is a button click and this click causes some work. Instead of executing that work right away, I take this package of work, move it into the next task, into the next grey box, and execute it later in time (a small sketch of this pattern follows after this section's text). In this example I used an animation frame to do the update, because it was a visual update that caused some pixels to change, but this is also possible with a lot of other scheduling APIs. What is marked here is the scheduling moment in time and the scheduling duration. Let's see what would theoretically improve. The hot-pink dashed horizontal line is the next possible moment when the browser could process user interaction. This is a very nice improvement: it also improved Time to Interactive by a tremendous amount, as you can see from this first bracket, and we reduced total blocking time by 50 milliseconds, because every task below 50 milliseconds is okay and now we have two tasks instead of one. Pretty amazing improvements, and this is just the theory.

In practice, optimizing total blocking time and input delay is the next thing I want to show you, and in this example I want to demonstrate the movies application again and the bootstrapping of that application. In this diagram we see one huge task that is processing some JavaScript files and then executing the framework. After some optimizations, on the very left we still have a somewhat big task, because optimizing a webpack bundle and its compilation is not that easy, but everything that is framework land is now optimized. We can see a lot of hot-pink dashed vertical lines, and those are all separate tasks that, in between, give the browser the opportunity to process user input. As you can see, none of those tasks are long tasks, so these are pretty amazing improvements that we could achieve with scheduling and chunking.

The last and most exciting thing that I can demonstrate to you is User Flows. User Flows is one of the fancy new features that Chrome DevTools will ship; at the moment it is only accessible in Chrome Canary, but you have to know that there is an open-source library that you can install and use already today and run all that new stuff, fully stable, in your CI or from your CLI. The link is github.com/push-based/user-flow.
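Coming back to the button-click scheduling described above, here is a minimal sketch of moving the visual update out of the click task (the selector and both functions are made up for illustration):

```js
// Placeholder implementations for illustration only.
function updateApplicationState() { /* e.g. update a store or model */ }
function renderExpensiveVisualUpdate() { /* e.g. re-render a large list */ }

const button = document.querySelector('#order-button'); // placeholder selector

button.addEventListener('click', () => {
  // Cheap work stays in the click task…
  updateApplicationState();
  // …while the heavy pixel work is scheduled into a later frame,
  // so the browser can process other user input in between.
  requestAnimationFrame(() => renderExpensiveVisualUpdate());
});
```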

7. User Flows and Chrome Lighthouse

Short description:

What are user flows? Chrome Lighthouse now enables three measurement modes: navigation, time span, and snapshot. The time span measurement mode allows you to record user interactions within a specific duration of time. A user flow report in Chrome DevTools shows multiple steps, such as ordering a coffee online. The report viewer displays the navigation to the coffee cart application and the details of the recorded time span for selecting a coffee.

Please have a look, it is very interesting. So what are user flows? Chrome has a tool called Chrome Lighthouse, and Lighthouse so far was only able to measure bootstrap performance: it measured the moment when you navigated to a page for the first time, always as a cold navigation, and was, as I said, limited to bootstrap performance only. With version 9, Lighthouse now enables three measurement modes: navigation, time span, and snapshot.

Navigation is basically the default Lighthouse measurement that has been present ever since; until version 9, any measurement was a navigation measurement, or navigation mode. The second very cool measurement mode, and for me the most exciting one, is the time span mode, where you can start and stop recording over a duration of time and, within that duration, run some user interactions, for example fully automated with Puppeteer. At the very end we have snapshot, which is basically a way to take a so-called snapshot of your page at any moment in time. It is very useful to determine accessibility and other static measures at a later point in time, not only at navigation.
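To make the three modes concrete, here is a rough sketch of driving them from Node with Puppeteer and the Lighthouse user-flow API (the exact import path and option names can differ between Lighthouse versions, and the URL and selector are placeholders):

```js
// Run as an ES module with Node 18+ (uses top-level await).
import { writeFileSync } from 'node:fs';
import puppeteer from 'puppeteer';
import { startFlow } from 'lighthouse';

const browser = await puppeteer.launch({ headless: true });
const page = await browser.newPage();
const flow = await startFlow(page, { name: 'Order a coffee' });

// 1. Navigation mode: the classic cold-load Lighthouse measurement.
await flow.navigate('https://coffee-cart.app/');          // placeholder URL

// 2. Time span mode: record a window of user interaction.
await flow.startTimespan({ name: 'Select a coffee' });
await page.click('[data-test="espresso"]');               // placeholder selector
await flow.endTimespan();

// 3. Snapshot mode: audit the page state after the interaction.
await flow.snapshot({ name: 'Coffee selected' });

// One HTML report containing all three steps.
writeFileSync('user-flow.report.html', await flow.generateReport());
await browser.close();
```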

Let's see how this could look in practice. What you see here, and in a second I will show it to you live, is a user flow report: a report that looks quite similar to a Lighthouse report but, as you can see, has multiple different steps. What I did here is basically order a coffee online. Let me quickly exit the slides and demonstrate how this report looks in real life. I will open up Chrome DevTools and, hopefully, if I don't make a mistake, drag and drop the report directly here. As you can see, pop. So this is the normal Lighthouse report viewer, and it already supports user flows. The first step, let me click on the summary, is a navigation to this coffee cart application. I will open the application for you so you can see it: a very primitive application. I can select a coffee, click on it, enter some user data, and then order the coffee. I see the confirmation message at the bottom, and that's what I wanted to record. So let's go back and have a look at this. What I did was a navigation to this page, and from the numbers you can see this is a default Lighthouse score. I can click on it and see the full details: all my web vitals, some images, the timings, the tree map, and all the diagnostics at the bottom. At a later point in time, I recorded the time span of selecting one coffee: from no coffee selected, to hovering over a coffee, to clicking and selecting it. This is recorded here. As you can see, it provides us a reduced set of numbers for our recording: we see total blocking time, cumulative layout shift, and other metrics.

8. User Flow Insights and Conclusion

Short description:

After selecting a coffee, I wanted to ensure accessibility, SEO, and best practices were still met. The numbers show a reduced set of audits, but still provide valuable insights. There are more measurements for checkout and order submission. I recommend checking out the GitHub link for user-flow. Thank you for your time.

And if I scroll down further, we also see all the detailed recordings. The last one is the "select a coffee" snapshot. So after I selected a coffee, I wanted to make sure: is all the accessibility still given? Are the SEO checks met? Are best practices still followed? As you can see, the set of numbers here is reduced, but it still gives us a lot of insight into what we can do with these new tools.

Of course, there are more measurements: another time span for the checkout, another snapshot for the checkout, another time span for the order submission, and another snapshot for the order submission. Really nice tools. I highly recommend that you check out the link from before on GitHub, push-based/user-flow, and you can use it directly in your project, from the CLI, or even in CI.

Let me jump back to the slides and open them full screen. And let me say thanks for your time. This is the very end of this very small, dense, and brief talk about some of the latest and greatest features. I know it was quite a lot of content in a minimum of time. If you have any questions, feel free to shoot me an email at michael.hladky@push-based.io. I'm also on Twitter, and probably more active there than on any other platform. And again, the GitHub link to the latest and coolest feature, user-flow, please check it out. Thanks a lot for your time and see you later. Enjoy the rest of the conference.
