Practical Web App Performance Problem Solving


In this talk we will learn how to solve performance issues. We will learn how the JS engine works, see use cases from production, and come away with practical tips that can help you boost your app's performance by 90%!

This talk was presented at TestJS Summit, January 2021. Check out the latest edition of this JavaScript conference.

FAQ

Runtime performance refers to how efficiently a program runs during its execution. It involves profiling and optimizing functions to reduce their execution time and improve overall efficiency.

Garbage collection in JavaScript is a process where the JavaScript engine automatically identifies and removes objects that are no longer needed from memory to free up resources.

Profiling helps identify the parts of a program that are using the most resources or taking the longest to execute. By analyzing these areas, developers can make targeted optimizations to enhance performance.

The buildArray function iterates and pushes items to an array, causing frequent garbage collections and slower performance. In contrast, buildArray2 pre-allocates the array and then iterates, resulting in fewer garbage collections and faster execution.
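A minimal sketch of the two functions described above, assuming names and behavior from the description (the talk's exact code is not shown here):

```js
// buildArray: grows the array by pushing, which forces repeated backing-store
// re-allocations and triggers frequent minor garbage collections.
function buildArray(n) {
  const arr = [];
  for (let i = 0; i < n; i++) {
    arr.push(i);
  }
  return arr;
}

// buildArray2: pre-allocates the array once and fills existing slots,
// so there is far less intermediate garbage to collect.
function buildArray2(n) {
  const arr = new Array(n);
  for (let i = 0; i < n; i++) {
    arr[i] = i;
  }
  return arr;
}
```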

Optimizing tasks within the event loop is crucial because it prevents the main thread from being blocked, allowing the application to remain responsive. This ensures that user interactions and animations run smoothly without delays.

Using setInterval in JavaScript can add recurring tasks to the event loop, potentially leading to performance bottlenecks if not managed properly. It's important to ensure these tasks do not hinder the main thread's responsiveness.

Profiling in the browser can be done using built-in developer tools. Developers can record performance over a period, analyze task execution, and identify bottlenecks in the flame chart to optimize runtime performance.

Node.js applications can be profiled using the Chrome DevTools by starting the application with the --inspect flag, then using the profiler tab to record and analyze performance metrics.

In the example with the 3D visualizer app, optimizing runtime performance dramatically reduced the scripting time from 50% to 2%. This optimization prevented the UI from being blocked, enhancing user interaction and responsiveness.

Yonatan Kra
8 min
15 Jun, 2021

Video Summary and Transcription
This talk discusses runtime performance in software development. It explains garbage collection and the importance of optimizing functions for faster execution, and highlights the event loop and profiling as essential tools for analyzing and improving runtime performance. Real-life examples demonstrate how profiling helps optimize functions and improve app performance.

1. Introduction to Runtime Performance

Short description:

Hi, I'm Yonatan, a software architect at Vonage. Today we'll be talking about runtime performance. Garbage collection is the process in JavaScript where unnecessary objects are removed from memory. Let's compare two functions, buildArray and buildArray2, to see the difference in their runtime performance. Optimizing functions for faster execution is crucial.

Hi, I'm Yonatan, and I'm a software architect at Vonage. I'm also a runner. This is me winning a half marathon, and this is relevant because today we'll be talking about runtime performance. So here's the proof that you can take my word on it.

What is runtime performance? Let's look at it through an example: garbage collection. Garbage collection is the process in JavaScript in which the engine takes objects that are not needed anymore and removes them from memory. That's it in one sentence. What can be the problem with that? Let's see.

We have two functions here. One is buildArray, which creates an array, iterates n times, and pushes items into the array. The other is buildArray2. It pre-allocates the array and then iterates n times, putting the same items into the same indices. Two functions doing similar things, but let's see if they differ in something. Here we can see the profiling of these two functions. buildArray took longer to run than buildArray2, and we can actually use this profile to see why. If we go deeper, we can see that in buildArray we had around 1,250 occurrences of minor garbage collection. If we look at buildArray2, we see it's around 200. This is a big difference, and this is, in essence, runtime performance: profiling and optimizing functions to take less time to run. Why is it important?

2. Understanding the Event Loop and Profiling

Short description:

The event loop is crucial for running the main thread smoothly in both the browser and Node.js. Profiling applications in the browser allows us to analyze runtime performance and optimize tasks. The same can be done in Node.js using the Chrome inspect page. Real-life examples demonstrate how profiling helped optimize functions and improve app performance. To summarize, profiling is essential for optimizing runtime performance, and there are plenty of resources available to learn more.

The event loop is what's running our main thread. This is where the code of our application is running. If it is blocked, then our code is not running: other code that needs to run won't run, for instance an API response on the server side, and in the browser the user can't click anything and animations get stuck.
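As a rough illustration (not from the talk), a long synchronous task like the one below keeps the event loop busy, so while it runs no clicks, timers, or animation frames can be handled:

```js
// Illustrative only: a synchronous busy-wait that blocks the main thread.
function blockFor(ms) {
  const start = Date.now();
  while (Date.now() - start < ms) {
    // the event loop can process nothing else during this loop
  }
}

// Hypothetical usage: clicking this button freezes the UI for ~2 seconds.
document.querySelector('#heavy-button')?.addEventListener('click', () => {
  blockFor(2000);
});
```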

So this is in the browser, and this is in Node.js. And, again, the important thing to take from here is that you want the tasks to be as optimized as possible, and let's see how we can see the tasks, and how we can optimize them.

So this is a function, something quite noticeable; it should be familiar to you. Instead of n we have a million: it creates an array of a million elements. But we also have setInterval. setInterval is a timer, and a timer is one of the things that add tasks to the event loop. So every second, something quite noticeable will be added to the event loop and will be run as a task. Let's see it in a demo (a rough sketch of this demo appears at the end of this transcript).

This is our function here. It's running in the browser; we go to the Performance tab and hit record. We record for around five seconds, so we should have around five repeats of this function. And we can see these bumps here, and we can see them in the flame chart. These bumps are coming every second. This is our setInterval. We can see that it adds a task every time, and the task is something quite noticeable. So we can actually see everything that happens during the runtime and analyze it for optimization. We have a summary tab that shows us, for instance, if we look at the whole runtime, how long our app was busy scripting versus idle. Or we can look at the call tree: look at one task and see what happened during it, or look at the whole recording and search through all the calls for something quite noticeable, too, and here we can see some minor GC. So this is the gist of profiling applications in the browser.

Let's see how you can do this in Node.js. In Node.js you have the Chrome inspect page, and you have to start your application with the --inspect flag. The app is running, you open the dedicated DevTools for Node, go to the Profiler tab, start profiling, profile for around five seconds again, stop profiling, and we see our bumps here again. It's the same as it is in the browser. If you know how to optimize in the browser, you can do it in Node.js, and vice versa.

How can this help you in real life? Let's see a real-life example. In an app we built, we used Cesium, which is a 3D visualizer of the globe, and we had to put a lot of entities on this map, and this caused the UI to get stuck. So we profiled, and we found out that two functions took a long time to run every frame: the updates of the label and the billboard. We investigated these functions and found out that if we add a dirty flag to the entities, set only when we update them, we can optimize so that entities that did not get an update won't be processed by these functions. The result is that from 50 per cent of the time spent scripting, we went down to two per cent, the app was saved, and people could interact with it because the main thread was not blocked.

To summarize, we saw the event loop and how it manages our main thread, so we don't want to block it. I can't stress enough the importance of profiling while optimizing your runtime performance, and I'd really like you to try it, learn it, and enjoy it. There's a lot to read about it: on my blog, on the Google Web Dev blog, and in lots of other places around the internet.
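For reference, here is a rough reconstruction of the demo described above. The function name, array size, and interval are assumptions based on the transcript, and the Node.js command reflects the --inspect flag mentioned in the talk:

```js
// Assumed reconstruction of the demo: every second the timer adds a task to
// the event loop that builds a million-element array by pushing, which shows
// up as a "bump" in the Performance flame chart along with minor GC activity.
function somethingQuiteNoticeable() {
  const arr = [];
  for (let i = 0; i < 1_000_000; i++) {
    arr.push(i);
  }
  return arr;
}

setInterval(somethingQuiteNoticeable, 1000);

// To profile the same code in Node.js, start the app with the --inspect flag
// (node --inspect app.js), open the dedicated DevTools for Node from
// chrome://inspect, and record in the Profiler tab.
```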

Check out more articles and videos

We constantly think of articles and videos that might spark people's interest, skill us up, or help build a stellar career

A Guide to React Rendering Behavior
React Advanced 2022
25 min
A Guide to React Rendering Behavior
Top Content
This transcription provides a brief guide to React rendering behavior. It explains the process of rendering, comparing new and old elements, and the importance of pure rendering without side effects. It also covers topics such as batching and double rendering, optimizing rendering and using context and Redux in React. Overall, it offers valuable insights for developers looking to understand and optimize React rendering.
Speeding Up Your React App With Less JavaScript
React Summit 2023
32 min
Speeding Up Your React App With Less JavaScript
Top Content
Watch video: Speeding Up Your React App With Less JavaScript
Miško, the creator of Angular and AngularJS, discusses the challenges of website performance and JavaScript hydration. He explains the differences between client-side and server-side rendering and introduces Qwik as a solution for efficient component hydration. Miško demonstrates examples of state management and intercommunication using Qwik. He highlights the performance benefits of using Qwik with React and emphasizes the importance of reducing JavaScript size for better performance. Finally, he mentions the use of Qwik in both MPA and SPA applications for improved startup performance.
React Concurrency, Explained
React Summit 2023
23 min
React Concurrency, Explained
Top Content
Watch video: React Concurrency, Explained
React 18's concurrent rendering, specifically the useTransition hook, optimizes app performance by allowing non-urgent updates to be processed without freezing the UI. However, there are drawbacks such as longer processing time for non-urgent updates and increased CPU usage. The useTransition hook works similarly to throttling or debouncing, making it useful for addressing performance issues caused by multiple small components. Libraries like React Query may require the use of alternative APIs to handle urgent and non-urgent updates effectively.
The Future of Performance Tooling
JSNation 2022
21 min
The Future of Performance Tooling
Top Content
Today's Talk discusses the future of performance tooling, focusing on user-centric, actionable, and contextual approaches. The introduction highlights Addy Osmani's expertise in performance tools and his passion for DevTools features. The Talk explores the integration of user flows into DevTools and Lighthouse, enabling performance measurement and optimization. It also showcases the import/export feature for user flows and the collaboration potential with Lighthouse. The Talk further delves into the use of flows with other tools like WebPageTest and Cypress, offering cross-browser testing capabilities. The actionable aspect emphasizes the importance of metrics like Interaction to Next Paint and Total Blocking Time, as well as the improvements in Lighthouse and performance debugging tools. Lastly, the Talk emphasizes the iterative nature of performance improvement and the user-centric, actionable, and contextual future of performance tooling.
Optimizing HTML5 Games: 10 Years of Learnings
JS GameDev Summit 2022
33 min
Optimizing HTML5 Games: 10 Years of Learnings
Top Content
Watch video: Optimizing HTML5 Games: 10 Years of Learnings
PlayCanvas is an open-source game engine used by game developers worldwide. Optimization is crucial for HTML5 games, focusing on load times and frame rate. Texture and mesh optimization can significantly reduce download sizes. GLTF and GLB formats offer smaller file sizes and faster parsing times. Compressing game resources and using efficient file formats can improve load times. Framerate optimization and resolution scaling are important for better performance. Managing draw calls and using batching techniques can optimize performance. Browser DevTools, such as Chrome and Firefox, are useful for debugging and profiling. Detecting device performance and optimizing based on specific devices can improve game performance. Apple is making progress with WebGPU implementation. HTML5 games can be shipped to the App Store using Cordova.
How React Compiler Performs on Real Code
React Advanced 2024
31 min
How React Compiler Performs on Real Code
Top Content
I'm Nadia, a developer experienced in performance, re-renders, and React. The React team released the React Compiler, which eliminates the need for manual memoization. The compiler optimizes code by automatically memoizing components, props, and hook dependencies. It shows promise in managing changing references and improving performance. Real app testing and synthetic examples have been used to evaluate its effectiveness. The impact on initial load performance is minimal, but further investigation is needed for interaction performance. The React Query library simplifies data fetching and caching. The compiler has limitations and may not catch every re-render, especially with external libraries. Enabling the compiler can improve performance, but manual memoization is still necessary for optimal results. There are risks of overreliance and messy code, but the compiler can be adopted file by file or folder by folder with thorough testing.

Workshops on related topic

React Performance Debugging Masterclass
React Summit 2023
170 min
React Performance Debugging Masterclass
Top Content
Featured WorkshopFree
Ivan Akulov
Ivan’s first attempts at performance debugging were chaotic. He would see a slow interaction, try a random optimization, see that it didn't help, and keep trying other optimizations until he found the right one (or gave up).
Back then, Ivan didn’t know how to use performance devtools well. He would do a recording in Chrome DevTools or React Profiler, poke around it, try clicking random things, and then close it in frustration a few minutes later. Now, Ivan knows exactly where and what to look for. And in this workshop, Ivan will teach you that too.
Here’s how this is going to work. We’ll take a slow app → debug it (using tools like Chrome DevTools, React Profiler, and why-did-you-render) → pinpoint the bottleneck → and then repeat, several times more. We won’t talk about the solutions (in 90% of the cases, it’s just the ol’ regular useMemo() or memo()). But we’ll talk about everything that comes before – and learn how to analyze any React performance problem, step by step.
(Note: This workshop is best suited for engineers who are already familiar with how useMemo() and memo() work – but want to get better at using the performance tools around React. Also, we’ll be covering interaction performance, not load speed, so you won’t hear a word about Lighthouse 🤐)
Building WebApps That Light Up the Internet with QwikCity
JSNation 2023
170 min
Building WebApps That Light Up the Internet with QwikCity
Featured WorkshopFree
Miško Hevery
Building instant-on web applications at scale has been elusive. Real-world sites need tracking, analytics, and complex user interfaces and interactions. We always start with the best intentions but end up with a less-than-ideal site.
QwikCity is a new meta-framework that allows you to build large-scale applications with constant startup performance. We will look at how to build a QwikCity application and what makes it unique. The workshop will show you how to set up a QwikCity project and how routing works with layouts. The demo application will fetch data and present it to the user in an editable form. And finally, we will see how one can use authentication. All of the basic parts for any large-scale application.
Along the way, we will also look at what makes Qwik unique, and how resumability enables constant startup performance no matter the application complexity.
Next.js 13: Data Fetching Strategies
React Day Berlin 2022
53 min
Next.js 13: Data Fetching Strategies
Top Content
WorkshopFree
Alice De Mauro
- Introduction
- Prerequisites for the workshop
- Fetching strategies: fundamentals
- Fetching strategies – hands-on: fetch API, cache (static VS dynamic), revalidate, suspense (parallel data fetching)
- Test your build and serve it on Vercel
- Future: Server components VS Client components
- Workshop easter egg (unrelated to the topic, calling out accessibility)
- Wrapping up
React Performance Debugging
React Advanced 2023
148 min
React Performance Debugging
Workshop
Ivan Akulov
Ivan’s first attempts at performance debugging were chaotic. He would see a slow interaction, try a random optimization, see that it didn't help, and keep trying other optimizations until he found the right one (or gave up).
Back then, Ivan didn’t know how to use performance devtools well. He would do a recording in Chrome DevTools or React Profiler, poke around it, try clicking random things, and then close it in frustration a few minutes later. Now, Ivan knows exactly where and what to look for. And in this workshop, Ivan will teach you that too.
Here’s how this is going to work. We’ll take a slow app → debug it (using tools like Chrome DevTools, React Profiler, and why-did-you-render) → pinpoint the bottleneck → and then repeat, several times more. We won’t talk about the solutions (in 90% of the cases, it’s just the ol’ regular useMemo() or memo()). But we’ll talk about everything that comes before – and learn how to analyze any React performance problem, step by step.
(Note: This workshop is best suited for engineers who are already familiar with how useMemo() and memo() work – but want to get better at using the performance tools around React. Also, we’ll be covering interaction performance, not load speed, so you won’t hear a word about Lighthouse 🤐)
High-performance Next.js
React Summit 2022
50 min
High-performance Next.js
Workshop
Michele Riva
Next.js is a compelling framework that makes many tasks effortless by providing many out-of-the-box solutions. But as soon as our app needs to scale, it is essential to maintain high performance without compromising maintenance and server costs. In this workshop, we will see how to analyze Next.js performance and resource usage, how to scale it, and how to make the right decisions while writing the application architecture.
Maximize App Performance by Optimizing Web Fonts
Vue.js London 2023
49 min
Maximize App Performance by Optimizing Web Fonts
WorkshopFree
Lazar Nikolov
You've just landed on a web page and you try to click a certain element, but just before you do, an ad loads on top of it and you end up clicking that thing instead.
That…that’s a layout shift. Everyone, developers and users alike, knows that layout shifts are bad. And the later they happen, the more disruptive they are to users. In this workshop we're going to look into how web fonts cause layout shifts and explore a few strategies for loading web fonts without causing big layout shifts.
Table of Contents:
- What’s CLS and how it’s calculated?
- How fonts can cause CLS?
- Font loading strategies for minimizing CLS
- Recap and conclusion