Node.js Runtime Performance Tips


Node.js runs big systems today. Sometimes you can improve user experience and save on cloud expenses by optimizing your Node.js scripts. In this talk I will share tips from production on how to improve Node.js runtime performance.

This talk was presented at Node Congress 2021. Check out the latest edition of this JavaScript conference.

FAQ

Yonatan Kra is a software architect at Vonage, an egghead instructor, blogger, and a full-time geek who enjoys running.

Yonatan encountered a stuttering microservice in the system due to an unoptimized function that was taking 2-3 seconds to finish instead of milliseconds.

Runtime optimization is important because if a function takes too long to execute, it can block the main thread, causing the application to become unresponsive and delaying other tasks such as API calls and promise resolutions.

Profiling can be used to compare the performance of functions by measuring their execution time and identifying any garbage collection instances. For example, profiling showed that BuildArray1 took around 40 milliseconds, while BuildArray2 took around 7 milliseconds due to differences in garbage collection.

Garbage collection is the process in which JavaScript engines clear the memory from unreferenced objects, thereby freeing up memory resources.

You can use Chrome DevTools' profiler page to profile Node.js applications. By running Node.js in debug mode with the --inspect flag, you can record and analyze the performance of functions and memory usage.

Memory leaks can be identified by profiling memory usage and looking for blue bars in the memory tab, which indicate allocated memory that wasn't garbage collected. To fix memory leaks, ensure that objects are properly dereferenced when they are no longer needed.

BuildArray1 took around 40 milliseconds to run and had many garbage collection instances, while BuildArray2 took around 7 milliseconds and did not have garbage collection instances, making it more efficient.

Developers can use tools like Postman to make API calls and Chrome DevTools to profile the Node.js server, allowing them to track the performance of each function involved in handling the API request.

A gray bar in memory profiling indicates that memory was allocated and subsequently garbage collected, meaning there are no memory leaks for that particular allocation.

Yonatan Kra
9 min
24 Jun, 2021

Video Summary and Transcription
This Talk focuses on the importance of runtime optimization in software development. It discusses the impact of unoptimized functions and the role of garbage collection. The Talk also highlights the use of profiling tools to identify and improve performance issues. Additionally, it emphasizes the importance of memory profiling to prevent memory leaks and optimize application performance.

1. Introduction to Runtime Optimization

Short description:

Many years ago, I encountered a critical issue in a big system where a stuttering microservice caused delays of 2-3 seconds. As a software architect at Vonage, I have dedicated years to optimizing runtime techniques. In this part, we will explore how to identify and improve the performance of unoptimized functions, using a simple example. We will also discuss the impact of garbage collection and the importance of runtime optimization.

Many years ago, I was working on a big system, a critical life and death kind of system. The system was working well, until one day I got a call from a customer. "Yonatan," he said, "the system is not answering my calls." What he was experiencing was a stuttering microservice in the system. There was one unoptimized function in the pipeline that was taking 2-3 seconds to finish instead of milliseconds. For my customer, these 2-3 seconds were critical, especially when looking at a system at scale.

My name is Yonatan Kra, software architect at Vonage. I'm an egghead instructor, blogger, and a full-time geek. I also enjoy running. I have spent years optimizing my runtime techniques, which is important when you need to run away from bullies. Today I'm going to show you how to spot unoptimized functions, and how to improve runtime performance in your applications.

We'll begin by looking at a simple example. We have two functions here that do the same thing. They both create an array and push elements into it. BuildArray1 creates an empty array and dynamically pushes the indices into the array. BuildArray2 pre-allocates the array and just sets the right value at the right index. The result is the same. Let's take a look at their profile. In the profile, you can see that BuildArray1 took around 40 milliseconds to run, while BuildArray2 took around 7 milliseconds. That's a huge difference for functions that do the same thing. If we look deeper, we'll see that BuildArray1 had lots of gray bars inside, and these are garbage collection instances, while BuildArray2 did not have these garbage collection instances. So this is how we use profiling to compare the performance of different implementations of the same function, for instance, and see whether an improvement really improved our application, or search for the right solution.
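A minimal sketch of the two implementations described above (the array size used here is an assumption; the talk doesn't state the exact number behind the 40 ms vs. 7 ms profile):

```js
// buildArray1: starts empty and grows dynamically; the engine repeatedly
// re-allocates the backing store, which creates garbage-collection pressure.
function buildArray1(size) {
  const arr = [];
  for (let i = 0; i < size; i++) {
    arr.push(i);
  }
  return arr;
}

// buildArray2: pre-allocates the array once and writes each index in place.
function buildArray2(size) {
  const arr = new Array(size);
  for (let i = 0; i < size; i++) {
    arr[i] = i;
  }
  return arr;
}

// The results are identical; only the allocation pattern differs.
buildArray1(1_000_000);
buildArray2(1_000_000);
```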

A word about garbage collection. Garbage collection is the process in which JavaScript engines clear the memory of unreferenced objects. I already said I'm a runner, and this is a good reason why I like to run on Saturdays: there are no garbage collectors on Saturday. Why should you actually care about runtime optimization? This is a caricature of the event loop in Node.js. What you should take from this is that the event loop is running your main thread, and it's running tasks. Tasks are actually the callbacks: your functions. And if a task takes too long, it blocks the main thread, and while that function is running, nothing else runs.

2. Optimizing Functions and Profiling Performance

Short description:

When optimizing your functions, it's important to ensure smooth running and prevent delays that can affect API calls and promises. Profiling tools like the Chrome inspect page and the dedicated DevTools for Node.js can help identify and improve performance. Memory profiling can also uncover memory leaks and point to solutions. By clearing referencing arrays and monitoring allocations, you can prevent memory leaks and optimize your application's performance.

So if this function is running, no API calls are being taken care of, no promises are being resolved, and your application is just stuck and everything else is waiting. So this is a good reason to optimize your functions and make sure everything is running smoothly.
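As a rough illustration of that blocking behaviour (a sketch, not code from the talk; the two-second busy-wait is just for demonstration):

```js
const start = Date.now();

// A timer scheduled for 0 ms cannot fire until the synchronous work below
// yields the main thread back to the event loop.
setTimeout(() => {
  console.log(`timer fired after ${Date.now() - start} ms`); // ~2000, not 0
}, 0);

// Busy-wait for roughly two seconds: while this loop runs, no API callbacks
// are handled and no promises are resolved.
while (Date.now() - start < 2000) {
  // blocking the event loop on purpose
}
```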

Let's see an example of that. For this, we'll go to the IDE. Look at this function, something quite noticeable. It should look familiar: it's an array that a million elements are being pushed into, and we've added a setInterval that makes sure it's called once a second.
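A hedged reconstruction of that demo; the identifier somethingQuiteNoticeable is an assumption based on how the function is referred to in the talk, not the name from its repository:

```js
// Called once a second; each call builds a large local array.
function somethingQuiteNoticeable() {
  const arr = [];
  for (let i = 0; i < 1_000_000; i++) {
    arr.push(i);
  }
  return arr; // the array becomes unreachable after the call, so it can be GC'd
}

setInterval(somethingQuiteNoticeable, 1000);
```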

So if we run our Node with the --inspect flag, it starts Node in debug mode, and we can go to the Chrome inspect page, open the dedicated DevTools for Node, and go to the Profiler tab. This time, we'll record for two or three seconds, finish, and here we go. We have our intervals, one per second, and if you deep dive into one of them, we actually see our call to something quite noticeable and exactly how long the function took to run. This way, we can profile everything in our Node application. For instance, you can start an API call using Postman and track everything that happens from the moment the API call gets to the server until the response goes out, and you can see how long every function took to run. If you see a long-running function, you might want to optimize it.

Another issue in runtime performance is memory, and we'll soon see how to profile memory and maybe even solve memory leaks. This time, something quite noticeable pushes into an array that is created outside the function, so on every interval we just grow this array, and we never garbage collect the internal array we had before. So, let's see how this looks when we profile memory.
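A minimal sketch of that leaking variant. The Special class name and the batch of 100 objects per interval are assumptions inferred from the heap snapshot description below:

```js
// "Special" and the batch size of 100 are assumptions, not names from the talk's code.
class Special {
  constructor(index) {
    this.index = index;
  }
}

// The referencing array lives outside the interval callback, so every
// allocation stays reachable and is never garbage collected.
const specials = [];

setInterval(function somethingQuiteNoticeable() {
  for (let i = 0; i < 100; i++) {
    specials.push(new Special(i));
  }
}, 1000);
```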

So, I start again in debug mode. This time I'll go to the Memory tab and make sure Allocation instrumentation on timeline is selected. I start recording and I see these blue bars appear. A blue bar means memory that was allocated and was not garbage collected. And if I focus on one of them, I actually see these 100 specials, as expected. Every second, 100 specials were allocated and not cleared. If I focus on one of the specials, I can see its index in the array, the name of the referencing array, and even the line in the code that allocated this object. I can easily see whether I want this object to be garbage collected or not. If I do want it to be garbage collected, then I have a leak.

Let's fix this leak quite easily by just clearing the referencing array on every interval. I'll stop the server, restart it, and record again. Now, the blue bar becomes gray. Gray means that we had an allocation here, but it was garbage collected. Again, you can call your API and see if you have a blue bar that becomes gray after the API call finishes running; if not, you might have a memory leak in your API handler, for instance.
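The fix, sketched under the same assumptions as above: clearing the referencing array at the start of each interval makes the previous batch unreachable, so it can be garbage collected.

```js
class Special {
  constructor(index) {
    this.index = index;
  }
}

const specials = [];

setInterval(() => {
  // Dropping the old references makes the previous interval's objects
  // eligible for garbage collection, which is why the blue bars in the
  // allocation timeline turn gray.
  specials.length = 0;

  for (let i = 0; i < 100; i++) {
    specials.push(new Special(i));
  }
}, 1000);
```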

Usually, when I speak to developers about performance or help developers solve performance issues, I see a lot of confusion about how to handle performance problems. I hope this talk helped you understand the powerful profiling tools that you have, and how you can help your functions run faster and prevent memory leaks in your applications. I truly hope I've piqued your interest to learn more about this subject, which I'm very passionate about, and that you'll enjoy it as much as I do.
