So while this function is running, no API calls are being handled, no promises are being resolved, and your application is just stuck; everything else is waiting. That's a good reason to profile your functions and make sure everything runs smoothly.
Let's see an example of that. For this, we'll go to the IDE. Look at this function; you should be familiar with it by now: it's an array that a million elements are being pushed to, and we've added a setInterval that calls it once a second.
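A minimal sketch of what that demo might look like; the function name and exact values are my assumptions, not the talk's actual code:

```javascript
// Hypothetical reconstruction of the CPU-heavy demo function.
function buildHugeArray() {
  const items = [];
  for (let i = 0; i < 1_000_000; i++) {
    items.push(i); // synchronous work: the event loop is blocked while this runs
  }
  return items;
}

// Call it once a second, as in the demo.
const timer = setInterval(() => buildHugeArray(), 1000);

// Not in the original demo: stop after a few seconds so the script can exit.
setTimeout(() => clearInterval(timer), 3500);
```

While each call runs, nothing else on the event loop makes progress, which is exactly what shows up as a wide block in the CPU profile.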
So if we run node with the `--inspect` flag, it starts Node in debug mode. We can then go to the chrome://inspect page, open the dedicated DevTools for Node, and go to the Profiler tab. This time, we'll record for two or three seconds, stop, and here we go: we have our intervals, one per second, and if we drill down into one of them, we actually see the call to our function and exactly how long it took to run. This way, we can profile everything in our Node application. For instance, you can start an API call using Postman and track everything that happens from the moment the request reaches the server until the response is sent, see how long every function took to run, and if you spot a long-running function, you might want to optimize it.
Another issue in runtime performance is memory, and we'll now see how to profile memory and maybe even solve memory leaks. This time, the function pushes into an array that is created outside of it, so on every interval we just grow this array, and the objects we allocated on previous intervals are never garbage collected. So, let's see how this looks when we profile memory.
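A sketch of the leaky version, assuming the same shape as the talk's demo (names and counts are mine):

```javascript
// The array lives outside the interval callback, so everything pushed into it
// stays reachable forever and can never be garbage collected.
const leaked = [];

function allocate() {
  for (let i = 0; i < 100; i++) {
    leaked.push({ index: leaked.length }); // referenced by `leaked` for good
  }
}

const timer = setInterval(allocate, 1000);

// Not in the original demo: stop after a few seconds so the script can exit.
setTimeout(() => clearInterval(timer), 3500);
```

Every tick adds 100 objects that remain referenced, which is what produces the blue bars in the allocation timeline.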
So, I start again in debug mode. This time I go to the Memory tab and make sure "Allocation instrumentation on timeline" is selected. I start recording and see these blue bars appear. A blue bar means memory that was allocated and was not garbage collected. If I focus on one of them, I actually see the 100 objects, as expected: every second, 100 objects were allocated and not cleared. If I focus on one of the objects, I can see its index in the array, the name of the referencing array, and even the line in the code that allocated this object. I can easily tell whether I want this object to be garbage collected or not. If I do want it to be garbage collected, then I have a leak.
Let's fix this leak quite easily by just clearing the referencing array on every interval. I'll stop the server, restart it, and record again. Now, the blue bar becomes gray. Gray means that we had an allocation here, but it was garbage collected. Again, you can call your API and check that the blue bar turns gray after the API finishes running; if it doesn't, you might have a memory leak in your API handler, for instance.
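The fix can be sketched like this, again with assumed names: resetting the array each tick drops the references to last tick's objects, so they become eligible for collection.

```javascript
const items = [];

function allocateAndClear() {
  // Clearing the array first means last tick's objects are no longer
  // reachable, so the garbage collector is free to reclaim them.
  items.length = 0;
  for (let i = 0; i < 100; i++) {
    items.push({ index: i });
  }
}

const timer = setInterval(allocateAndClear, 1000);

// Not in the original demo: stop after a few seconds so the script can exit.
setTimeout(() => clearInterval(timer), 3500);
```

With this change, memory usage stays flat: each interval's allocations are collected before the next one, which is why the bars in the timeline turn gray.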
Usually, when I speak to developers about performance or help them solve performance issues, I see a lot of confusion about how to handle performance problems. I hope this talk helped you understand the powerful profiling tools you have, how to make your functions run faster, and how to prevent memory leaks in your applications. I truly hope I piqued your interest to learn more about this subject. I'm very passionate about it, and I hope you'll enjoy it as much as I do.