The event loop is what runs our main thread, and the main thread is where our application code executes. If it is blocked, our code is not running, and nothing else that needs the thread can run either: on the server side an API response can't be sent, and in the browser the user can't click anything and animations get stuck.
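To make the blocking point concrete, here's a minimal sketch of my own (not from the talk) of a long synchronous task in the browser; it assumes a button with the id "save" exists on the page:

```js
// Sketch: a long synchronous task blocks the main thread (browser example).
document.querySelector('#save')?.addEventListener('click', () => {
  console.log('clicked'); // won't run while the loop below is busy
});

// Simulate heavy synchronous work: no clicks, rendering, or timers
// are handled until this loop returns control to the event loop.
const start = Date.now();
while (Date.now() - start < 3000) {
  // busy-wait for ~3 seconds
}
```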
So this is how it looks in the browser, and this is how it looks in Node.js. Again, the important thing to take from here is that you want the tasks to be as short and optimized as possible, so let's see how we can inspect the tasks and how we can optimize them.
So this is a function, something quite noticeable; it should look familiar to you, except that instead of N we have a million, so it creates an array of a million elements. And we have a setInterval. setInterval is a timer, and a timer is one of the things that adds tasks to the event loop. So every second, something quite noticeable will be added to the event loop and run as a task. Let's see it in a demo. This is our function here, running in the browser. We go to the Performance tab and hit record. We record for around five seconds, so we should get around five repeats of this function, and we can see these bumps here.

We can actually see this in the flame chart: these bumps come every second. This is our setInterval, and we can see that it adds a task each time, and the task is something quite noticeable. So we can see everything that happens during the runtime and analyze it for optimization. There's a Summary tab that shows us, for instance, if we look at the whole recording, how long our app was busy scripting versus idle. Or we can look at the call tree: we can look at one task and see what happened during it, or we can look at the whole recording and search across all the calls for something quite noticeable too, and here we can also see some minor GC. That's the gist of profiling applications in the browser.

Let's see how you can do this in Node.js. In Node.js you have the chrome://inspect page, and you have to start your application with the --inspect flag. The app is running, you open the dedicated DevTools for Node, go to the Profiler tab, start profiling, profile for around five seconds again, stop profiling, and we see our bumps here again. It's the same as in the browser: if you know how to optimize in the browser, you can do it in Node.js, and vice versa.

How can this help you in real life? Let's see a real-life example. In an app we built, we used Cesium, which is a 3D visualizer of the globe, and we had to put a lot of entities on this map. This caused the UI to get stuck, so we profiled and found that two functions took a long time to run every frame: the updates of the label and the billboard. We investigated these functions and found that if we add an "update once dirty" flag to the entities, set only when we actually update them, entities that did not get an update won't be processed by these functions. The result: we went from 50 percent of the time spent scripting down to two percent, the main thread was no longer blocked, and the app was saved, so people could interact with it.

To summarize, we saw the event loop and how it manages our main thread, and why we don't want to block it. I can't stress enough the importance of profiling while optimizing your runtime performance, and I'd really like you to try it, learn it, and enjoy it. There's a lot to read about it: on my blog, on the Google web.dev blog, and in lots of other places around the internet.
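For reference, here is a rough reconstruction of the demo code described above. The function name and exact body are assumptions based on the description (an array of a million elements, scheduled every second with setInterval):

```js
// Assumed reconstruction of the demo: a noticeably heavy task
// scheduled every second, so it shows up as regular bumps in the profiler.
function somethingQuiteNoticeable() {
  // Create and fill an array of a million elements.
  const arr = new Array(1_000_000).fill(0).map((_, i) => i * 2);
  return arr.length;
}

// setInterval is a timer; every second it adds a new task to the event loop.
setInterval(somethingQuiteNoticeable, 1000);
```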
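If you want to try the Node.js flow yourself, a minimal setup could look like this (the file name is just an example, and the script body is the same assumed demo function as above):

```js
// app.js — the same heavy task, but running in Node.js.
// Start it with:   node --inspect app.js
// Then open chrome://inspect in Chrome, click "Open dedicated DevTools for Node",
// go to the Profiler tab, record for ~5 seconds, and look for the bumps.
function somethingQuiteNoticeable() {
  const arr = new Array(1_000_000).fill(0).map((_, i) => i * 2);
  return arr.length;
}

setInterval(somethingQuiteNoticeable, 1000);
```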
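And here is the dirty-flag idea from the Cesium example in a generic form. This is my own sketch of the pattern, not Cesium's actual internals; the entity shape and function names are assumptions:

```js
// Sketch of the dirty-flag pattern: only reprocess entities that changed
// since the last frame, instead of updating every entity on every frame.
const entities = [];

function setEntityPosition(entity, position) {
  entity.position = position;
  entity.dirty = true; // mark as needing an update
}

// Called once per frame (e.g. from requestAnimationFrame).
function updateLabelsAndBillboards() {
  for (const entity of entities) {
    if (!entity.dirty) continue; // untouched entities are skipped entirely
    // ...expensive label/billboard recalculation would happen here...
    entity.dirty = false; // clear the flag until the next change
  }
}
```

Since most entities are untouched on a typical frame, the per-frame work drops roughly in proportion to how many entities actually changed, which is what produced the drop in scripting time described above.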