The starting point is, of course, the Chrome Memory Profiler, which has three analysis options available. In my experience, the heap snapshot and allocation sampling are the most useful ones, but for two different purposes. The heap snapshot takes a full image of the current memory usage at a given point in time, giving you a breakdown of the types of objects in memory and why they are being retained. Using a heap snapshot, you can look at the different object types, their shallow memory usage, and their retained memory usage, as we said before.
With any snapshot, though, you are missing information about what changes over time, and you cannot see transient peaks of memory usage. If you're interested in memory usage peaks instead, you can use the allocation sampling method. You start it, let it run for some time, and then stop it, and it gives you a graph describing how much memory was allocated during that time and by which functions. This is a really powerful tool for debugging memory usage spikes, but it's less suited for analyzing the total memory footprint of your app at a steady state. And just by using those two tools, you can make really powerful memory usage optimizations.
For example, I can show you how we used them to optimize our app. After taking a heap snapshot of the app, Flux, at a steady state, we can click on the shallow and retained size columns to sort the allocation types by memory usage and hopefully find the biggest offenders and remove them. We check both columns, retained and shallow, so that we can find both large single allocations and smaller allocations that are retaining a large amount of memory. And while looking at that list, there was one thing that stood out immediately.
We had a lot of Map objects, around 10,000, most of them pretty small, apart from the first one, which was taking up around 84 megabytes of RAM. That immediately seemed like a big red flag, since I didn't know that we had such a large map for anything in the app. In the lower part of the profiler UI, we can see why and by what that large Map object was being retained, and it turned out to be a global variable that we were using for subscribing to events. In fact, every time a subscription happened in our app, we were using a useState hook to generate and store a string UID, which was later used to remove the subscription on cleanup. This turned out to be extremely inefficient for memory consumption, as it needed to instantiate the useState hook, store the string, and also keep the closure around that lambda function. By removing that useState hook call and replacing the map that kept those strings with a simple Set data structure, we were able to reduce the memory footprint of subscriptions, which saved up to 50% of RAM in some cases. And this is an example of an obviously memory-inefficient data structure, which was easy to find and optimize.
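The talk doesn't show the actual code, but the before/after pattern can be sketched like this (all names here are hypothetical, and the React wiring is stripped away so only the data-structure difference remains): the old emitter keys callbacks by a generated UID string held in a Map, while the new one lets the callback itself be the identity inside a Set.

```javascript
// BEFORE (sketch): each subscription allocates a UID string and a Map entry;
// in the React version, a useState hook also had to keep that string alive.
class UidEmitter {
  constructor() {
    this.listeners = new Map(); // uid -> callback
    this.nextId = 0;
  }
  subscribe(callback) {
    const uid = `sub-${this.nextId++}`; // string retained for the whole subscription
    this.listeners.set(uid, callback);
    return uid; // the caller must store this to unsubscribe later
  }
  unsubscribe(uid) {
    this.listeners.delete(uid);
  }
  emit(event) {
    for (const cb of this.listeners.values()) cb(event);
  }
}

// AFTER (sketch): the callback is its own key, so a Set suffices.
// No UID string, no extra state holding it.
class SetEmitter {
  constructor() {
    this.listeners = new Set(); // callbacks
  }
  subscribe(callback) {
    this.listeners.add(callback);
    // Return a cleanup closure, e.g. for use as a useEffect cleanup.
    return () => this.listeners.delete(callback);
  }
  emit(event) {
    for (const cb of this.listeners) cb(event);
  }
}
```

In a component, the Set version also removes the need for useState entirely: you subscribe inside useEffect and return the cleanup function it hands back.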
Unfortunately, though, that's not always the case. In fact, this was one of the very few easy things we could optimize in our app. If you look at this profile, you can see how the remaining majority of the memory footprint is dominated by plain objects. And unfortunately, we no longer have a single object to optimize, but rather two million instances of smaller objects that are being kept in memory. The Chrome memory profiler doesn't make this easy to understand, as it doesn't group those objects by type, and there is no way you can scroll through all of them manually and figure out what is going on. Thankfully, though, the Chrome memory profiler can export your heap snapshot to a JSON file, so there is hope for us to perform some statistical analysis on it. And without having to write a lot of code, an importer, and a parser ourselves, there is a fantastic tool for analyzing those files called Memlab.
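To make the "statistical analysis" idea concrete, here is a minimal sketch of aggregating an exported snapshot yourself, before reaching for a dedicated tool. It assumes the standard V8 `.heapsnapshot` JSON layout: `nodes` is a flat integer array whose fields are described by `snapshot.meta.node_fields`, and the `name` field indexes into the `strings` array. The demo snapshot at the bottom is synthetic and much smaller than any real one.

```javascript
// Group heap nodes by type and name, summing shallow (self) sizes,
// so two million small objects collapse into a short ranked list.
function summarizeHeapSnapshot(snapshot) {
  const meta = snapshot.snapshot.meta;
  const fields = meta.node_fields; // layout of each node record
  const stride = fields.length;
  const typeIdx = fields.indexOf('type');
  const nameIdx = fields.indexOf('name');
  const sizeIdx = fields.indexOf('self_size');
  const typeNames = meta.node_types[typeIdx]; // enum for the 'type' field

  const byName = new Map(); // "type:name" -> { count, selfSize }
  for (let i = 0; i < snapshot.nodes.length; i += stride) {
    const type = typeNames[snapshot.nodes[i + typeIdx]];
    const name = snapshot.strings[snapshot.nodes[i + nameIdx]];
    const key = `${type}:${name}`;
    const entry = byName.get(key) || { count: 0, selfSize: 0 };
    entry.count += 1;
    entry.selfSize += snapshot.nodes[i + sizeIdx];
    byName.set(key, entry);
  }
  // Sort descending by aggregate shallow size, like the profiler's
  // shallow-size column, but with every instance of a group combined.
  return [...byName.entries()].sort((a, b) => b[1].selfSize - a[1].selfSize);
}

// Tiny synthetic snapshot for illustration only.
const demo = {
  snapshot: { meta: {
    node_fields: ['type', 'name', 'id', 'self_size'],
    node_types: [['object', 'string'], 'string', 'number', 'number'],
  }},
  strings: ['Map', 'Foo'],
  nodes: [
    0, 0, 1, 100, // object "Map", 100 bytes shallow
    0, 0, 2, 40,  // object "Map", 40 bytes shallow
    0, 1, 3, 16,  // object "Foo", 16 bytes shallow
  ],
};
console.log(summarizeHeapSnapshot(demo)); // "object:Map" ranks first: 2 nodes, 140 bytes
```

For real snapshots you would `JSON.parse` the exported file first; Memlab does this parsing (plus retainer-path analysis) for you, which is why it is the better choice beyond quick one-off aggregations like this.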