In Memory of Travails


Two aspects of resolvers have an outsized influence on their performance: the size of the execution context and the way we compute their value. In the Node.js implementation of GraphQL, promises wrapping primitive values are especially disruptive, since they add a large computing overhead. The context size sets a memory usage baseline that can rise very quickly with even small additions to the context when there are many concurrent contexts. Execution can also create temporary objects, increasing memory usage. Often-run resolvers, such as those responsible for filling out large arrays of objects, can become performance bottlenecks.

At Auction.com, our search results page (SRP) requests up to 500 items of roughly 80 fields each. The query resolving these fields suffered from high latency. We shall examine the tools to instrument our code and identify memory usage and CPU utilization bottlenecks.

Our realtime elements (e.g. realtime updates to the status of currently viewed properties) are implemented by translating Kafka messages into GraphQL updates. We shall present the tools and procedures to reduce memory usage and CPU usage when fanning out such messages.

This talk was presented at JSNation US 2024.

FAQ

Auction.com experienced frequent restarts in their Kubernetes cluster due to a 'reached heap limit allocation failed' error, indicating that Node.js was out of memory.

Auction.com tested memory consumption by running Kafka, Graph, and a client locally, connecting 4000 WebSockets, and sending messages through the setup to observe performance.

They used JavaScript proxies to create lazy-loaded backends, reducing memory consumption by only creating backend objects when needed.

They replaced Lodash calls with native JavaScript iterators and used memoization to avoid redundant conversions, significantly improving performance.

Auction.com introduced auto-scaling to handle memory spikes and nightly restarts to address a slow memory leak, ensuring the system remained stable.

Upgrading packages like graphql-redis-subscriptions allowed Auction.com to benefit from upstream optimizations, though the improvements were minor.

They used Google Chrome DevTools heap snapshots to identify and remove unnecessary memory allocations, such as location objects in GraphQL ASTs.

One key lesson was to carefully manage what is attached to the GraphQL context, ensuring only necessary data is included to optimize memory usage.

They modernized their code by leveraging native JavaScript features for Node.js 20, reducing reliance on polyfills and improving execution efficiency.

Memoizing snake case conversions led to a 3X performance increase, smoothing out message processing and improving overall system responsiveness.

Gabriel Schulhof
28 min
21 Nov, 2024

Video Summary and Transcription
Hello, my name is Gabriel. I work at auction.com and I'm going to be talking about how we improved the performance of GraphQL resolvers for our GraphQL service. We had a problem with our subscription deployment, where we were experiencing a high number of restarts due to memory allocation failures. This prompted us to investigate and optimize the memory consumption of our resolvers. To assess the performance, we set up a local environment with Kafka, Graph, and a client that connected 4000 WebSockets to Graph. After running the test, we found that we were only able to process and distribute 16 messages to our clients. The memory consumption graph showed peaks and valleys as messages were delivered. Three distinct phases were observed: idle, sockets connected with no messages, and messages being processed. We decided to optimize the context, which contains request-specific information and backend details. Since subscriptions primarily involve sending Kafka messages, we realized that the message itself often has all the necessary information. Therefore, we only create backends when a call is made to them. We optimized our backend creation process using the Proxy object, which allows us to create backends lazily, only when they are accessed. This resulted in less memory consumption without changing the code or the schema. The lower memory consumption is evident in the second phase of the recording, where the plateau formed from having multiple contexts is significantly lower. Most of the savings were achieved by reducing temporary objects and using native iterators instead of Lodash calls for converting key names to snake case. Performance then increased by 18%, which in turn increased memory consumption because more messages were in flight. Upgrading to newer versions of graphql-redis-subscriptions did not have a significant impact on memory usage. However, optimizing the conversion of key names to snake case by using memoization improved computational efficiency. Our performance significantly improved after memoizing the snake case conversion. However, memory consumption remained high. To address the memory leak, we introduced auto-scaling and restarted the service every night. Additionally, we optimized the code generation process to improve memory consumption. We explored using heap snapshots in Chrome DevTools to analyze and reduce memory consumption. By identifying unnecessary objects and removing them, we were able to free up memory and improve performance. We patched our subscription transport to skip location objects in the GraphQL AST, and we made the data loaders we use to avoid N+1 queries lazy as well.

1. Introduction

Short description:

Hello, my name is Gabriel. I work at auction.com and I'm going to be talking about how we improved the performance of GraphQL resolvers for our GraphQL service. We had a problem with our subscription deployment, where we were experiencing a high number of restarts due to memory allocation failures. This prompted us to investigate and optimize the memory consumption of our resolvers. To assess the performance, we set up a local environment with Kafka, Graph, and a client that connected 4000 WebSockets to Graph. After running the test, we found that we were only able to process and distribute 16 messages to our clients.

Hello, my name is Gabriel. I work at auction.com and I'm going to be talking about how we improved the performance of GraphQL resolvers for our GraphQL service.

So the problem that we found was that in the case of our subscription deployment, we were getting a lot of restarts. So this is the number of restarts and as you can see, we're on the order of like 200, 300 restarts in our Kubernetes cluster and all the restarts were happening because of this magic error message that said reached heap limit allocation failed. And this is basically the JavaScript engine telling Node.js that it's out of memory. Node.js dies and game over. You have to restart the pod. And so we started looking into how we can improve the memory consumption of our resolvers.

So then let's see a little bit of background. How do we GraphQL at auction.com, right? So for our subscriptions, we use Kafka topics and GraphQL subscriptions, with Redis for PubSub. So in comes a Kafka message, it goes into Redis, it comes out of Redis where it needs to, and then it goes off to the clients over WebSockets.

So to do the test for memory consumption and performance in general, what I did was I ran Kafka locally, I ran Graph locally, and I ran a client locally that would produce a large number of WebSockets and connect them to Graph. So I connected 4000 WebSockets and I used kcat to send messages to the local Kafka broker, which sent them to Graph, which sent them to Redis, which sent them to those 4000 sockets. So that was the setup. And this was the initial result in, what is it? Let's see, 250 seconds, that's four minutes and 10 seconds, I guess. Yeah, something like that. We were able to process and distribute to our clients a whopping 16 messages.
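
A minimal sketch of what such a load-test client might look like. All names, the endpoint, and the subscription query are placeholders, not Auction.com's actual code; it assumes the `ws` package and the subscriptions-transport-ws wire protocol (connection_init / start messages) mentioned later in the talk.

```js
const WebSocket = require('ws');

const N_SOCKETS = 4000;
const GRAPH_URL = 'ws://localhost:4000/graphql'; // assumed local graph endpoint

for (let i = 0; i < N_SOCKETS; i++) {
  const ws = new WebSocket(GRAPH_URL, 'graphql-ws'); // subprotocol used by subscriptions-transport-ws
  ws.on('open', () => {
    // Handshake, then start one subscription per socket.
    ws.send(JSON.stringify({ type: 'connection_init', payload: {} }));
    ws.send(JSON.stringify({
      id: String(i),
      type: 'start',
      payload: { query: 'subscription { propertyUpdated { property_id status } }' },
    }));
  });
  ws.on('message', () => { /* count delivered messages here */ });
}
```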

2. Optimizing the Context

Short description:

Yay. The memory consumption graph showed peaks and valleys as messages were delivered. Three distinct phases were observed: idle, Sockets connected with no messages, and messages being processed. We decided to optimize the context, which contains request-specific information and backend details. Since subscriptions primarily involve sending Kafka messages, we realized that the message itself often has all the necessary information. Therefore, we only create backends when a call is made to them.

Yay. So as you can imagine, that's not exactly stellar performance and you can see the memory consumption here. It has all these like peaks and valleys, peaks and valleys. So you can see that for any given message, it would allocate a whole bunch of memory and then finally deliver the message and then do that 15 more times. And you can see down the green little peaks at the bottom where the X axis is, that's when messages were actually delivered.

A little bit more about the experimental setup. So you can see that there are like three distinct phases to the memory consumption graph. One of them is when the whole process is just idle. That's just, you know, so you can have like a baseline of memory consumption. And then the second one is where we have 4000 Sockets connected, but there's absolutely no messages. So there's zero traffic, just the Sockets. And then finally, the last phase of the graph is where you have the messages coming in and the graph attempting to process them.

One of the things that we immediately thought about doing was attacking the context, so to speak. So the context, as you may or may not know, is the thing that every GraphQL request or subscription or what have you has in order for it to execute in a request specific way. So anything that is specific to that request, such as user credentials, the request itself, like what is it that the user wanted, all of that is attached to the context. Right. And in our case, since we might be accessing things behind the graph server, like backends, we have those things attached to the context as well. Right. And each backend has like, you know, five different methods, you know, get, put, post, delete and patch. And so for each of those backends and for each of those methods, we had a little wrapper that would, you know, encapsulate the backend specific details of that backend. And so, you know, things like URL and so forth. So you could just, you know, call the backend, call the method, you know, and just get the data without having to put the things like the server or the domain in like a million places in your code. And put it in one place and that's it. So if we change the backend, we could just change it in one place. But the problem is this all requires the wrapper objects to store on the context. And so we were like, okay, well, subscriptions, they hardly make any backend calls because they are really just Kafka messages being sent out. There's already all the information that people might want in the message itself. They very rarely access backends. And so, and even in the case of like our request response service, you know, you're not going to need like 40 backends for every single request, right? So we were like, okay, so let's keep the syntax that we have for accessing our backends, but let's not create the backends unless somebody actually attempts to make a call to those backends.
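
To make the "before" picture concrete, here is a rough sketch of a context that eagerly builds a wrapper for every backend on every request. The backend names, URLs, and helper are illustrative only, and it assumes Node's global fetch (Node 18+).

```js
// Illustrative only: every request context eagerly gets a wrapper per backend,
// each exposing get/put/post/delete/patch with the backend's base URL baked in.
function makeBackend(baseUrl) {
  const call = (method) => (path, options = {}) =>
    fetch(`${baseUrl}${path}`, { ...options, method });
  return {
    get: call('GET'),
    put: call('PUT'),
    post: call('POST'),
    delete: call('DELETE'),
    patch: call('PATCH'),
  };
}

function createContext(req) {
  return {
    user: req.user,
    // Dozens of these existed; each one was created on every request,
    // even if the request never touched it.
    listings: makeBackend('https://listings.internal'),
    properties: makeBackend('https://properties.internal'),
    bids: makeBackend('https://bids.internal'),
  };
}
```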

3. Optimizing Backend Creation

Short description:

We optimized our backend creation process using the proxy object, which allows us to create backends lazily only when they are accessed. This resulted in less memory consumption without changing the code or the schema.

And so, and even in the case of like our request response service, you know, you're not going to need like 40 backends for every single request, right? So we were like, okay, so let's keep the syntax that we have for accessing our backends, but let's not create the backends unless somebody actually attempts to make a call to those backends.

And so we use this wonderful, recently developed, well, not so recently, but relatively recently developed JavaScript object called the Proxy, right? And so we create all our backends lazily now, meaning that the context pretends that all the backends exist, but it doesn't actually create the wrapper for any given backend until somebody attempts to retrieve that property. And so the good thing was that we could keep the code the way it was for all our backends and for our entire schema. We didn't have to change any of the resolvers, but all of a sudden there was less memory consumption.
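
A minimal sketch of the lazy variant, reusing the hypothetical makeBackend helper from the earlier sketch: the context pretends every backend exists, but the wrapper is only built on first property access and then cached, so resolver code like context.backends.listings.get(...) stays unchanged.

```js
function createLazyBackends(configs /* { name: baseUrl } */) {
  const cache = {};
  return new Proxy({}, {
    get(_target, name) {
      if (!(name in cache)) {
        cache[name] = makeBackend(configs[name]); // only pay the cost on first use
      }
      return cache[name];
    },
  });
}

function createContext(req) {
  return {
    user: req.user,
    backends: createLazyBackends({
      listings: 'https://listings.internal',
      properties: 'https://properties.internal',
      bids: 'https://bids.internal',
    }),
  };
}
```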

4. Memory Consumption and Kafka Message Conversion

Short description:

The lower memory consumption is evident in the second phase of the recording, where the plateau formed from having multiple contexts is significantly lower. Most of the savings were achieved by reducing temporary objects and using native iterators instead of Lodash calls for converting key names to snake case. The performance degradation was minimal, with a slight decrease in the number of messages processed, but the bottom line was reduced memory usage.

So here you go. This is the lower memory consumption, as you can see. I'll point your attention to the second phase of this recording, which is the idle plus 4K sockets. You can see that, because we have fewer backends created, this plateau that forms from having all those contexts around is now much lower. This is actually just to give you a sneak preview of the rest of the presentation.

This is actually where most of the savings ended up being because measuring how much you can save while there's all that activity going on of receiving incoming messages, parsing them, constructing the response, etc. It's much harder to gauge what it is that you can shave off that process because there's so many temporary objects going on. And so, okay, our performance degraded ever so slightly. We went from 16 messages to 13 messages, but that's hardly significant. But the bottom line is we were using less memory. So we were hoping for fewer restarts, but it was still in its infancy, this kind of effort.

What else did we do? Well, this is a little bit of insight into what we do with those Kafka messages. On the left-hand side, you can see the Kafka message. This is how we receive it. And the only thing we have to do in order to send it out and completely resolve the subscription is to just convert all of those key names to snake case. That's it. And so our implementation for doing that looked like it does on the left-hand side here. We had a lot of Lodash calls to accomplish this. And this is back from historic days when the JavaScript engine didn't have all those wonderful iterators over objects and over arrays. And so Lodash is great and all, and it did serve its purpose. But I figured it might be faster if we just let the engine do its thing and use those funky new just-in-time compiled iterators for arrays and for objects. And so I converted the code. I kept some of the Lodash stuff, but most of it is just using the native and standard methods now, the iterators. And so, oops, I went back a little bit. Yeah, so this was the result. Okay, so the plateau is still there to some extent. The memory consumption went back up, but that's probably because so did the performance. All of a sudden it went up by 18%.
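
A sketch of what a Lodash-free recursive conversion can look like, assuming a simple camelCase input; the toSnakeCase helper and the function name are illustrative, not the actual production code.

```js
const toSnakeCase = (key) =>
  key.replace(/([a-z0-9])([A-Z])/g, '$1_$2').toLowerCase();

function convertKeysToSnakeCase(value) {
  if (Array.isArray(value)) {
    return value.map(convertKeysToSnakeCase);
  }
  if (value !== null && typeof value === 'object') {
    // Native Object.entries / fromEntries instead of lodash mapKeys/mapValues.
    return Object.fromEntries(
      Object.entries(value).map(([key, v]) => [toSnakeCase(key), convertKeysToSnakeCase(v)])
    );
  }
  return value; // primitives pass through untouched
}
```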

5. Optimizing Performance and Key Conversion

Short description:

All of a sudden, the performance increased by 18%, which in turn increased memory consumption as more messages were in flight. Upgrading to newer versions of graphql-redis-subscriptions did not have a significant impact on memory usage. However, optimizing the conversion of key names to snake case with memoization improved computational efficiency.

All of a sudden it went up by 18%. And so, you know, you've got 19 messages now, so you're dealing with more stuff and so potentially more concurrency. And so you can actually end up increasing your memory consumption, even though your performance has gone up. Goes to show you that memory consumption and the CPU utilization are not exactly orthogonal or even exactly inversely proportional. They have a very complex relationship.

The next step, I figured, okay, so let's see if we can upgrade some of the packages. And one of the packages we have is GraphQL Redis subscriptions, which basically you can see from this diagram takes the stuff from Kafka. And this also kind of reveals how we set up our graph service. You can see that each, like all the GraphQL pods, G1, G2 and G3, they're all part of the same consumer group. So Kafka basically partitions the messages and sends a subset of them to G1, a subset of them to G2 and a subset of them to G3. And so that's why we need Redis. Because like, okay, what if somebody is subscribed to G1, let's say C1 is subscribed to G1, but I'm sorry, actually a concrete example would be C2 is subscribed to G1, right? But G1 never gets the green message, right? And so how is G1 going to tell C2 that, you know, yo, green message? And so that's what we need Redis for. Redis will tell, will sort of broadcast the message to all the relevant graph services.
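
A sketch of that fan-out path. The talk only says Kafka feeds graphql-redis-subscriptions, which uses Redis to broadcast to every graph pod regardless of which pod consumed the message; the use of kafkajs, the topic, trigger, and field names here are all assumptions.

```js
const { Kafka } = require('kafkajs');
const { RedisPubSub } = require('graphql-redis-subscriptions');

const pubsub = new RedisPubSub({ connection: { host: 'localhost', port: 6379 } });
const kafka = new Kafka({ brokers: ['localhost:9092'] });
const consumer = kafka.consumer({ groupId: 'graph' }); // all graph pods share this group

async function start() {
  await consumer.connect();
  await consumer.subscribe({ topics: ['property-updates'] });
  await consumer.run({
    eachMessage: async ({ message }) => {
      // Publish through Redis so every pod, not just the one that consumed
      // the partition, can deliver to its own connected WebSocket clients.
      await pubsub.publish('PROPERTY_UPDATED', JSON.parse(message.value.toString()));
    },
  });
}

// Subscription resolver side:
const resolvers = {
  Subscription: {
    propertyUpdated: { subscribe: () => pubsub.asyncIterator('PROPERTY_UPDATED') },
  },
};
```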

So then the next thing I did was, okay, we have this thing, we have this function that I showed you before, convert the object to snake case, which basically just takes all the, what do you call that? The key names on the incoming Kafka messages and just converts them. I was like, you know, what are those key names? Right. Like we're running a business. So, you know, we have data that has that are key value pairs, you know, and the names of the keys are things like, you know, property ID, listing ID, primary property ID, property photo ID and that kind of stuff. So those words, property ID and listing ID and so forth, we have to keep converting those to snake case in order to send out their messages. But I'm like, well, we only have maybe like, I don't know, at most like 300 of those words, right? And after you've converted like the word listing ID to snake case, why would you expend the computational power to do it again when you've already done it, right? So long story short, let's memoize this snake case. So that all the keys that we ever attached to objects, you know, if we've already computed the snake caseification, so to speak, of a key like listing ID, now let's just grab it from the cache. And lo and behold, lodash, old and venerable as it may be, came in useful with the memoize function.
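
A short sketch of the memoization step: with only a few hundred distinct key names, each conversion is computed once and then served from Lodash's memoize cache. The helper name is illustrative.

```js
const memoize = require('lodash/memoize');
const snakeCase = require('lodash/snakeCase');

// Per-key conversion, cached by key name.
const toSnakeCase = memoize(snakeCase);

toSnakeCase('listingId'); // computed: 'listing_id'
toSnakeCase('listingId'); // cache hit, no recomputation
```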

6. Optimizing Key Name Conversion

Short description:

We decided to optimize the conversion of key names to snake case by implementing memoization using the lodash memoize function.

So we're good to go. So then the next thing I did was, okay, so we have this thing, we have this function that I showed you before, convert the object to snake case, which basically just takes all the, what do you call that? The key names on the incoming Kafka messages and just converts them. I was like, you know, what are those key names? Right. Like we're running a business. So, you know, we have data that has that are key value pairs, you know, and the names of the keys are things like, you know, property ID, listing ID, primary property ID, property photo ID and that kind of stuff. So those words, property ID and listing ID and so forth, we have to keep converting those to snake case in order to send out their messages. But I'm like, well, we only have maybe like, I don't know, at most like 300 of those words, right? And after you've converted like the word listing ID to snake case, why would you expend the computational power to do it again when you've already done it, right? So long story short, let's memoize this snake case. So that all the keys that we ever attached to objects, you know, if we've already computed the snake caseification, so to speak, of a key like listing ID, now let's just grab it from the cache. And lo and behold, lodash, old and venerable as it may be, came in useful with the memoize function.

7. Improving Performance and Memory Consumption

Short description:

Our performance significantly improved after memoizing the snake case conversion. However, memory consumption remained high. To address the memory leak, we introduced auto-scaling and restarted the service every night. Additionally, we optimized the code generation process to improve memory consumption.

And drumroll please, our performance went up from that 18% improvement to about 3X, just because we memoized the snake case conversion. So of course, memory consumption, not so much, we're still kind of high, but at least we've got good performance now, right? And you can see the graph, the last portion of the graph where we're processing messages is a lot smoother now, like you don't have these hiccups that you had before, because now you're actually processing messages.

So, okay, we got performance; memory consumption, not so much, you know, the quest continues. So then, one of the things that we did had completely nothing to do with GraphQL, right? We just looked at the service and we said, well, you know, we've got a slow memory leak if we leave the service up and running for very long, and we've got these memory spikes which end up killing our pods. So what to do, what to do? Well, why don't we introduce auto-scaling, right? Because we hadn't at that point, we just had, I think, like 25 pods, which is a lot, and still they were restarting.

But if there was any incoming spike, they wouldn't react at all. They would just die. And so I figured, okay, well, we have kind of an early indication that there's going to be an incoming spike because it does build up over several minutes. So I'm like, okay, let's try and see if we can add an HPA to it, which will start new pods when the memory consumption starts going up. And so this is us doing that. And for the memory leak, we were like, okay, well, we're just going to sledgehammer the whole thing and just restart it every night. Because why not? It's not really going to hurt our clients if we restart at 2 a.m. And besides, our client software, the other side of these subscriptions, is configured to reconnect a few seconds later.

So they're not going to miss much. And there's no auctions going on of any kind at 2 a.m. So, you know, they're not going to miss the closing of an auction, or they're not going to miss being able to tell if somebody bid or something like that. I mean our customers. So we should be okay to restart at 2 a.m. So that's how we sort of, quote unquote, addressed the memory leak. All right, back to trying to save memory, so that we don't have memory consumption problems. One of the things that we had, again historically, is that the code that we were generating was essentially made for the browser, because this GraphQL service evolved from a service that would serve front-end components. And so we have this convert-TypeScript-to-JavaScript step followed by a package-with-Babel step. And that step was configured for the lowest common denominator browser. And so it wouldn't do things like native async await.

8. Optimizing Code Generation and Memory Consumption

Short description:

We optimized code generation to target the native JavaScript engine, resulting in improved performance. Additionally, we changed the snake case conversion code to iterate over and mutate arrays in place, reducing memory consumption and improving performance.

A lot of the iterators it would just re-implement and polyfill. And so I was like, well, but wait a minute, we're not working with every browser. We don't need the lowest common denominator. We know exactly what we're working with, and that's Node 20. Right. So I figured, okay, let's add a fork into our code generation that says, are you interested in running on Node 20? Well, gee flipping whiz, then maybe we should generate proper code that has all the support of the native JavaScript engine. And we did that, and when we did, performance went up, let's see, from 3.3 to 3.6, so it was good. You can see that the curve is now even smoother than it was before. So before it had these tall peaks and valleys, but now it's a lot smoother. So it's a bit of an improvement. But unfortunately the height of the curve is still pretty much the same. So memory consumption again, with the performance and the memory consumption in this case being orthogonal, unfortunately. But now we're at least a little bit better, and the code is a lot cleaner, and, you know, we're modernizing.
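
A sketch of that fork in the build config, under the assumption that the pipeline uses @babel/preset-env plus @babel/preset-typescript; the env flag name and the browser target are made up for illustration.

```js
// babel.config.js (illustrative): when building for the server, target Node 20
// so preset-env keeps native async/await and iterators instead of polyfilling.
module.exports = (api) => {
  const forNode = api.env('node20'); // hypothetical env flag for the server build
  return {
    presets: [
      ['@babel/preset-env', forNode
        ? { targets: { node: '20' } }        // emit modern, native code
        : { targets: '> 0.5%, not dead' }],  // old browser-oriented target
      '@babel/preset-typescript',
    ],
  };
};
```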

All right. So let's see, what else can we do? So with convert object to snake case, I went back and I did another iteration on it to improve it some more, in the hopes that we would create fewer temporary objects, etc., etc. And so I figured, you know, whenever you call .map, you're essentially creating a new object, right, and you're throwing away the old object. Why, though? In an array you can just iterate over the existing array, and we don't have any issues with re-entrancy, so if you mutate the array in place, nobody's going to complain. Right. So I changed the code to basically go from map to a for loop iterating over and mutating the array in place, instead of creating a new array. For objects it's a little harder, because it is the key that you're modifying; in the case of an array, the indices 0, 1, 2, 3, you're not changing those, so you can modify in place. For an object, I mean, you could, but it's a lot more error prone. So I just kept that as it was, you know, create a new object and attach it. And lo and behold, the memory consumption is much smoother now, a lot fewer peaks and valleys, and the performance is crazy up, it went up from 3X, and by the way, the performance is relative to the first graph that I showed you.
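
Extending the earlier illustrative sketch along those lines: arrays are rewritten in place with an index loop (the indices never change, so no new array is needed), while objects still get a fresh copy because it is the keys themselves that change.

```js
function convertKeysToSnakeCase(value) {
  if (Array.isArray(value)) {
    for (let i = 0; i < value.length; i++) {
      value[i] = convertKeysToSnakeCase(value[i]); // mutate in place, no new array
    }
    return value;
  }
  if (value !== null && typeof value === 'object') {
    const out = {}; // keys change, so a new object is safer here
    for (const [key, v] of Object.entries(value)) {
      out[toSnakeCase(key)] = convertKeysToSnakeCase(v); // toSnakeCase from the earlier sketch
    }
    return out;
  }
  return value;
}
```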

9. Using Heap Snapshots for Memory Optimization

Short description:

We explored using heap snapshots in Google Dev tools to analyze and reduce memory consumption. By identifying unnecessary objects and removing them, we were able to free up memory and improve performance.

So we're up to like 6X the performance that we had at the beginning of this journey. Right. But again, with the memory consumption, as you can see, the height of that last portion is pretty much the same. It's a lot smoother, but the height is the same. So, all right. Let's try and really, really push this down now. Right. And so what can we do?

One of the things we can do is use the DevTools from Google Chrome to take these things called heap snapshots, and these things are amazing. Like, I love these heap snapshots, not because they tell you how many objects are allocated at the time the snapshot was taken, but because you can take two heap snapshots at different points in the execution and compare them, and it'll actually diff the objects by object type. And so that's amazing, because you can tell, like, I went from being completely idle to attaching these 4000 sockets, and all of a sudden there's all these objects there. I mean, do I really need them? Right. So you can see, for example, here we have plus 261,968 of these Location objects. Like, what is a Location object? Why is it there? What is it doing? And I looked at it, and it's like, oh, this tells me what row and what column this token is on. And I'm like, well, I'm not writing a syntax highlighting app. I am doing GraphQL execution, so I don't care about location. So let's get rid of location and see if that'll save us some memory. Okay, how do we do that? Well, if you look at GraphQL, when you construct the AST from the incoming query, you have this option called noLocation, which says construct the AST as you would before, but don't attach any location data. I'm like, oh yes, let's use this option immediately. And so that's exactly what we did.
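
A minimal demonstration of that option in graphql-js; the query string is just an example.

```js
const { parse } = require('graphql');

const query = 'subscription { propertyUpdated { property_id status } }';

const withLoc = parse(query);                         // default: every node carries loc
const withoutLoc = parse(query, { noLocation: true }); // skip location bookkeeping

console.log(withLoc.definitions[0].loc !== undefined); // true
console.log(withoutLoc.definitions[0].loc);             // undefined
```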

First, I created a patch for subscriptions-transport-ws, which is what we use. Yes, yes, I know it's an old package. But we have good reasons for using it: we mess around with the actual WebSockets protocol to do better keep-alive and that kind of thing. So we're kind of stuck with that, but it works for us.

10. Optimizing Location Object and Data Loaders

Short description:

We patched our subscription transport to skip location objects, reducing memory consumption. We also made the data loaders we use to avoid N+1 queries lazy, improving efficiency.

Anyway, so I just basically patched it to say noLocation: true. Like, oh yeah. And then what happened? So performance stayed about the same. And now let's see if I can do a quick animation here. There it is. And there it is. So as you can see, the memory consumption in the third phase went down just a tad bit there. So, you know, it kind of did its job, it helped us out a little bit. Let's see if there's anything else we can shave, so to speak.

Another thing: I went back to the context, like, is there anything else that's nice and juicy that we can shave off there? Well, in addition to the backends where we call things directly, you also have these things called data loaders, which are also part of GraphQL, and they basically allow you to avoid N plus one queries by sort of accumulating individual pieces of information that you need. So for example, let's say you have to retrieve property IDs separately from listing IDs. One listing could be several properties, right? And so for each of those properties, you would normally have to call an endpoint saying, give me the details of this property. And so you would call that backend as many times as you have property IDs. But if that service provides another endpoint that allows you to retrieve all the property IDs for any given listing ID, then you would use a data loader to accumulate all these property IDs that the user is asking for and make only one request to the backend instead of N requests, and then you take the responses and you sort of spread them out over the GraphQL response. And so we have a lot of these data loaders; we actually have more data loaders than we have backends, because for any given backend you can have a data loader for two different endpoints. And we also have these things called primers, which will do the hard work of setting up the data loader for retrieving the data in case the data is not cached. But all of these things are attached to the context, and they were at this point being attached unconditionally. So all right, let's roll out the proxies again and see if we can clean this up and make this lazy, and I did.
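
A sketch of the data loader idea using the dataloader package; the endpoint, field names, and the backends wrapper are assumptions carried over from the earlier sketches. N property lookups within one request collapse into a single batched backend call, and the loader itself can be attached to the context lazily with the same Proxy trick as the backends.

```js
const DataLoader = require('dataloader');

function createPropertyLoader(backends) {
  return new DataLoader(async (propertyIds) => {
    // One request for the whole batch instead of one per property.
    const rows = await backends.properties
      .get(`/properties?ids=${propertyIds.join(',')}`)
      .then((res) => res.json());
    const byId = new Map(rows.map((row) => [row.property_id, row]));
    // DataLoader requires results in the same order as the requested keys.
    return propertyIds.map((id) => byId.get(id) ?? null);
  });
}
```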
