All of a sudden it went up by 18%. And now you've got more messages flowing, so you're dealing with more stuff and potentially more concurrency. So you can actually end up increasing your memory consumption even though your performance has gone up. It goes to show that memory consumption and CPU utilization are not exactly orthogonal, or even exactly inversely proportional. They have a very complex relationship.
The next step, I figured, was to see if we could upgrade some of our packages. One of them is graphql-redis-subscriptions, which, as you can see from this diagram, takes the messages coming off Kafka. The diagram also reveals how we set up our graph service: all the GraphQL pods, G1, G2, and G3, are part of the same consumer group. Kafka partitions the messages and sends a subset of them to G1, a subset to G2, and a subset to G3. And that's why we need Redis. A concrete example: say client C2 is subscribed to G1, but G1 never gets the green message. How is G1 going to tell C2, "yo, green message"? That's what we need Redis for: it broadcasts each message to all the relevant graph service pods.
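The fan-out pattern above can be sketched in miniature. A minimal sketch, with an in-memory stand-in playing the role Redis pub/sub plays in production (via graphql-redis-subscriptions); the class and method names here are hypothetical illustrations, not the library's API:

```typescript
type Handler = (message: string) => void;

// Stand-in for Redis pub/sub: broadcasts every published message to all subscribers.
class Broadcaster {
  private handlers: Handler[] = [];
  subscribe(handler: Handler): void {
    this.handlers.push(handler);
  }
  publish(message: string): void {
    for (const h of this.handlers) h(message);
  }
}

// Stand-in for a GraphQL pod (G1, G2, G3). Kafka delivers only a subset of
// messages to each pod, but every pod re-publishes what it receives through
// the broadcaster, so every pod still sees every message and can notify its
// own subscribed clients (C1, C2, ...).
class GraphPod {
  seen: string[] = [];
  constructor(public name: string, private bus: Broadcaster) {
    bus.subscribe((msg) => this.seen.push(msg));
  }
  // Called when Kafka hands this pod a message from one of its partitions.
  onKafkaMessage(msg: string): void {
    this.bus.publish(msg);
  }
}

const bus = new Broadcaster();
const g1 = new GraphPod("G1", bus);
const g2 = new GraphPod("G2", bus);

// Kafka delivers "green" only to G2, yet G1 still sees it via the broadcast.
g2.onKafkaMessage("green");
console.log(g1.seen); // ["green"]
```

The key design point: the consumer group gives you load balancing across pods, and the pub/sub layer restores the broadcast semantics that subscriptions need.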
So then the next thing I did: we have this function I showed you before that converts an object to snake case, which basically takes all the key names on the incoming Kafka messages and converts them. And I asked myself: what are those key names? We're running a business, so our data is key-value pairs, and the names of the keys are things like property ID, listing ID, primary property ID, property photo ID, and so on. We have to keep converting those keys to snake case in order to send out the messages. But we only have maybe, at most, 300 of those words. Once you've converted a word like listing ID to snake case, why expend the computational power to do it again? So, long story short: let's memoize the snake-case conversion. For any key we attach to an object, if we've already computed its "snake-caseification," so to speak, we just grab it from the cache. And lo and behold, lodash, old and venerable as it may be, came in useful with its memoize function.
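A minimal sketch of that memoization, assuming camelCase input keys; in our code the same effect comes from wrapping the conversion in lodash's memoize, so the helper names below are illustrative, not the production code:

```typescript
// Convert a camelCase key like "listingId" to "listing_id".
function toSnakeCase(key: string): string {
  return key.replace(/([a-z0-9])([A-Z])/g, "$1_$2").toLowerCase();
}

// Generic single-argument memoizer: compute each result once, then serve
// every repeat call from the cache.
function memoize<R>(fn: (arg: string) => R): (arg: string) => R {
  const cache = new Map<string, R>();
  return (arg: string) => {
    if (!cache.has(arg)) cache.set(arg, fn(arg));
    return cache.get(arg)!;
  };
}

const snakeCaseKey = memoize(toSnakeCase);

// With at most a few hundred distinct keys, the cache stays tiny and every
// repeat conversion is a single Map lookup instead of a regex pass.
console.log(snakeCaseKey("listingId"));         // "listing_id"
console.log(snakeCaseKey("primaryPropertyId")); // "primary_property_id"
```

This works precisely because the key space is small and fixed: the cache is bounded by the roughly 300 distinct key names, so it can never grow without limit.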