When you implement a resolver in Node.js, it receives four arguments: the root, the arguments, the context, and the info object. The root is the current (parent) object; the arguments are the ones passed to the resolver; the context can include your Node.js request and response, database connections, all those things; and the info object includes the definition of the query that you are computing.
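For reference, here is a minimal sketch of that signature in graphql-js style; the `Query.user` field and the `context.db` helper are hypothetical, only there to show where each argument fits:

```js
const resolvers = {
  Query: {
    // Every resolver receives the same four arguments.
    user (root, args, context, info) {
      // root:    the current (parent) object being resolved
      // args:    the arguments passed to this field, e.g. { id: '42' }
      // context: per-request state (req/res, database connections, ...)
      // info:    the definition/AST of the query being executed
      return context.db.findUser(args.id) // hypothetical lookup
    }
  }
}
```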
Well, keep that in mind for a second. Now, what you can do is take an arbitrary object and JSON-ify it: you can call JSON.stringify on it. If you do that, you will get different JSON depending on the order of the properties. However, there is a module called safe-stable-stringify which, independently of the ordering of the properties, will always generate the same JSON. So what we can do is combine this module with the resolver anatomy (the data on the info object, the root, and all those things) to create a hash key, a cache key, for that specific resolver.
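To make the difference concrete, here is a small sketch; the objects are arbitrary examples:

```js
const stringify = require('safe-stable-stringify')

// JSON.stringify depends on property insertion order...
JSON.stringify({ b: 1, a: 2 }) // '{"b":1,"a":2}'
JSON.stringify({ a: 2, b: 1 }) // '{"a":2,"b":1}'

// ...while safe-stable-stringify sorts the keys, so equal objects
// always serialize to the same string, no matter the ordering.
stringify({ b: 1, a: 2 })      // '{"a":2,"b":1}'
stringify({ a: 2, b: 1 })      // '{"a":2,"b":1}'
```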
Now, how is it implemented? Well, as you can see here, it's pretty simple. We navigate the info object to get the current field selection, and then we create an object including the current resolved object, the arguments, the fields, and some more parameters. It's pretty great, you see: we can compute a cache key for each GraphQL resolver. This is what we call the zero-second TTL: even with no time-to-live at all, we are deduplicating all the resolvers accessing your data.
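A sketch of that key computation could look like the following; the exact fields included and the way the selection is walked are assumptions, not the actual implementation:

```js
const stringify = require('safe-stable-stringify')

// Sketch: build a deterministic cache key for one resolver invocation.
function cacheKey (root, args, info) {
  // info.fieldNodes holds the AST nodes for the current field selection;
  // collect the names of the sub-fields being requested.
  const fields = info.fieldNodes
    .flatMap(node => node.selectionSet ? node.selectionSet.selections : [])
    .map(sel => sel.name && sel.name.value)
    .filter(Boolean)
    .sort()

  // Stable stringify guarantees the same key for the same inputs,
  // independent of property ordering.
  return stringify({
    fieldName: info.fieldName,
    root,
    args,
    fields
  })
}
```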
Adding some caching is not improving much here, because the target services are mostly very cheap: they don't require a lot of CPU to compute, they don't have a database, they don't have anything heavy. However, this changes in case you need more: increasing the TTL here will improve your performance if the target services are slow or not fast enough.

Well, all of this is very good, right? But in-process caches are problematic. We can't really increase the TTL too much, because it's all in process: if the data expires on my node, it's not expiring on the other nodes. So how can we implement that? Well, one of the good solutions is to use something like Redis to implement a shared state, a shared cache, between all the nodes.

Yeah, but we tried that, we implemented it, and it did not work. It did not work mainly because our benchmark ran a hundred GraphQL queries per second, each of which invoked 20 resolvers. It turns out that if you want to fetch that data from the cache, that is 2,000 Redis GETs per second (100 queries times 20 resolvers). And unfortunately, while the nominal round-trip time of Redis is 0.5 milliseconds, the actual round-trip time we saw was 15. So, can't do much.
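For context, the shared-cache pattern we tried is roughly this shape; ioredis and the cachedResolve helper are assumptions for illustration, not the actual implementation:

```js
const Redis = require('ioredis')
const redis = new Redis() // shared by all the nodes

// Sketch: look up a resolver result in the shared cache, falling back
// to the real computation on a miss. Every resolver call costs at
// least one Redis round trip, which is exactly what hurt at scale.
async function cachedResolve (key, ttlSeconds, compute) {
  const hit = await redis.get(key)
  if (hit !== null) {
    return JSON.parse(hit)
  }
  const value = await compute()
  await redis.set(key, JSON.stringify(value), 'EX', ttlSeconds)
  return value
}
```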