And this small layer is actually part of the Yoga Core. I want to thank here Ada, who made this all possible and came up with the initial idea. So, with this, by building the GraphQL engine and the HTTP server around those parameters, we got something that actually runs everywhere.
So, another part of fitting into an existing stack that was really important to us is that schema building should be possible in any way. Yoga should not be opinionated about how you build your schema. In fact, we ship a utility for building an SDL-first schema, but it's not mandatory, and if you don't use it, it won't be in your code. If you, for example, deploy Yoga to Cloudflare Workers, you will bundle your code, and we made sure that Yoga supports tree shaking: if you don't use the utilities, they won't be part of the bundle. You can choose your own preference, whether you're doing SDL-first, using GraphQL Tools or GraphQL Modules if you're more in the enterprise sector, or whether you want to go code-first with GraphQL.js (not recommended), Pothos, or gqtx. It's your choice and your schema; Yoga accepts any GraphQL schema.
Then, the next thing we wanted to make sure of is that Yoga is fully extensible. With Yoga version 2, we already made sure that parsing, validation, execution, and subscribing are completely extensible, thanks to the Envelop engine. But for Yoga version 3, we wanted to go a step further. Instead of only allowing you to extend the GraphQL-specific parts, we also wanted to allow taking full control of the HTTP flow. That means hooking into routing, request parsing, and response building, in addition to the existing hooks for parsing, validation, and execution. From that, we were able to build very powerful plugins, some of which I'm going to showcase now.

One that we're very proud of is the response cache. Basically, it's a cache for GraphQL operations: if you execute a query operation with the same variables over and over again, you can cache the result, either globally or per user session. Then, when the GraphQL operation is executed a second or third time, the result is served from the cache instead of re-executing the whole GraphQL operation. That can drastically reduce your server load. You can store the GraphQL execution results in memory, or, if you have multiple server replicas, use Redis or Upstash as a shared cache between the instances.
Another thing is Automatic Persisted Queries. That was first made popular by Apollo, and basically it's a protocol for registering recurring GraphQL operations on a server in order to reduce client-to-server traffic. A client can send an operation and register it, and then, if it needs to re-execute that GraphQL operation, it only sends a hash on the second try. The server already knows the operation from that hash, so the traffic from client to server stays very low. What we saw is that as your GraphQL operations get really big, the biggest bottleneck in the whole client-to-server scenario is usually the client uploading the GraphQL document to the server. And as before, you just plug in the plugin and that's it.
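The handshake behind Automatic Persisted Queries can be sketched like this. The names here are illustrative (in the real protocol the hash travels in the request's `extensions` field), but the flow is the same: send the hash first, and fall back to sending the full document once on a miss:

```typescript
import { createHash } from "node:crypto";

// SHA-256 of the operation document is the registration key.
const sha256 = (doc: string): string =>
  createHash("sha256").update(doc).digest("hex");

class APQStore {
  private known = new Map<string, string>(); // hash -> full document

  // First try: the client sends only the hash. On a miss, the server
  // answers with a "persisted query not found" error, and the client
  // retries once with both the hash and the full document.
  lookup(hash: string): string | undefined {
    return this.known.get(hash);
  }

  register(hash: string, document: string): void {
    if (sha256(document) !== hash) {
      throw new Error("provided hash does not match document");
    }
    this.known.set(hash, document);
  }
}
```

After the one-time registration, every subsequent request for that operation carries only a 64-character hex hash instead of a potentially multi-kilobyte document.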
Another thing is Persisted Operations, which is a bit similar to Automatic Persisted Queries but adds another security layer: whereas APQ lets clients register queries ad hoc to save bandwidth, persisted operations only allow executing specific operations that were registered ahead of time. Any GraphQL operation that is not in the persisted operations store will be rejected when it tries to get executed. This is a pattern we usually use in GraphQL production deployments, because there we usually want to prevent arbitrary queries from clients we don't know. Another thing that we found very interesting is: what if we take a GraphQL API or schema and convert it into a REST API? Today we have a lot of debates around GraphQL versus REST and which one is better. We think both have their use cases.
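The persisted-operations check described above boils down to an allow-list resolved at request time. This is a hypothetical sketch, not Yoga's plugin API; the key difference from APQ is that the store is fixed at build or deploy time and clients cannot add to it:

```typescript
// Allow-list of trusted operations, keyed by an id or document hash
// that was generated when the client was built.
class PersistedOperationStore {
  private allowList: ReadonlyMap<string, string>;

  constructor(allowList: ReadonlyMap<string, string>) {
    this.allowList = allowList;
  }

  // The client sends only an id; the server resolves it to a trusted
  // document. Anything not in the store is rejected outright.
  resolve(id: string): string {
    const document = this.allowList.get(id);
    if (document === undefined) {
      throw new Error("operation not in persisted store: rejected");
    }
    return document;
  }
}
```

Because the server never parses client-supplied documents at all in this mode, unknown or malicious queries are rejected before any GraphQL work happens.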