So don't get me wrong. I think bundle size is an important thing to look at before adding a dependency. But the debate about what is lightweight and what isn't is not really the most important one when it comes to such a central piece as your async state manager, especially because there's one metric that is easily left out, and that is the bundle size you save through code you don't have to write. I think a library like React Query really pays for itself, because the more you use it, the more code it saves you from writing yourself.
So when checking the bundle size of a library, it's important to not only think about the size it adds immediately, but also about what it can save you in the long run. And on that scale, I think React Query is a clear win, because most custom solutions would probably be larger, or they might fail in some edge cases, because, yeah, caching and cache invalidation are really hard.
Okay, the next myth I want to look at is the claim that with React Query you can't even fetch on a button click. I get that a lot, and the argument is that it's hard to do imperative data fetching with React Query. And it's true that React Query is declarative by default. What we do is call the useQuery hook and pass a query key and a query function to it, and it runs automatically for us. This code will try to read the tasks from the cache, and if they don't exist, it will go and fetch them for us. So far, that looks good, but now let's try adding some filtering to our task list.
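To make that concrete, a minimal version could look roughly like this. The Task type and the fetchTasks helper are just made up for the example, and I'm assuming React Query v5 here (with its isPending flag):

```tsx
import { useQuery } from '@tanstack/react-query'

type Task = { id: string; title: string }

// Hypothetical fetcher for the task list example.
async function fetchTasks(): Promise<Task[]> {
  const res = await fetch('/api/tasks')
  if (!res.ok) throw new Error('Failed to fetch tasks')
  return res.json()
}

function TaskList() {
  // Declarative: the query runs automatically. It reads from the cache
  // first and only goes to the network if there is no data for this key.
  const { data, isPending, error } = useQuery({
    queryKey: ['tasks'],
    queryFn: fetchTasks,
  })

  if (isPending) return <p>Loading…</p>
  if (error) return <p>Something went wrong</p>

  return (
    <ul>
      {data.map((task) => (
        <li key={task.id}>{task.title}</li>
      ))}
    </ul>
  )
}
```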
When we add a filter form, we run an apply callback, and when that gets called, we want to refetch with those new filters. So if we explore what useQuery returns, we might find the refetch function and want to just pass our new filters to it. That seems reasonable, except that refetch doesn't take any arguments, so this just doesn't work. And I get the frustration about this, but it's just not how React Query was designed to work. Because see, if we have a hardcoded key like ['tasks'] and we refetched it with different arguments, we would not only overwrite the data that we'd cached for other filters, we would also run into race conditions between those fetches.
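As a sketch, that imperative attempt might look something like this. FilterForm and the TaskFilters type are hypothetical stand-ins for the example, not anything from React Query itself:

```tsx
type TaskFilters = { status?: 'open' | 'done'; priority?: 'low' | 'high' }

// Minimal stand-in for a filter form that reports the chosen filters.
function FilterForm({ onApply }: { onApply: (filters: TaskFilters) => void }) {
  return (
    <button onClick={() => onApply({ status: 'open', priority: 'high' })}>
      Apply
    </button>
  )
}

function TaskListImperative() {
  const { data, refetch } = useQuery({
    queryKey: ['tasks'],
    queryFn: fetchTasks,
  })

  return (
    <>
      <FilterForm
        onApply={(filters) => {
          // refetch() accepts no query arguments, so there is no way to
          // hand it the new filters. Even if there were, we would be
          // overwriting the cached data under the same ['tasks'] key and
          // racing concurrent fetches against each other.
          refetch()
        }}
      />
      <ul>
        {data?.map((task) => (
          <li key={task.id}>{task.title}</li>
        ))}
      </ul>
    </>
  )
}
```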
So React Query solves both of these problems with the declarative approach, by putting all our dependencies, that is, everything we use inside the query function, into the query key. That means we have to store our applied filters somewhere, and in this example I've just put them into React state. Now when those applied filters change, the key changes, React Query sees a new query key, and it will fetch. This approach moves us away from the imperative thinking of "if I click that button, I want to do some fetching" towards the declarative form of "I want data that matches this state", and how that state changes is actually irrelevant. It also matters where we store those applied filters.
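So the declarative version could look roughly like this, reusing the hypothetical FilterForm from above and assuming fetchTasks is extended to forward the filters to the API:

```tsx
import { useState } from 'react'
import { useQuery } from '@tanstack/react-query'

// fetchTasks extended to forward the applied filters to the API.
async function fetchTasks(filters: TaskFilters): Promise<Task[]> {
  const params = new URLSearchParams()
  if (filters.status) params.set('status', filters.status)
  if (filters.priority) params.set('priority', filters.priority)
  const res = await fetch(`/api/tasks?${params}`)
  if (!res.ok) throw new Error('Failed to fetch tasks')
  return res.json()
}

function TaskList() {
  const [filters, setFilters] = useState<TaskFilters>({})

  // The filters are part of the query key: when they change, React Query
  // sees a new key, caches that result separately, and fetches for us.
  const { data } = useQuery({
    queryKey: ['tasks', filters],
    queryFn: () => fetchTasks(filters),
  })

  return (
    <>
      <FilterForm onApply={setFilters} />
      <ul>
        {data?.map((task) => (
          <li key={task.id}>{task.title}</li>
        ))}
      </ul>
    </>
  )
}
```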
I've used useState before, but it's actually a pretty straightforward change to navigate with different search params instead if we use something like TanStack Router. This is of course type safe, depending on the search param schema that we have defined on our route. And now we get a bunch of things for free, like shareable URLs or browser back button navigation, which I think is really cool. Another cool thing is that if we change the filters back to something we've already searched for, like I said before, we get an instant result. And that's because React Query caches everything separately by its key. In that sense it's a simple document cache, where the complete response is stored for any given key. So in this example, that also means that if a task has both status open and priority high, it will be in both of these cache entries, because there is no normalized caching in React Query. In a normalized cache, every piece of data is stored only once, and other parts only reference it to avoid that data duplication.
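Going back to that search-params change for a second, a rough sketch of it with TanStack Router could look like this. I'm reusing the hypothetical fetchTasks and FilterForm from above, and the exact Router details (validateSearch, Route.useSearch, Route.useNavigate) are written from memory, so they may differ slightly between versions:

```tsx
import { createFileRoute } from '@tanstack/react-router'
import { useQuery } from '@tanstack/react-query'

export const Route = createFileRoute('/tasks')({
  // The search param schema: whatever this returns is what
  // Route.useSearch() gives us back, fully typed.
  validateSearch: (search: Record<string, unknown>): TaskFilters => ({
    status:
      search.status === 'open' || search.status === 'done'
        ? search.status
        : undefined,
    priority:
      search.priority === 'low' || search.priority === 'high'
        ? search.priority
        : undefined,
  }),
  component: TaskList,
})

function TaskList() {
  // The applied filters now live in the URL, so we get shareable links
  // and browser back/forward navigation for free.
  const filters = Route.useSearch()
  const navigate = Route.useNavigate()

  const { data } = useQuery({
    queryKey: ['tasks', filters],
    queryFn: () => fetchTasks(filters),
  })

  return (
    <>
      <FilterForm onApply={(next) => navigate({ search: next })} />
      <ul>
        {data?.map((task) => (
          <li key={task.id}>{task.title}</li>
        ))}
      </ul>
    </>
  )
}
```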