So, some general things we saw that I really liked. Probably the biggest one was an 80% reduction in cloud costs across the board: when we rolled this thing out en masse, our CI costs, the bulk of our cloud costs, were reduced by 80%, which is a staggering number at a company of that size. On average, we saw about a 30-minute reduction in our build times, and on the CI side, roughly 200 CI hours saved per project per day.
So each project would save roughly 80,000 CI hours a year. And for a developer working locally in development mode, assuming you might have to switch a branch here and there, we modeled it as five restarts of your dev server, and worked out that you're looking at about seven hours a week saved per developer, or roughly 400 hours a year per dev.
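Annualizing those figures is simple enough; here's a quick back-of-envelope in TypeScript (the rounding up to 80,000 and 400 is the talk's, the arithmetic is ours):

```ts
// Annualizing the per-day / per-week savings quoted above (our arithmetic).
const ciHoursSavedPerProjectPerDay = 200;
const ciHoursSavedPerProjectPerYear = ciHoursSavedPerProjectPerDay * 365; // 73,000, quoted as "roughly 80,000"

const devHoursSavedPerWeek = 7;
const devHoursSavedPerYear = devHoursSavedPerWeek * 52; // 364, quoted as "roughly 400"

console.log({ ciHoursSavedPerProjectPerYear, devHoursSavedPerYear });
```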
On the financial side, we also saw some great numbers. For our own internal use, we've seen about $200 million a year in savings from the project, comparing what we put into it against what we get out of it. We've had outside users adopt this too, and they've been kind enough to share their own financial numbers on the reductions; for one of them, it's $32 million a year. What I found really impressive, though, was the median time to merge, which we treated as product latency: between creating something and getting it to production, what's the latency? Most of our builds are really massive, so we see CI taking upwards of 35 minutes, and per year, just on a single repo, we were able to recoup over 1.6 million hours of product latency.
That's the time from opening the PR to getting it into production, and that's just one repo among the thousands we have. So there was really a lot of value that we ended up seeing, which was obviously great; our theories had worked out. Another area where we've seen a lot of return is bandwidth savings, since a lot of the project's focus has been on really good optimization and really good chunking: intelligent methods for splitting the app up, refining things a bit beyond the very verbose Webpack-style solutions.
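To make the chunking point a bit more concrete, here's a minimal sketch of the kind of webpack-compatible splitChunks configuration Rspack accepts; the specific cache group is our own illustration, not the settings from the talk:

```ts
// rspack.config.ts: a minimal chunk-splitting sketch (illustrative,
// not the talk's actual configuration).
export default {
  entry: './src/index.ts',
  optimization: {
    splitChunks: {
      chunks: 'all', // consider both sync and async imports when splitting
      cacheGroups: {
        // Keep rarely-changing third-party code in its own long-cached
        // chunk so app changes don't invalidate the whole vendor payload.
        vendors: {
          test: /[\\/]node_modules[\\/]/,
          name: 'vendors',
          priority: -10,
        },
      },
    },
  },
};
```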
We've had users report up to 50% savings in egress bandwidth on their infrastructure, which for them also translated into a couple of billion dollars a year, which, again, was quite nice to see. So, to wrap things up, some of the future ideas we've been looking at with Rspack: the language-agnostic approach is going to be very useful for TypeScript. We're looking at making TypeScript a first-class citizen, which would allow things like link-time optimizations: actually using all the typing information to further optimize, tree shake, and remove dead code, say on private methods that aren't exported, things that usually get lost in the transpilation process.
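To illustrate what gets lost in transpilation, here's a small sketch: TypeScript's private modifier is erased when the code is compiled to JavaScript, so a JS-level bundler has to assume the method could be called from anywhere, while a bundler that keeps the type information could prove it dead and drop it.

```ts
// Sketch: type information exposes dead code that plain JS hides.
export class ReportBuilder {
  build(): string {
    return 'report';
  }

  // Never called inside the class, and `private` means it can't be
  // called from outside, so it's provably dead at the TypeScript level.
  // After transpilation, `private` is erased, so a JS-only bundler
  // must conservatively keep this method and everything it pulls in.
  private debugDump(): void {
    console.log('internal diagnostics...');
  }
}
```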
There's also the possibility of integrating type checking into the bundler, or creating something like a language server, since the differences between bundling and type checking are actually very few; there's a lot of overlap in how they're built. Some other things we've been looking at are features you've probably seen Turbopack talking about. One of them is remote caching at the function level. That's something we're planning to open source and release for you to self-host with Rspack, probably in the next quarter or two. I can't remember the exact timeline, but it'll essentially mean that anybody building your application feeds a distributed remote cache, and you can recycle their results, with everybody taking advantage of it free of charge.
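The feature hadn't shipped at the time of the talk, so the following is only the general shape of function-level remote caching under our own assumptions; the RemoteCache interface and the names here are hypothetical, not Rspack's actual API. The principle is to key a unit of build work by a content hash of its inputs and consult a shared store before recomputing:

```ts
// A hypothetical sketch of function-level remote caching, not Rspack's API.
import { createHash } from 'node:crypto';

// Hypothetical remote store interface (e.g. backed by HTTP or S3).
interface RemoteCache {
  get(key: string): Promise<string | undefined>;
  put(key: string, value: string): Promise<void>;
}

async function cached(
  cache: RemoteCache,
  fnName: string,
  input: string,
  compute: (input: string) => string,
): Promise<string> {
  // Anyone who built the same input derives the same key, so CI runs
  // and teammates' local builds can share each other's results.
  const key = createHash('sha256').update(fnName).update(input).digest('hex');
  const hit = await cache.get(key);
  if (hit !== undefined) return hit; // reuse someone else's work
  const result = compute(input);
  await cache.put(key, result);
  return result;
}
```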
Another big one would be export-level code splitting. Usually when you split your application today, you're splitting based on the module: the whole file, with whatever exports are in it, gets sharded off together. What we're looking at is doing it at the export level, taking an individual export and relocating it based on how it's used, which gives us a lot more cross-module optimization and better output artifacts with smaller payloads.
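A small sketch of the difference, with hypothetical modules: under module-level splitting, importing the cheap export still drags in everything else the file exports, while export-level splitting could relocate the heavy export into its own chunk.

```ts
// math.ts: one module, two exports with very different weights.
export function add(a: number, b: number): number {
  return a + b;
}

export async function renderChart(_data: number[]): Promise<void> {
  // Imagine this pulls in a large charting dependency.
}

// page.ts: only uses the cheap export.
import { add } from './math';
console.log(add(1, 2));

// Module-level splitting ships all of math.ts (renderChart included)
// wherever add() is used. Export-level splitting could relocate
// renderChart into its own chunk, so page.ts pays only for add().
```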