Turbopack Persistent Caching

FAQ

Tobias Koppers works at Vercel on the TurboPack team.

The main goal for TurboPack's development was to achieve instant builds and focus on incremental build performance.

Persistent caching is important for TurboPack to ensure that builds are incremental and efficient, even after process restarts.

TurboPack focuses on bottom-up caching and incremental build performance, unlike other tools that often use top-down caching.

TurboEngine is the caching engine that powers TurboPack, enabling granular caching and automatic cache invalidation.

TurboPack automatically tracks dependencies and invalidates caches only for changes that affect specific nodes in the dependency graph.

TurboPack plans to become a general-purpose bundler, making it easier to integrate with frameworks beyond Next.js.

TurboPack and TurboRepo are independent tools with different purposes and should not be confused.

Persistent caching introduces challenges such as process restarts, environment changes, and the need for granular cache invalidation.

Initial builds were slow at first because the entire graph had to be dumped to the cache database, but improvements in the database and caching strategy have mitigated this issue.

Tobias Koppers
29 min
12 Jun, 2025

Video Summary and Transcription
Tobias Koppers from Vercel TurboPack team discusses implementing persistent caching for long-term web application development, foreseeing significant growth in application sizes over the next decade. AI's role in code writing, TurboPack's vision for instant builds, challenges in build control, and the shift to incremental performance with caching for faster builds. TurboPack emphasizes making incremental builds fast and every build incremental, focusing on trustable and granular incremental builds, efficient granular cache handling in TurboEngine, and automatic granular cache invalidation. The system optimizes performance through efficient persistent cache integration, graph state persistence, database optimization, custom persistent layer creation, and optimizing build time efficiency. TurboPack stands out with its unique bottom-up caching approach, expansion to a general purpose bundler, and plans to enhance framework compatibility. Additionally, the comparison with ESBuild highlights the emphasis on incremental builds and detailed cache granularity with a token-based, almost AI-like approach.
Available in Spanish: Turbopack Persistent Caching

1. Tobias Koppers on TurboPack and Persistent Caching

Short description:

Tobias Koppers from Vercel TurboPack team discusses implementing persistent caching for long-term web application development, foreseeing significant growth in application sizes over the next decade.

My name is Tobias Koppers and I work for Vercel on the TurboPack team. I want to tell you the story of the persistent caching we implemented for TurboPack. When we started working on TurboPack, we wanted to build something for the next 10 years. So we first reflected on the last 10 years of web development, and we noticed that web applications have grown a lot. 10 years ago, some people even used plain script tags and no bundler at all, or used Grunt or Gulp to concatenate files. Now extrapolate that to the next 10 years: web applications will be enormous in size, and that will be a problem.

2. AI's Impact on TurboPack Build Vision

Short description:

AI's role in code writing, TurboPack's vision for instant builds, challenges in build control, and the shift to incremental performance with caching for faster builds.

We made this prediction basically two years ago, and if you look at it retrospectively now, it makes even more sense. Because now you see AI writing a lot of the code, and you can expect AI to write a lot more code in the next 10 years. So it kind of makes sense. But we also had this goal of instant builds for TurboPack. We had this vision that nobody should ever wait for a build to finish. You should be able to push your code and instantly share the preview deployment or the finished build with your coworkers. Or even AI pushes code and wants to directly share it with you in a kind of feedback loop.

These two things are a little bit in tension with each other, and there are even more problems. We can't control all of the build process: we want to allow custom plugins and custom transformations, like webpack loaders, and we can't control the user code and all the npm libraries they throw at us. With all of that, we can still use Rust to write native code and use parallelism to occupy other cores, but that won't solve these problems. Because if something is slow in a custom plugin, in a webpack loader, or in anything we don't control, it will break our goal of instant builds.

Our approach was to change the focus. Instead of focusing on raw build performance, we wanted to focus on incremental build performance. We may have to accept that some things are slow in the initial build, but we can at least make sure we never re-execute them again — we make all the other builds, the incremental builds, fast. And in practice, nearly every build can have a cache. In local development, with a dev server, you have an accessible disk where we can store and cache things. And even in CI, nearly all good CI providers have the infrastructure to provide you with some caching capability. So we can use the cache. That's great.

3. TurboPack Performance Strategy

Short description:

Two-part motto for TurboPack performance: Make incremental builds fast and every build incremental. Need for persistent caching for incremental builds and challenges with user trust in the system.

So we came up with a two-part motto for our vision of TurboPack performance: make incremental builds fast, and make every build incremental. Our idea was that this approach has the potential to fulfill our goal of instant builds — even with things that are slow, things we don't control, or the size of web applications in 10 years. So let's look into what we need to achieve that.

I want to look at the second part of the motto first: make every build incremental. For that, the first thing we need is some kind of persistent caching. An in-memory cache may be good enough for development, because there most builds happen within a single process and you don't restart your dev server that often. But to get the full story, we need something that is persistent. You want to restart your dev server and still get an incremental build on restart.

And for CI builds, we don't even have a process running; we need something that works across restarts of the process. We need a persistent cache. You might think: okay, now the cache is persistent, now every build is incremental. But not so fast. There's a problem, which is the user. The user can delete the cache, or disable caching. And they will actually do that if they don't trust the system.

4. Ensuring Trustable and Granular Incremental Builds

Short description:

Build a trustable system for users' confidence, avoid incorrect caching practices, focus on persistence, confidence, and granularity for incremental builds.

If they are not 100% sure that their changes are reflected correctly by the build system, they might clear the cache just to be sure it's not their fault — maybe it's the bundler's fault, or whatever. So we really want to build a trustable system, so that users have the confidence that their changes are reflected correctly and caching is not the problem. We basically want to avoid all the hacks, workarounds, and performance tradeoffs that are not 100% correct from a caching perspective. We can't even trust ourselves to write correct cache invalidation logic, because that is also a common source of errors.

These two ingredients — persistence and confidence — are what we think are the main ingredients to make every build incremental. But we also need to look at the first part of the motto: make incremental builds fast. If we had to sum that part up in a single ingredient, it would be granularity. Granular means we only touch what is affected by a change, and don't touch everything just because we can. For cache invalidation it means we only want to invalidate — and therefore recompute — work that is affected by the change. In practice, we really want to do the minimum amount of work needed to apply a change in an incremental build.

Doing work fast is great, but doing no work is even better. That's the idea. It might be simple for simple changes, like changing a line of source code, but we also want it to work for all kinds of changes. You might make complex changes, like changing a dependency or changing your configuration, and those should also be reflected in a granular way. It gets more interesting with production builds, where optimizations work on the whole application — shared chunks, for example — so that a simple change, like changing an import, can have wide-ranging effects on the output application. We want to handle all of that.
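
To make that concrete, here is a minimal sketch (in Rust, with invented names — not TurboPack's actual API) of the difference granularity makes: if each task records exactly which configuration fields it read, changing one field only invalidates the tasks that depend on it.

```rust
use std::collections::HashMap;

// Illustrative sketch (invented names, not TurboPack's API): tasks record
// which configuration fields they actually read, so a config edit only
// invalidates the tasks that depend on the changed field.
#[derive(Default)]
struct DependencyTracker {
    // task name -> config keys the task read during its last execution
    reads: HashMap<&'static str, Vec<&'static str>>,
}

impl DependencyTracker {
    fn record(&mut self, task: &'static str, key: &'static str) {
        self.reads.entry(task).or_default().push(key);
    }

    // Only tasks that read the changed key need to re-run.
    fn invalidated_by(&self, changed_key: &str) -> Vec<&'static str> {
        self.reads
            .iter()
            .filter(|(_, keys)| keys.iter().any(|k| *k == changed_key))
            .map(|(task, _)| *task)
            .collect()
    }
}

fn main() {
    let mut deps = DependencyTracker::default();
    deps.record("chunking", "output_dir");   // chunking only reads output_dir
    deps.record("transform", "jsx_runtime"); // transform only reads jsx_runtime

    // Changing jsx_runtime re-runs the transform but leaves chunking cached.
    assert_eq!(deps.invalidated_by("jsx_runtime"), vec!["transform"]);
}
```

A task that depended on the whole config object would instead be invalidated by every config edit — exactly the full-cache-invalidation scenario described above.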

5. Efficient Granular Cache Handling in TurboEngine

Short description:

Avoid full cache invalidation by focusing on granularity in cache access. TurboEngine automates cache handling, tracks dependencies, and efficiently manages cache invalidation. By re-computing only affected parts, the system optimizes performance and reduces unnecessary computation.

We never want to be in the scenario where we need to invalidate the whole cache because somebody changed the config option or something like that. Granularity is crucial not only for compilation but also for cache access. The idea behind TurboEngine and TurboPack is to enable developers to work with reduced caching complexity and implement granular caching effortlessly. By automating cache invalidation based on dependencies, the system ensures efficient re-computation without the need for manual intervention.

The magic lies in the system's ability to track dependencies and efficiently handle cache invalidation. Changes in the code trigger a ripple effect in the dependency graph, leading to re-computation of affected tasks. This approach ensures that only necessary parts are re-computed, resulting in performance optimization. By accessing and updating only the nodes affected by a change, the system achieves granularity, enhancing performance and minimizing unnecessary computation.

The focus on granular cache access and automated cache invalidation streamlines the development process. With TurboEngine's capability to work on a per-function level, developers can build sophisticated caching systems without worrying about intricate cache management tasks. The system's efficiency in re-computation, driven by dependency graphs, ensures that changes are propagated effectively, optimizing performance by re-computing only the relevant parts affected by the modification.
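
As a rough illustration of this per-function caching, here is a self-contained sketch — the `Engine` type and its single cached `parse` function are invented for this example and are vastly simpler than the real TurboEngine, which tracks dependencies automatically:

```rust
use std::collections::HashMap;

// A much-simplified sketch of per-function caching (invented types, not the
// real turbo-tasks API). Each call is keyed by its input path; a real engine
// would also track the file content as a dependency so that a change
// invalidates the entry.
struct Engine {
    parse_cache: HashMap<String, usize>, // path -> "parse result" (line count here)
    executions: usize,                   // counts real executions, for demonstration
}

impl Engine {
    fn new() -> Self {
        Self { parse_cache: HashMap::new(), executions: 0 }
    }

    // Pretend "parse": expensive work we only ever want to run once per input.
    fn parse(&mut self, path: &str, content: &str) -> usize {
        if let Some(&cached) = self.parse_cache.get(path) {
            return cached; // cache hit: no work at all
        }
        self.executions += 1; // cache miss: do the real work
        let result = content.lines().count();
        self.parse_cache.insert(path.to_string(), result);
        result
    }
}

fn main() {
    let mut engine = Engine::new();
    let src = "import x from 'x'\nexport default x\n";
    engine.parse("a.js", src);
    engine.parse("a.js", src); // served from cache
    assert_eq!(engine.executions, 1);
}
```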

6. Automatic Granular Cache Invalidation

Short description:

The system automates cache invalidation, tracks dependencies to enhance efficiency, and ensures granular re-computation. With a sophisticated dependency graph, changes trigger efficient re-computation, optimizing performance. The approach focuses on touching only the affected parts, enhancing performance and minimizing unnecessary computation.

The real magic happens with cache invalidation. The system doesn't let you write cache invalidation code — it does it automatically for you. It tracks whatever you read in a function — what filesystem data you read, what values from other functions you read — and builds up a dependency graph of all these executions: a graph of the execution of the whole compilation. And with this dependency graph, it can make incremental builds really efficient.

This example is a really simplified version of the dependency graph — in practice it's billions of nodes. But in this simplified version, it might start with a file change: one of these nodes — we call them tasks — is invalidated and re-computed, and that might cause more invalidations. For example, when module.js has changed, parsing needs to be re-executed, because the content has changed. So this change bubbles up the graph: it follows the dependent edges, and that might invalidate more tasks.

But it can also happen that a re-computed task reads existing cached data that is unchanged, so the change has no further effect. If I only change a line of code without touching the imports, maybe chunking needs no further updates — the invalidation cancels out there. In the end, the build finishes by re-computing only the parts that are affected by the change. That's what I mean by granularity: we make a build incremental by only touching, only reading, and only accessing the nodes of the graph that are affected by the change. And that has a lot of benefits.
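
A minimal sketch of this bubbling-up behavior (illustrative Rust, not the real implementation): invalidation follows dependent edges through the graph, and anything not reachable from the change is never touched. A real engine would additionally stop early when a re-computed result equals the cached one — the "cancels out" case above.

```rust
use std::collections::{HashMap, HashSet, VecDeque};

// Illustrative sketch of invalidation bubbling up through dependent edges.
// Edges point from a task to the tasks that depend on its output.
fn invalidate(start: &str, dependents: &HashMap<&str, Vec<&str>>) -> HashSet<String> {
    let mut dirty = HashSet::new();
    let mut queue = VecDeque::from([start.to_string()]);
    while let Some(task) = queue.pop_front() {
        if !dirty.insert(task.clone()) {
            continue; // already marked dirty
        }
        // Whoever read this task's output must be re-computed as well.
        for next in dependents.get(task.as_str()).into_iter().flatten() {
            queue.push_back(next.to_string());
        }
    }
    dirty
}

fn main() {
    // file change -> parse -> extract imports -> chunking
    let mut dependents: HashMap<&str, Vec<&str>> = HashMap::new();
    dependents.insert("read(module.js)", vec!["parse(module.js)"]);
    dependents.insert("parse(module.js)", vec!["imports(module.js)"]);
    dependents.insert("imports(module.js)", vec!["chunking"]);

    let dirty = invalidate("read(module.js)", &dependents);
    assert_eq!(dirty.len(), 4); // everything downstream of the change
    assert!(!dirty.contains("parse(other.js)")); // unrelated tasks untouched
}
```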

7. Efficient Persistent Cache Integration

Short description:

The system optimizes performance by computing only necessary elements, enhancing efficiency, and supporting persistent caching. Storing the graph state on disk using a key-value store facilitates efficient data access. The architecture blends seamlessly with persistent caching, ensuring fast incremental builds.

It's really great for performance because we only re-compute what's actually needed, and we don't touch other stuff at all. It's also beneficial for persistent caching as we only restore accessed cache items, avoiding unnecessary operations on inactive nodes. The impacted nodes after a change are typically a fraction of the entire graph, leading to fast performance independent of the application size.

We now have the engine for incremental builds and aim to incorporate persistent caching. The cache in this system doesn't mirror the graph visually but stores the state of the graph, with nodes holding the data. To enable persistent caching, we plan to store this graph on disk using a key-value store and a database for efficient disk access. In the initial stages, we utilized LMDB as a simple key-value store, proving effective for storing and accessing required data.

The incremental-build architecture integrates seamlessly with persistent caching. The approach of storing and selectively retrieving data from disk aligns with the efficient in-memory caching mechanism we previously tested. This strategy ensures fast performance for incremental builds, demonstrating the successful adaptation of the architecture for persistent caching.

8. Graph State Persistence and Database Optimization

Short description:

The graph state is cached, nodes hold data, requiring persistence on disk using key-value storage and a database. Initial use of LMDB for storage worked for incremental builds, ensuring fast access. However, dumping the entire graph into the database caused slow initial builds and large disk sizes. Transitioning to a more efficient database, RocksDB, was essential for improvement.

The state of the graph is what is cached, and the nodes store the data. So to add persistent caching, we basically want to persist this graph on disk. And it's actually not that complicated: we can just use a simple key-value store — for every node, store the data and the edges. For that we basically need a database, and what databases mainly do is dump everything to disk and let you access it again.

So we started out with a simple key-value store — a simple embedded database, LMDB in this case, in our first iteration. And that kind of works: it worked for the incremental part. We stored everything to disk and then accessed only the parts that are needed. Reading a few keys from the database is cheap, so incremental builds are really fast. The whole incremental architecture works for persistent caching — which makes sense, because we had already tested it with in-memory caching, and that worked too.
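
Conceptually, the key-value layout looks something like the following sketch, where a `HashMap` stands in for the embedded store (LMDB in this first iteration) and the key scheme is invented for illustration:

```rust
use std::collections::HashMap;

// Sketch of persisting the task graph as key-value pairs, one entry per node.
// A HashMap stands in for the real embedded store; keys and encoding are
// illustrative, not TurboPack's actual format.
struct Node {
    id: u64,
    data: Vec<u8>,   // the cached output of this task
    edges: Vec<u64>, // ids of tasks this one depends on
}

fn persist(nodes: &[Node], store: &mut HashMap<Vec<u8>, Vec<u8>>) {
    for node in nodes {
        // "data/<id>" -> cached output
        store.insert(format!("data/{}", node.id).into_bytes(), node.data.clone());
        // "edges/<id>" -> dependency list, encoded as little-endian u64s
        let mut edges = Vec::with_capacity(node.edges.len() * 8);
        for e in &node.edges {
            edges.extend_from_slice(&e.to_le_bytes());
        }
        store.insert(format!("edges/{}", node.id).into_bytes(), edges);
    }
}

// Restore only the nodes a build actually asks for — reading a few keys is
// cheap, so incremental builds stay fast even with millions of stored nodes.
fn load_data(id: u64, store: &HashMap<Vec<u8>, Vec<u8>>) -> Option<Vec<u8>> {
    store.get(format!("data/{}", id).as_bytes()).cloned()
}

fn main() {
    let mut store = HashMap::new();
    persist(
        &[Node { id: 1, data: b"parsed module.js".to_vec(), edges: vec![0] }],
        &mut store,
    );
    assert_eq!(load_data(1, &store), Some(b"parsed module.js".to_vec()));
}
```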

But there was a big problem with that: the initial builds were really slow. The problem is that we basically have to dump the whole initial graph to disk — that's millions of nodes — and dumping millions of keys into a database is not the fastest thing you can do. It also meant the disk size was really large, because a big graph stored in the database created a big database. So we iterated further and tried a better database. We had used a really simple embedded database, so maybe the database was the weak point. Let's use a production-proven database — RocksDB in this case, from Meta.

9. Custom Persistent Layer Creation

Short description:

It solved the disk-size and initial-build-speed issues. Our unusual database access pattern then led to the creation of a custom persistence layer, which reduced the initial-build overhead and incorporated features from existing databases. Benchmarking showed significant improvement in build times.

And that solved a few things. It has compression built in, which solves the disk-size problem. It's also much faster than LMDB, which mostly solves the initial-build problem: the initial build now ran at an acceptable speed, adding basically 50% overhead to the build. Not great, but maybe acceptable, because in the end most builds will be incremental anyway, and those are fast. Sounds good.

But we didn't want to stop there; we wanted to push further. And we noticed that ours is a really unusual database access pattern. Databases are usually optimized for read-heavy workloads: write a few things, read a lot. Our workload, at least for the initial build, is the opposite: no reads at all — there is no database at the start — and then a gigantic amount of data written into the database. For incremental builds, only a tiny amount of data is read and written, and that worked well. But the initial build? Problematic.

We weren't able to find a database built for this access pattern, so we ended up writing our own persistence layer. It's not a complete database — more a simpler version specialized for our usage pattern, making a lot of trade-offs to optimize for our usage. For example, we don't have parallel writes, so we can optimize a few things. That basically shrank the initial-build overhead to something much smaller, which made the performance acceptable. We also borrowed a lot from existing databases, like compression. And we benchmarked it on a real large-scale customer application, which has a one-minute build time.
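
The core idea of such a write-optimized layer can be sketched like this — an illustration of the trade-off (single writer, batch-only, sequential writes), not TurboPack's actual on-disk format:

```rust
use std::fs::{File, OpenOptions};
use std::io::{BufWriter, Write};

// Sketch: the initial build writes a huge amount of data and never reads, so
// buffer key-value pairs and flush them sequentially to an append-only file
// in one pass. Illustrative only.
struct WriteBatch {
    entries: Vec<(Vec<u8>, Vec<u8>)>,
}

impl WriteBatch {
    fn new() -> Self {
        Self { entries: Vec::new() }
    }

    fn put(&mut self, key: Vec<u8>, value: Vec<u8>) {
        self.entries.push((key, value));
    }

    // One sequential pass of length-prefixed records; no read path, and no
    // in-memory table kept around afterwards (the memtable overhead mentioned
    // below is exactly what this avoids).
    fn flush(self, file: File) -> std::io::Result<()> {
        let mut out = BufWriter::new(file);
        for (key, value) in self.entries {
            out.write_all(&(key.len() as u32).to_le_bytes())?;
            out.write_all(&key)?;
            out.write_all(&(value.len() as u32).to_le_bytes())?;
            out.write_all(&value)?;
        }
        out.flush()
    }
}

fn main() -> std::io::Result<()> {
    let mut batch = WriteBatch::new();
    batch.put(b"data/1".to_vec(), b"parsed module.js".to_vec());
    let file = OpenOptions::new().create(true).append(true).open("cache.log")?;
    batch.flush(file)
}
```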

10. Optimizing Build Time Efficiency

Short description:

LMDB to RocksDB for faster persisting. A custom layer enhanced performance further. Memory and incremental-build improvements came with RocksDB usage and the custom solution.

If you look at the numbers: we started with LMDB for this one-minute build, and it had three minutes of persisting time. That seemed really large, and it was unacceptable for us. RocksDB then improved on that a lot — about six times faster at persisting — so the one-minute build was only extended by 50% of the build time for persisting. And with our custom layer we were able to improve by another factor of 3x.

Memory usage also improved, because RocksDB stores everything in a memtable in addition to disk, which is not the optimal thing to do for us, and it also allocates everything in a write batch before committing it. We made a lot of improvements there. You also see the incremental build performance: that was always good — good by design, because we don't read a lot from the database, so it doesn't depend much on the database — but we were able to improve it a little by moving from LMDB to RocksDB and then to our own solution.

That's how we implemented persistent caching with our database. There are a few more layers on top of that. We have a snapshotting mechanism that takes snapshots of the compilation while you're working, because we never want to stop the dev server process just to persist something. We atomically take a snapshot of the graph and then persist it in the background, so you're not blocked and your HMR is not affected by the snapshotting or storage of the persistent cache. But if you look at this in a wider scope, persistent caching adds a few more challenges.
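
The snapshotting idea can be sketched like this (illustrative only): hold a lock just long enough to take a cheap copy of the graph state, then do the slow disk work on a background thread so compilation and HMR continue unblocked.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

// Sketch of background snapshotting. The state type and functions are
// invented for this example.
type GraphState = HashMap<u64, Vec<u8>>;

fn snapshot_and_persist(state: Arc<Mutex<GraphState>>) -> thread::JoinHandle<()> {
    // Hold the lock only long enough to clone; compilation continues right
    // after this line.
    let snapshot = state.lock().unwrap().clone();
    thread::spawn(move || {
        // Slow disk work happens here, off the critical path.
        for (id, data) in &snapshot {
            let _ = (id, data); // stand-in for writing to the persistent layer
        }
    })
}

fn main() {
    let state = Arc::new(Mutex::new(GraphState::from([(1, b"node".to_vec())])));
    let handle = snapshot_and_persist(Arc::clone(&state));
    // The main thread can keep mutating the live graph meanwhile.
    state.lock().unwrap().insert(2, b"new node".to_vec());
    handle.join().unwrap();
}
```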

11. Challenges of Incremental Builds

Short description:

Process restarts introduce new kinds of changes. Granularity in applying them is crucial. Production optimizations and persistent caching pose challenges, as everything is persistent.

Now you have process restarts, which add many more kinds of changes. Before, most changes were edits to a file. Across a process restart, the environment can change: environment variables could change, the configuration could change. We have to handle all these new kinds of changes in a way that applies them granularly. .env files are especially problematic, because .env values could be inlined into every module, and we don't want to invalidate every module just because some .env file has changed. That was a little bit tricky.
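
A minimal sketch of how granular .env handling can work (invented names, illustrative only): diff the old and new environment, and invalidate only the modules that inlined one of the changed keys.

```rust
use std::collections::{HashMap, HashSet};

// Illustrative sketch: instead of depending on "the .env file" as a whole,
// each module records exactly which variables it inlined, so changing one
// variable invalidates only the modules that actually use it.
fn changed_keys(
    old: &HashMap<String, String>,
    new: &HashMap<String, String>,
) -> HashSet<String> {
    old.keys()
        .chain(new.keys())
        .filter(|k| old.get(*k) != new.get(*k))
        .cloned()
        .collect()
}

fn main() {
    let old: HashMap<String, String> =
        HashMap::from([("API_URL".to_string(), "https://a".to_string())]);
    let new: HashMap<String, String> = HashMap::from([
        ("API_URL".to_string(), "https://a".to_string()),
        ("FLAG".to_string(), "on".to_string()),
    ]);

    // module -> env keys it inlined at build time
    let uses: HashMap<&str, Vec<&str>> =
        HashMap::from([("config.js", vec!["API_URL"]), ("feature.js", vec!["FLAG"])]);

    let changed = changed_keys(&old, &new);
    let dirty: Vec<&str> = uses
        .iter()
        .filter(|(_, keys)| keys.iter().any(|k| changed.contains(*k)))
        .map(|(module, _)| *module)
        .collect();

    assert_eq!(dirty, vec!["feature.js"]); // config.js stays cached
}
```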

There are also the production optimizations I already talked about, where it's challenging to apply changes on a granular level. And there are more things to consider, like network fetches during the build — maybe you fetch something from Google Fonts, like a font file. When do we invalidate that? Do we have to invalidate it at all? Caching questions like that become relevant when you can't just restart the dev server. Persistent caching adds a lot of new challenges here, because usually you can fix such problems by restarting the dev server, but now everything is persisted. We can't accept caches that are not cleared or invalidated correctly, because everything is persistent and we don't want you to delete the cache.
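
For the network-fetch case, one possible policy — an assumption for illustration, not necessarily what TurboPack does — is to cache responses with a maximum age, so a long-running dev server revalidates them instead of keeping them forever:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Sketch of a max-age policy for remote resources fetched during the build
// (e.g. a font file). Names and policy are assumptions for illustration.
struct FetchCache {
    max_age: Duration,
    entries: HashMap<String, (Instant, Vec<u8>)>,
}

impl FetchCache {
    fn get(&self, url: &str) -> Option<&[u8]> {
        self.entries
            .get(url)
            .filter(|(fetched_at, _)| fetched_at.elapsed() < self.max_age)
            .map(|(_, body)| body.as_slice())
    }

    fn put(&mut self, url: String, body: Vec<u8>) {
        self.entries.insert(url, (Instant::now(), body));
    }
}

fn main() {
    let mut cache = FetchCache { max_age: Duration::from_secs(3600), entries: HashMap::new() };
    cache.put("https://fonts.example/font.woff2".into(), vec![0u8; 4]);
    // Within the max age this is a cache hit; afterwards the build refetches.
    assert!(cache.get("https://fonts.example/font.woff2").is_some());
}
```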

There's also a completely new challenge when it comes to CI, where you actually spend time uploading and downloading the build cache, which can affect your total time. Overall, it's also important to look at the whole deployment life cycle, not only at the TurboPack build. The TurboPack build is often the largest part of a deployment, but once we have incremental builds, that might not be the case anymore: the TurboPack build is really fast, but then all the other steps also need to be made incremental. We want to make the Git clone incremental; the build-cache download too — maybe only download what we need from the build cache. Package manager installs can be made incremental as well, using a cache. And there's the whole pre-rendering, pre-generation step that usually happens after the build, which can or cannot be cached — it depends. So there's a lot of other stuff to consider when going with an incremental build tool, and we cooperate with other teams to make this happen so we can all benefit from it.

QnA

TurboPack Expansion and Differentiation

Short description:

TurboPack is open source and expanding to a general purpose bundler. Unique bottom-up caching approach sets TurboPack apart from other tools. TurboPack's architecture differs significantly from Webpack, emphasizing the long-term benefits of a new tool.

So thanks for having me, I hope it was interesting. Okay, first, let's go for the most upvoted question. For the Turbo cache there were open-source servers — will this new thing be open source as well, or vendor-locked? So yeah, TurboPack is already completely open source. It currently only works with Next.js, but we want to make it available as a general-purpose bundler, so it will also be a standalone tool. But it's already open source in that sense. Okay, nice. The next question: how does this compare to other monorepo tools like Rush in terms of build cache and co-build features? Our caching approach is a little bit different; our incremental build is different. We build from the bottom up: we invalidate the leaf nodes and build upward. Most other tools do caching the other way around, top down: start with the compilation, look results up from the cache, and cancel out that way. But in general, monorepo tools operate at a higher level — at the script level of a monorepo — while TurboPack works at the source-file level, as a bundler-level tool. That's how it compares, I guess.
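
The two directions can be sketched side by side (illustrative Rust, invented structure): top-down starts at the root and prunes subtrees on cache hits, while bottom-up starts at the changed leaves and walks only the dependent edges.

```rust
use std::collections::{HashMap, HashSet};

// Top-down: the build asks "is this subtree cached?" before doing work.
fn top_down(
    node: &str,
    cache: &HashSet<String>,
    children: &HashMap<&str, Vec<&str>>,
    work: &mut Vec<String>,
) {
    if cache.contains(node) {
        return; // cache hit cancels the whole subtree
    }
    work.push(node.to_string());
    for child in children.get(node).into_iter().flatten() {
        top_down(child, cache, children, work);
    }
}

// Bottom-up: a changed leaf pushes work upward along dependent edges only.
fn bottom_up(leaf: &str, dependents: &HashMap<&str, Vec<&str>>, work: &mut Vec<String>) {
    work.push(leaf.to_string());
    for parent in dependents.get(leaf).into_iter().flatten() {
        bottom_up(parent, dependents, work);
    }
}

fn main() {
    let children = HashMap::from([("app", vec!["a.js", "b.js"])]);
    let dependents = HashMap::from([("a.js", vec!["app"])]);
    let cache = HashSet::from(["b.js".to_string()]);

    let (mut td, mut bu) = (Vec::new(), Vec::new());
    top_down("app", &cache, &children, &mut td); // visits app and a.js; skips b.js
    bottom_up("a.js", &dependents, &mut bu);     // visits a.js and app only
    println!("top-down visited {:?}, bottom-up visited {:?}", td, bu);
}
```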

Okay, yeah, I'm pretty sure that answered the question. And we have time for many more questions. So Bobby has asked: is incremental build with TurboPack related to module federation? The feature itself is not related to module federation, but we eventually want to support module federation in TurboPack too, similar to Webpack. It's a different feature — it's not related to incremental builds. But sometimes people use module federation for better build performance, and if it's used for build performance only, then incremental builds with TurboPack might be a replacement for it. Interesting. Okay. The next one: can these features be added to Webpack instead of a new tool? We intentionally decided not to continue improving Webpack; we designed a new architecture and started TurboPack to be able to use it, because the whole architecture is really different from Webpack. Webpack does top-down caching; TurboPack uses bottom-up caching. So it's really different. We made the decision not to add it to Webpack but to write a new tool instead — maybe harder to migrate to, but in the end we hope everyone will benefit from it in the long term.

Enhancing TurboPack Framework Compatibility

Short description:

Plans for TurboPack to support various frameworks in the future. Current focus on Next.js with plans for general-purpose bundler integration. Next.js availability confirmed with impending release of persistent caching feature.

Okay. We have another upvoted question, this one from Dennis. What are the plans for TurboPack to more easily support other frameworks? My experience is that if it's not React, Vue, or Svelte, it is a hassle. Yeah, currently that's true. We have focused on making TurboPack work with Next.js, and it currently only works with Next.js, but our plan always was — and still is — to make it a general-purpose bundler, and then it will be easier to integrate with other frameworks.

Okay, so at least for the future we can expect that? It's for the future, yeah. Nice. And of course, more people are interested, so they asked whether it will also be available for Next.js. Yeah, it's already available for Next.js. Persistent caching is not released yet — there's a secret flag to enable it — but we plan to release it in about a month. We're still working on its stability, and when it's considered stable enough, it will be released. Perfect, perfect. It's already feature-complete, I would say.

Clarifying TurboRepo and CI Feature Consideration

Short description:

Confusion between TurboRepo and TurboPack clarified: they are separate tools. A CI feature backed by a persistent caching cloud is under consideration. Memory usage in TurboPack for larger applications is a focus point, along with a comparison to ESBuild.

And, yeah, we also have a question: can I use TurboRepo Remote Cache to share the cache across my builds? That's a common problem — people confuse TurboRepo with TurboPack. They are basically two independent tools. I don't know about TurboRepo, so I can't answer this question, but don't confuse TurboRepo with TurboPack. It's a different tool, and it's basically not much related to the other. Yeah, the naming maybe is not helping us.

And Quentin is also asking: will this be a CI feature through a persistent caching cloud? Yeah, we're thinking about that. Currently it's disk-only, but there would be a benefit to downloading the build cache on demand from the cloud, and we might do that. It depends on how it would perform, but it's something we're considering. Okay. It seems everyone is already ahead of time with everything that you are building.

And the next question is: my poor GitHub Action runner has 4 GB of RAM, so memory usage is pretty high — is that a focus point besides speed? Yeah, for this benchmark we used a really large application that has higher memory usage — of course not because of persistent caching, but because the build is just bigger. And yes, it's a focus point: we currently have maybe too-high memory usage in TurboPack, and we're working on that. But four gigabytes might not be enough for a really big application. Okay. And how does it compare to ESBuild? Yeah, ESBuild is also a bundler. I think our focus is a little bit different.

Comparing TurboPack and ESBuild Cache Granularity

Short description:

Focus on incremental builds over raw build performance, unlike ESBuild. Emphasis on custom integration. ESBuild excels in raw speed compared to TurboPack. Detailed cache granularity with a token-based, almost AI-like approach. Considerations on build-time impact due to persistent cache fetching.

We focus more on incremental builds, while ESBuild focuses more on raw build performance. We also want to allow more custom integration, more custom stuff, and that's why our focus is different, as I explained. But ESBuild is a great tool. It's super fast — its raw speed is probably much faster than TurboPack's. We hope to be better on the incremental-build part. But it's also a great tool. Yeah.

Let's have some more questions before we have the next speaker. So what is the cache granularity — is it per file, per module, or per build? It's finer than per file. We cache on the function level of the implementation: at least per module, but usually a module involves 50 to 100 smaller functions that do things like parsing, reading files, and extracting dependencies. Each dependency has its own cache; resolving has its own cache. It's really granular — sometimes maybe even too granular. It's almost token-based. It seems like AI. And yeah.
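
As a sketch of what per-dependency granularity means (invented names, much simplified): each resolve request gets its own cache entry, so re-processing a module after an unrelated edit re-resolves nothing.

```rust
use std::collections::HashMap;

// Illustrative sketch: processing one module is split into many small cached
// steps, and each import gets its own resolve cache entry, so editing one
// import only re-resolves that one.
#[derive(Default)]
struct ResolveCache {
    entries: HashMap<(String, String), String>, // (from_module, request) -> resolved path
    misses: usize,
}

impl ResolveCache {
    fn resolve(&mut self, from: &str, request: &str) -> String {
        let key = (from.to_string(), request.to_string());
        if let Some(hit) = self.entries.get(&key) {
            return hit.clone();
        }
        self.misses += 1; // real resolution would walk node_modules here
        let resolved = format!("node_modules/{}/index.js", request);
        self.entries.insert(key, resolved.clone());
        resolved
    }
}

fn main() {
    let mut cache = ResolveCache::default();
    cache.resolve("module.js", "react");
    cache.resolve("module.js", "lodash");
    // Re-processing the module after an unrelated edit reuses both entries.
    cache.resolve("module.js", "react");
    cache.resolve("module.js", "lodash");
    assert_eq!(cache.misses, 2);
}
```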

Bobby is asking: can build time be higher because the persistent cache needs to be fetched every time? Yeah, it depends on the build. If it's a really fast build, then sometimes it's faster to just do the build instead of using persistent caching. But the build-cache download usually happens before the build — it can happen in parallel with git cloning and npm fetching — so you usually don't pay any time for the build-cache download. At least that's what I tested with Vercel deployments. But it also depends on your CI provider.

TurboPack Cache Handling and Comparison with NX

Short description:

Discussion on build cache download and upload process. Garbage collection concerns and comparison with other applications in terms of disk space usage. Comparison with NX for monorepo and details on the code base used for persistent caching bench.

It also depends on your CI provider: can it download in parallel, or does it have a blocking build-cache download step? The build-cache upload also usually happens after the deployment is finished — everything is already ready, and then we upload the build cache in the background. So you shouldn't take any hit for the build cache. Okay.

Yeah, let's have a couple more and then we can finish here. So when will garbage collection happen — should I be worried about filling up my disk with TurboPack's cache? Yeah, that's a good question. We don't have garbage collection for that yet. You might be worried about filling up a disk, but compared to other things, the cache is probably not the main offender. npm's node_modules probably fills up your disk more than the cache — especially if you haven't cleaned things up and there's a lot of node_modules lying around. That can also happen. Okay.

Bobby is also asking: how does TurboPack compare with Nx for monorepos? That's probably also a TurboRepo question — the same confusion — so I would skip that. What size code base was used for the persistent-caching benchmark? A pretty big one — one of the biggest we've found, I think 70,000 modules. Decently sized, and also pretty old, so probably a lot of legacy stuff and icon libraries in there. Pretty cool, actually. Well, I think we can leave it here. If you have any more questions, you can go to the speakers' spot and ask him in person. Yeah, thank you so much, Tobias. Thank you for having me. That was amazing.

Check out more articles and videos

We constantly curate articles and videos that might spark your interest, skill you up, or help you build a stellar career

The Core of Turbopack Explained (Live Coding)
JSNation 2023
29 min
Tobias Koppers introduces TurboPack and TurboEngine, addressing the limitations of Webpack. He demonstrates live coding to showcase the optimization of cache validation and build efficiency. The talk covers adding logging and memoization, optimizing execution and tracking dependencies, implementing invalidation and watchers, and storing and deleting invalidators. It also discusses incremental compilation, integration with other monorepo tools, error display, and the possibility of a plugin system for TurboPack. Lastly, a comparison with Bun's bundler is mentioned.
Rome, a Modern Toolchain!
JSNation 2023
31 min
Rome is a toolchain built in Rust that aims to replace multiple tools and provide high-quality diagnostics for code maintenance. It simplifies tool interactions by performing all operations once, generating a shared structure for all tools. Rome offers a customizable format experience with a stable formatter and a linter with over 150 rules. It integrates with VCS and VLSP, supports error-resilient parsing, and has exciting plans for the future, including the ability to create JavaScript plugins. Rome aims to be a top-notch toolchain and welcomes community input to improve its work.
Server Components with Bun
Node Congress 2023
7 min
Bun is a modern JavaScript runtime environment that combines a bundler, transpiler, package manager, and runtime. It offers faster installation of NPM packages and execution of package.json scripts. Bun introduces a new JavaScript and TypeScript bundler with built-in support for server components, enabling easy RPC with the client. This allows for code splitting and running code that streamingly renders React or any other library from the server and mixes it with client code, resulting in less JavaScript sent to the client.
Challenges for Incremental Production Optimizations
JSNation 2024
32 min
TurboPack is a new bundler similar to Webpack, focusing on incremental builds to make them as fast as possible. Challenges in production builds include persistent caching, incremental algorithms, and optimizing export usage. The compilation process can be split into parsing and transforming modules, and chunking the module graph. TurboPack aims to achieve faster production builds through incremental optimization and efficiency. Collaboration and compatibility with other ecosystems are being considered, along with the design of a plugin interface and tree-shaking optimization.
Parcel 2: the Automagical Bundler
DevOps.js Conf 2021
8 min
Parcel 2 is a ground-up rewrite of Parcel 1, a fast and scalable zero-configuration web application bundler used by large companies like Atlassian and Adobe. It offers a zero-config approach with good defaults, making it production-ready out of the box. The new features include a revamped plugin system, a configuration file, transformers for file conversion, optimizers for code compression, target support for different browsers, diagnostics for error debugging, and named pipelines for data and JavaScript in different formats. Parcel 2 also supports different import scenarios, such as importing JSON files with named pipelines and using query parameters for image optimization. It includes various performance improvements, stable caches, optimized data structures, enhanced code splitting and bundling, improved scope hosting, and better support for monorepos and libraries. A React example is provided to showcase the simplicity of Parcel and how to use it with React.
Bundlers: A Deep Dive into Modern JavaScript Build Tools
JSNation 2025JSNation 2025
20 min
Edoardo, DevRel at Storyblok, explains the importance of JavaScript bundlers and discusses Storyblok's migration to Vite. Challenges with old JavaScript applications are illustrated, emphasizing issues with global variables and dependency control. Optimizing JavaScript module loading through ES modules is discussed, highlighting browser compatibility and performance concerns. The process of creating and structuring JavaScript bundles is detailed, focusing on dependency graphs and module organization. Techniques for managing bundle execution, utilizing abstract syntax trees for code parsing, and implementing optimization strategies are explored, with a specific emphasis on Vite, hot module replacement, and development enhancements.