Rspack Was Recently Awarded Breakthrough of the Year at JSNation


For those who have not heard of Rspack, it's a 1:1 port of Webpack to Rust.

But did you know that Rspack is actually the fourth iteration of native bundlers our team has designed, and that it originally started out as a plugin for esbuild? Along the way, we rewrote esbuild and Rollup in Rust, took apart Parcel to understand it better, and reviewed every bundler on the market before finally picking the Webpack API design for the project as it is known today.


In this talk I will share the behind the scenes of its creation, why we built it, what the future for rspack looks like, and our own experience + business data we have gathered with it in supermassive projects at ByteDance.

This talk was presented at JSNation US 2024.


Zack Jackson
31 min
18 Nov, 2024

Video Summary and Transcription
Today's Talk discussed RSPack, a Rust rewrite of Webpack that won Breakthrough of the Year at JS Nation. RSPack was developed at ByteDance to address the complexities of their WebInfra and provide a suitable bundler for native and exotic environments. The development of RSPack focused on improving on ES Build's capabilities, reducing CI wait times, and maximizing product velocity. ESBuild and Webpack had various weaknesses and limitations, leading to the decision to create a new bundler architecture. RSPack aimed to be language-agnostic and prioritize artifact integrity, performance, productivity, and business value. API design emphasized a balance between performance and versatility, tailored for larger businesses. RSPack achieved significant reductions in cloud costs and build times, and future ideas include TypeScript optimization and remote caching. For smaller companies, considerations when choosing a bundler include performance, chunking, reliability, and user experience.

1. Introduction to RSPack

Short description:

Today, I will talk about RSPack, a Rust rewrite of Webpack that won Breakthrough of the Year at JS Nation. I will discuss the rationale behind its creation.

Alright, well, thank you for coming to my talk. Okay, so first, does anybody know what RSPack is? I can't see anybody, so... Okay, cool. So, somebody does. For those of you that don't know what it is, it's essentially a Rust rewrite of Webpack that we ended up doing. So, really, what I'm going to talk to you about today: RSPack actually won Breakthrough of the Year at JSNation, the Amsterdam conference, so I thought this would be a fantastic opportunity to speak a little bit about the rationale behind creating it.

2. WebInfra at ByteDance and the Origins of RSPack

Short description:

WebInfra at ByteDance supports a vast set of tools and frameworks to unify the DX of everything into central solutions. The complexity and scope of responsibility are much larger than in open-source projects. On-call obligations are high, with quick response and resolution times. Writing a custom bundler became necessary due to these complexities and the need for a solution that works well in native and exotic environments. RSPack originated from Lynx Speedy, an in-house cross-platform development solution.

So, just a little bit of context into what WebInfra at ByteDance looks like in terms of its scale. One, we have to maintain a very vast set of tools, frameworks, and systems that support all the business functions of the company. We've got roughly over 1,000 unique apps across web and native. We've got tens of thousands of front-ends, and we do tens of thousands of production deployments every week. So, there's quite a lot of throughput in general and a lot of strain on anything to do with infrastructure. And our job, really, is to unify the DX of practically everything into central solutions that can work for various functions of the business. So, that's a bit of context for what we do.

So, one of the things is there are a lot of complexities in the underlying build tools and in the kind of stacks that we have to work with. What we discovered along the way is that operations, specifically open-source versus internal operations, do have some key differences. The main one we've seen is that the complexity of the business we have to support is usually larger than what you would see in your typical open-source project. The scope of responsibility is also quite different. In an open-source project you would usually have a single point of focus. If we think of Next.js, it's primarily for web and SSR, or React Native, which obviously focuses on native apps. But we need to support a really vast ecosystem that can power various functions of the business, and we're looking for tools that really do that well as a whole. Probably the biggest difference, though, is the on-call obligations. Typically, in open-source, there is no on-call obligation: you open a Git issue, and it might get responded to, so on and so forth. But for us on the Infra team, considering we support, I think, 150,000 employees currently, we have really high expectations on getting these on-calls closed: most of them are solved within 24 hours, and almost anything, anywhere, is solved within one week. So, that's quite a high demand and pressure.

So, why did we end up looking at writing a custom bundler? One, things were getting complicated. Lots of these various tools, ecosystems, and products, designed to support very different functions of the business, from native to embedded systems to custom hardware development, web, hybrid web, and so on, and it continues to grow. Finding a suitable solution was quite tricky, especially if we're looking at things that work really well with native, that are quick and fast, but also work in very exotic environments. I don't know if anybody has ever worked with a WeChat app or something like that, but they are non-JavaScript-based, so you're kind of writing JSON modules, and you would need a compiler that can emit, essentially, JSON instead of emitting JavaScript. So, really exotic environments we also had to try and work within. Obviously, what we're looking for is something that's going to support the business really well, something that's stable, efficient, and able to scale, especially as the number of products under the portfolio continues to grow. And a big challenge we had seen is a lot of our on-call obligations usually went to debugging problems in the build, so trying to reduce that on-call pressure was very critical for us. So, how did RSPack actually get started? It started with something we call Lynx Speedy, which is essentially our in-house cross-platform development solution. I think we're planning to open-source it in early 2025. But the origins of RSPack really started here with Lynx. Initially, it was just called Speedy, and it was basically an ES Build plugin, and that was it.

3. The Development of RSPack and CI Optimization

Short description:

Speedy worked well for cross-platform apps, but had limitations when used with web-slash-native frameworks. RSPack was developed in Rust to improve on ES Build's capabilities, fixing bundle splitting and hot module reloading issues while adding CommonJS support. Extensive market research and analysis of other bundlers were conducted, along with a focus on reducing CI wait times and maximizing product velocity.

And it worked really well at making our cross-platform apps build very quickly. These apps didn't have a whole lot of constraints at the time because of how Lynx was built. We didn't support something like chunk splitting at the time, so it was easy to just send a big payload all in one go, and Speedy did a really good job. However, when we started trying to use Speedy on something like PIA, which is our kind of web-slash-native framework rather than full-native, we would run into big issues, mostly because ES Build would create too many granular chunks, and waterfall issues started to impact the production optimization that the user would experience. We would also see challenges with HMR, as the average code base is around 50,000 modules or more. So it became a little bit challenging.

So what we ended up doing is looking at, okay, since this isn't working, maybe we need to look at a more custom solution. So it went from Speedy and really shifted into what RSPack looks like today: changing it over, rewriting it in Rust. I think one of the main motivations behind Rust was just, back in, I think, 2018, 2019, it was, you know, cool, and we wanted to try a new language, and Rust seemed interesting. So we decided to write it in Rust. A lot of the development was really looking at, okay, ES Build did a lot of things that we really liked, but there were also some aspects that we would like to improve. So we were looking at a design initially around what a mashup of Rollup's and ES Build's capabilities would look like. If you actually look at our legacy branch on the repo, you will see the original design, which is very ES Build-based. And so, really, the limitation we were trying to go after was not to rewrite Webpack, but just to fix the problems that we had in ES Build. But as iterations went on, the main things we were trying to get out of it were, one, fix the bundle splitting; two, address the hot module reloading issues; and three, something that Rollup didn't have, the CommonJS support. So we wanted to make sure that we had these three critical aspects available to us.

In the process, before we just decided, hey, let's write something in Rust, we did quite a lot of market research. Our Infra team basically took every bundler that has been written on the market and tore them all down; we rewrote quite a few of them in the process, looking at how they worked, how they're architected, pros and cons, basically everything that we would need to know. We spent time tearing down everything, even looking into things like Rust Analyzer and how it works, and how TypeScript's TS Checker works, to get a very good understanding of compiler architecture in general. One thing we also noticed, because we obviously try to make data-informed decisions when working on our infrastructure, was what I refer to as product velocity latency, correlated to the amount of time we spend waiting on CI. What was quite interesting is, if your CI time took above 35 minutes or so, you're looking at about a 16-hour latency to merge, versus if it was five minutes or under, we would see most merges happen within the next 20 minutes or so. That's quite a drastic change in terms of product velocity, how quickly things can actually move through, because what I'm guessing is, if it takes 40 minutes, you get distracted, you go do something else, and then you forget about it. And this latency between opening the PR and getting it merged generally had a really strong correlation to how long the build would take. What we'd also seen, though, is once you get into that five-minute zone for a prod build, the ROI pretty much dropped off. We found that a one-minute build versus, say, a four-minute build had virtually no difference in value.
So this also gave us a good understanding of when you start reaching the law of diminishing returns, at least through this dimension. So anyway, like I said, we tore down pretty much every bundler that we could get our hands on and rewrote a couple of them. And in the process, we identified things that we really liked and things that maybe we didn't like so much, or just some weaknesses we had found. Rollup was one; again, we were inspired by it quite early on. One of the aspects really was the very clean, minimalistic bundle output that it creates; it's quite a streamlined experience.

4. ESBuild Weaknesses and Limitations

Short description:

ESBuild had good tree shaking and dead code elimination, but struggled with CommonJS support. It was slower than Webpack, lacked HMR support, and had limitations in client-side performance and chunk control. Module federation implementation was challenging due to ESBuild's constraints.

It has really good tree shaking and dead code elimination; things like that it did a very good job at. But the challenge we would run into would be something like the CommonJS support. Actually, most bundlers do struggle with this. I think ESBuild is one of the exceptions, along with Webpack and Parcel, but a lot of build tools are ESM only.

So a big challenge we would run into is if you have anything that's CommonJS, you have to take your non-strict CommonJS and convert it into strict ESM. And this caused various challenges in interop and just other problems in general with the constraint of everything having to be first-class ESM. Another big weakness really is it ended up being slower than Webpack, mostly because Webpack at least had a cache, and without a cache, building a 50,000-module application, you're still building it in JavaScript. There was no real support for HMR, which was definitely painful, and the watch mode performance wasn't meeting the expectations that the business wanted out of the tool.
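A minimal sketch of that non-strict-to-strict interop problem. The wrapper below mirrors the common `__esModule` heuristic bundlers emit; it is illustrative, not Rspack's or esbuild's actual code:

```javascript
// A CommonJS module can attach exports dynamically, so an ESM-only
// bundler has to guess at build time which names exist and whether
// `module.exports` itself should become the default export.

// A CommonJS module whose export name is computed at runtime,
// opaque to static analysis:
const cjsModule = { exports: {} };
const exportName = ['gre', 'et'].join('');
cjsModule.exports[exportName] = (name) => `hi ${name}`;

// The kind of interop wrapper bundlers emit when converting CJS:
function interopDefault(mod) {
  // Transpiled-from-ESM modules carry an __esModule marker; plain CJS
  // gets wrapped so the whole exports object becomes `default`.
  return mod && mod.__esModule ? mod : { default: mod };
}

const ns = interopDefault(cjsModule.exports);
// Named imports like `import { greet } from 'pkg'` now depend on the
// bundler re-exposing properties of `default`, which is exactly where
// interop quirks show up.
console.log(typeof ns.default.greet); // "function"
```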

Looking at ESBuild, the strengths it did have were the CommonJS support and a really exceptional build speed, but the API was really restrictive, as was the plug-in ecosystem. We experienced a lot of challenges with things like client-side performance, creating very long waterfalls of chunks, or running into various edge cases with tree-shaking of CSS. There was not much control over the actual chunking process, which seems like something small, but when you're dealing with an 800-megabyte to one-gigabyte optimized build output, it becomes a really big constraint. And again, the HMR was also very painful to work with, causing a full-page reload or something like that. Probably one of the biggest motivating factors was that we also use Module Federation quite heavily; I think we probably have the largest implementation of it in the world. So trying to implement those types of advanced capabilities was very difficult given the constraints of ESBuild.
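To make the chunk-control point concrete, this is a hedged sketch of the kind of `splitChunks` configuration Webpack exposes and esbuild's API did not; the cache-group names and thresholds are illustrative, not from the talk:

```javascript
// webpack.config.js (sketch): explicit control over how the bundle is
// split, capping the request waterfall and pinning vendors into named
// cache groups.
module.exports = {
  optimization: {
    splitChunks: {
      chunks: 'all',
      maxInitialRequests: 20, // cap how deep the initial request waterfall goes
      minSize: 20000,         // don't emit chunks below ~20 KB
      cacheGroups: {
        framework: {
          test: /[\\/]node_modules[\\/](react|react-dom)[\\/]/,
          name: 'framework',
          priority: 40,       // wins over the generic vendor group below
        },
        vendors: {
          test: /[\\/]node_modules[\\/]/,
          name: 'vendors',
          priority: 10,
        },
      },
    },
  },
};
```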

5. Webpack Limitations and Dual-Engine Challenges

Short description:

Webpack had the black-box effect, lacked visibility, and caused on-call pressure. Improving build speed with cache and thread loaders had limited success. Exploring a dual-engine approach with Vite faced challenges with plugin interop, performance issues in development mode, and inconsistency. Rollup also had performance issues and lacked persistent caching. The chosen architecture for RSPack was based on the need for a battle-tested solution, despite the risk of developing new code and architecture.

That said, Webpack wasn't all that great either in certain aspects. And if anybody's used Webpack, I'm sure you've felt some of the pains of it. One of the big issues really is the whole black-box effect. If something goes wrong, it's very hard to understand what or why. Why is the bundle getting bloated? There wasn't a whole lot of visibility. It also caused a lot of on-call pressure; like I said, a lot of our on-call time had been dedicated to build-related issues. We were able to at least solve the debugging problems by releasing Rsdoctor, but that really didn't help us with our performance constraints.

So, again, handling a really large project was very hard to do. We tried quite a few things to improve just the build speed of Webpack: cache-loader, thread-loader. At one point I tried replacing the parser with something based in Rust, but ultimately there wasn't a lot of success in that approach. With the cache enabled, one challenge we'd run into as well is that if you try to persist it on CI or things like that, you would get these kinds of mismatches where it wouldn't actually rebuild correctly. Sometimes the cache kind of gets stuck.
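For context, this is roughly the Webpack 5 persistent filesystem cache being described. The options follow Webpack's documented config, but treat it as a sketch rather than the team's actual setup; the CI mismatches mentioned typically trace back to inputs the cache cannot track:

```javascript
// webpack.config.js (sketch): persist the module/compilation cache to
// disk so warm builds skip re-parsing unchanged modules.
const path = require('path');

module.exports = {
  cache: {
    type: 'filesystem',
    cacheDirectory: path.resolve(__dirname, '.webpack-cache'),
    buildDependencies: {
      // Invalidate the cache whenever the config itself changes:
      config: [__filename],
    },
  },
};
```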

So another approach that we ended up looking at at some point in this process, I think in the 2019, 2020-ish era, was Vite. Vite was obviously becoming quite popular, and that was something that we had looked into. Could Vite work? Could a dual-engine approach work, using the stability of Webpack's build in production and the faster development experience of Vite in development? But the big problems we'd run into are, I think, the ones you would expect. Any dual-engine build system struggles with plugin interop, which makes it more complicated to maintain, and we would run into various performance issues in development mode because of the unbundled approach.

Some of the largest ones we had seen was a hot reload taking 10 minutes, just because you're downloading 15,000 or 20,000 modules in the browser every time it has to do a refresh. So there are various constraints there that I think larger projects have experienced. And then with Rollup, again, we had the performance issues and the lack of persistent caching, which created a lot of constraints. I think the biggest challenge we've seen is really the inconsistency of a dual-engine approach. If you're using something in development different from production, it's very hard to ensure that they're going to be consistent and that what you get here is what you expect on the other end. So the evolution that we went through, and why we had chosen Webpack for the architecture of RSPack, really came down to: we needed something that we knew worked, that was guaranteed to work, and that was battle-tested. A lot of the development we had done along this process, we had ended up basically rebuilding things that Webpack already had. The only difference, though, is we didn't have a decade's worth of tests. It was all net-new code and new architecture that these bundlers in their original forms didn't support. So we were kind of flying blind, and that obviously introduces a lot of risk.

6. RSPack: Rust-Based and Language-Agnostic Approach

Short description:

The idea behind RSPack was to move Webpack into Rust, aligning with other Rust-based projects like Ploy and Rolldown. The main priorities were artifact integrity, performance, productivity, and business value. The goal was to make it fast, make it work, and then make it faster. RSPack also adopted a language-agnostic approach, inspired by Webpack and Parcel, providing scalability and the ability to extend the compiler to support additional languages.

So if we're just ending up recreating Webpack anyway, what if we just took Webpack and moved it into Rust instead? And so that was kind of the idea that we had, you know, gone for. So that's kind of how we ended up into what RSPack more or less looks like today.

Now, does anybody know what these have in common? You can just shout it out if you do. Okay. So the main things that they have in common: one, they're all Rust-based. But two, their origins can actually be traced back to ByteDance. Ploy was actually one of the competing projects with the RSPack team. It's now known as Farm, and the author of Ploy went and released it independently, while the rest of the Ploy team merged into RSPack. Rolldown was also started independently as a side project by one of the members, who has since gone off and joined the Vite team, taking Rolldown to the Vite side.

So, you know, some of the things really for our priorities of what we were looking for is, above all else, artifact integrity. We need to guarantee that this is always going to work, that we're not going to get any surprises in here considering the risk that the business has and the size of the apps that we support. There's, you know, billions of users using these, hundreds of thousands of employees working on these applications. So we needed something that's going to really help us get our performance in a good place, think about the cost of change management, think about the productivity improvements, and, you know, essentially looking at business-oriented metrics for how this bundler should work to really drive the business value above all else. And long term, you know, making reliable, safe bets on what we're picking on. One of the challenges I think we've learned with Speedy is when we first started building Speedy, ES Build had, I think, quite recently come out and it had a really, really active development cycle. So we thought it was a pretty good bet that a lot of things like HMR or chunk splitting or things like that would most likely get resolved. But I think as time went on, maybe priorities kind of shifted there, but some of that didn't happen. So this was, again, one of the big things we took into account is, you know, what are the risks when you are trying to go for, say, a community-oriented project? And again, on-call pressure, things like that, it just does create a different level of expectations. So really, our kind of goal here was make it fast, make it work, and then we'll make it faster.

So one other aspect that we really admired about Webpack is its language-agnostic handling. Like I said, a lot of tools are usually ESM-only: everything gets converted into ESM, and that's how the parser works and how you get whatever you need. But again, there are complications around the non-strict to strict conversion of CommonJS or other module formats, and if you're dealing with mixed module systems, it also creates additional complexity as quirks start to show up. And, generally, we just had problems with the scalability of only being able to use ESM, converting everything to ESM and then back out into something else. One thing to call out here is that Parcel probably doesn't get enough credit for this, but Parcel also takes a language-agnostic approach. I believe the only reason that RSPack didn't model itself off Parcel's design was that we just happened to know Webpack's APIs better. It really could have been either, because they have a really strong architecture under the hood. The main thing with this language-agnostic approach is that it gives us a lot of room to extend the compiler to do more. We're not just stuck with everything having to turn into ESM; we could bring in, say, CSS as first-class language support, or TypeScript as first-class language support, and not have to rely on a loader or a separate transform step that the bundler wouldn't necessarily know about.
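As a hedged illustration of what first-class language handling looks like in practice, an Rspack-style config can hand TypeScript to the compiler's builtin SWC transform and treat CSS as a native module type. Option names follow Rspack's published config; treat the exact flags as illustrative:

```javascript
// rspack.config.js (sketch): TypeScript and CSS handled inside the
// native compiler rather than via JavaScript-side loader chains.
module.exports = {
  module: {
    rules: [
      {
        test: /\.ts$/,
        // Runs in the native compiler, not in Node:
        loader: 'builtin:swc-loader',
        options: {
          jsc: { parser: { syntax: 'typescript' } },
        },
        type: 'javascript/auto',
      },
      {
        test: /\.css$/,
        // CSS as a first-class module type, no css-loader needed:
        type: 'css',
      },
    ],
  },
};
```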

7. API Design and Balancing Performance

Short description:

API design focused on a robust JavaScript plugin ecosystem, performance, and composability. The approach was to strike a balance between performance and versatility, ensuring a wide support tier. Tailored for business and enterprise, the goal was to find a solution that works well for the problems faced by larger businesses.

So again, API design, what were the key features we're looking for? Something that had a really robust JavaScript plugin ecosystem. This really helps us with, like, the business side of things, making sure it can do anything it needs to do. Performance, obviously, making sure it's efficient, that it can perform well. And, like, the composability of it, which is another major one, making sure that whatever we're going to build today is actually going to be able to do whatever the business and product teams want it to do in the future. And those are some pretty crazy asks, ideas, things like that.

So we were trying to build something that would guarantee as wide a support tier as we can get, to cover as much of the business under one solution. The approach we ended up taking was to strike a balance between performance and versatility. It should be fast enough, and we know from the data that once you get into that five-minute-and-under range, the ROI of speed begins to drop off, assuming your HMR is still quick. So, again: how much can it do versus how fast is it? And what are you giving up by chasing speed, in terms of using all that power in Rust to do more complicated stuff, more optimization, to take advantage of the horsepower? Trying to find that good balance was really important to us.

And obviously everything here has been very tailored for business and enterprise. The users, use cases, stuff like that. Like, again, our own use case, we need to support, you know, billions of users working on this thing with various places of the world, various kind of technologies that they have, various environments. So trying to find something that really worked well to the kind of problems that larger businesses have was a primary, you know, goal for us. So anyway, we did end up getting there and we have rolled it out quite broadly. And we also have seen some kind of return on investment from doing all of this.

8. Cloud Cost Reduction and Future Ideas

Short description:

Significant reductions in cloud costs and build times. Substantial savings in CI hours and financial benefits. Improved product latency and bandwidth savings. Future ideas include TypeScript optimization, remote caching, and export-level code splitting.

So, some general things that we had seen, which I really liked: probably the biggest one was an 80% reduction in cloud costs across the board. When we rolled this thing out en masse, most of our cloud costs, our CI costs, were reduced by 80%, which was a staggering number at a company of that size. On average, we would see about a 30-minute reduction in our build times. And CI-wise, we would see, I think, about 200 CI hours saved per project per day.

So each project would save, let's say, 80,000 hours a year in CI hours. And for a developer working in development mode locally, assuming you might have to switch a branch here and there, we put it as five restarts of your dev server, and worked out that you're looking at roughly seven hours a week saved per developer, or roughly 400 hours a year per dev. On the financial side, we also saw some great numbers. For our own use, we've seen about $200 million a year in savings from the project, for what we're putting into it versus what we're getting out of it.
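As a quick sanity check on those figures, using only the talk's own approximate numbers:

```javascript
// Back-of-the-envelope check of the savings quoted above.
// 200 CI hours saved per project per day, over a year:
const ciHoursSavedPerDay = 200;
const ciHoursSavedPerYear = ciHoursSavedPerDay * 365;
console.log(ciHoursSavedPerYear); // 73000, which the talk rounds to ~80,000

// Seven hours a week saved per developer, over a year:
const devHoursSavedPerWeek = 7;
const devHoursSavedPerYear = devHoursSavedPerWeek * 52;
console.log(devHoursSavedPerYear); // 364, i.e. roughly 400 hours per dev
```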

We've had outside users who have also adopted this, and they've been kind enough to share their own financial information about the reductions; for one of them, it's $32 million a year. And what I really found impressive, considering we were looking at that median time to merge, which we treated as product latency (between creating something and getting it to production, what's the latency?): most of our builds are really massive, so we see CI taking upwards of 35 minutes. So, per year, just on a single repo, we were able to recoup over 1.6 million hours in product latency.

So that's the time from opening the PR to getting it into production. And that's just one of the repos among the thousands that are there. So there was really a lot of value that we ended up seeing from it, which is obviously great, since our theories had kind of worked out. Another aspect where we've seen a lot of return is actually the bandwidth savings, since a lot of what we've focused the project on is really good optimization, really good chunking, intelligent methods for how to split the app up, instead of Webpack's super verbose solutions, trying to refine them a little bit more.

And we've had users report up to a 50% savings in egress bandwidth on their infrastructure, which for them also translated into a couple of billion dollars a year, which, again, was quite nice to see. So just to wrap it up here, some of the future ideas that we have been looking at with RSPack: again, that language-agnostic approach is going to be very useful for TypeScript. We're looking at making TypeScript a first-class citizen, which would allow us to do things like link-time optimizations: actually being able to use all the typing information to further optimize, tree shake, and remove dead code on, say, private methods that aren't exported, things that usually get lost in the transpilation process.
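A small, purely illustrative example of the kind of dead code such type-aware optimization could remove; the class and method names here are hypothetical:

```javascript
// After types are erased, a bundler can't cheaply prove that
// `internalHelper` is unreachable from outside the class. With the
// TypeScript information (the method was declared `private` and has no
// internal call sites), a type-aware optimizer could drop it from the
// artifact entirely.
class ApiClient {
  fetchUser(id) {
    return { id, name: 'demo' };
  }

  // `private` in the original TypeScript source and never called:
  // a candidate for type-informed dead code elimination.
  internalHelper() {
    return 'never used';
  }
}

const user = new ApiClient().fetchUser(1);
console.log(user.name); // "demo"
```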

There is the possibility of looking into integration of type checking into the bundler, or creating something like a language server, since the differences between, say, bundling and type checking are actually very few; there's a lot of overlap in how they're built. Some other things we've been looking at, you've probably seen some of these features spoken about for TurboPack. One of them would be remote caching at the function level. That is something we're planning to open source and release for you to self-host for RSPack, probably in the next quarter or two; I can't remember the exact timeline, but it'll essentially mean that anybody who's building your application, you can recycle their caches and have a distributed remote cache that everybody can take advantage of, free of charge. And another big one would be export-level code splitting.

Usually when you split your application right now, you're splitting it based on the module: the whole file, with whatever exports are in there, can be sharded off. But what we're looking at doing is actually export-level splitting, being able to take an export and relocate it based on how it's used, which will give us a lot more cross-module optimization and just better output of our artifacts, smaller payloads.
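A conceptual sketch of the difference, with hypothetical module and export names:

```javascript
// Module-level vs export-level splitting. With module-level splitting,
// importing `formatDate` drags `renderChart` into the same chunk
// because they share a file; export-level splitting could relocate
// `renderChart` into the chunk of its only (lazy) consumer.

// utils.js, conceptually: one module, two exports with very different usage.
const utils = {
  formatDate: (d) => d.toISOString().slice(0, 10), // used on every page
  renderChart: (data) => data.map((x) => x * 2),   // used behind one lazy route
};

// Today: `import { formatDate } from './utils'` ships both functions.
// Planned: `renderChart` moves out, shrinking the shared chunk.
console.log(utils.formatDate(new Date('2024-11-18'))); // "2024-11-18"
```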

9. RSPack v2 and Choosing a Bundler

Short description:

Looking toward RSPack v2, we experimented with dropping the Webpack bindings and using the Unplugin API instead. The build times were 100 times faster. RSPack and TurboPack have many similarities, but TurboPack has a different hot reloading system. RSPack plans to align with TurboPack's architecture. When choosing a bundler for smaller companies, consider performance, chunking, comfort, reliability, safety, problem-solving speed, and user experience.

And then looking toward RSPack v2, what we've discovered is that while it's quite fast, it could definitely be a lot faster. One of the experiments we ran was: what if we drop the Webpack bindings and instead use, I think it was Unplugin, the SDK we've been working on that essentially lets you roll your own compiler? We tried using the Unplugin API as the main bindings for RSPack. In doing so, we had about one-tenth of the capability, just because the API surface is obviously smaller, but we also saw that the build times were about 100 times faster. So it was very interesting to see the differences there.
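For readers unfamiliar with the Unplugin style of plugin, here's a minimal self-contained sketch of the shape involved. Everything here is hypothetical illustration: the real `unplugin` package's `createUnplugin` factory produces bindings for multiple bundlers (including RSPack), while this sketch only models the hook object so it runs standalone.

```typescript
// Minimal model of an Unplugin-style hook object (hypothetical names).
interface UnpluginHooks {
  name: string;
  transform?: (code: string, id: string) => { code: string } | null;
}

// A banner plugin: prepends a comment to every JavaScript module.
function createBannerPlugin(banner: string): UnpluginHooks {
  return {
    name: "banner-plugin",
    transform(code, id) {
      if (!id.endsWith(".js")) return null; // leave non-JS assets untouched
      return { code: `// ${banner}\n${code}` };
    },
  };
}
```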

So anyway, thank you for coming. If you have any questions, you can always find me around, and if you want to see more about RSPack, just scan the QR code and it'll take you to the landing page.

We've got the first question right here: people want to know how RSPack and TurboPack compare. I would say they have a lot in common. If you look at their underlying output, you'll see both have this concept of a Webpack-style runtime. In Webpack, the runtime methods hang off the Webpack require function, while TurboPack passes an object that you destructure the methods off of, but they share a ton in terms of design. One of the largest differences, though, is TurboPack's hot reloading system: TurboPack works leaf-up, whereas Webpack's HMR asks what the parent node is and rebuilds down from there, so there are some speed differences in HMR. In general, though, as we move toward v2 we're planning to align with TurboPack's architecture, so whatever you expect there, you can probably expect coming out of RSPack as well.

That makes sense, thank you. All right, the next one, three votes: for a smaller company that doesn't have the scale problems of ByteDance, what are the most important things to consider when choosing a bundler, in your opinion? And this is the last question of the day. Okay, I'd say the most important thing is: what do you need the tool to do? Does performance really matter? Is chunking going to be an issue, which we've seen a lot of? And what are you comfortable in? Pick whatever you think is going to do everything you need it to do comfortably.
Obviously, these tools are catered more toward companies as they grow; as requirements grow, you've got near-limitless room to move. But in general, what I look for is: what's going to be reliable? What's going to be safe? What's going to get the problem solved quickly? And what's going to produce a strong user experience? Because it really doesn't matter how fast it is for me if I can't create a highly optimized output for the end user, because that's the revenue funnel. It's nice if it's fast for us developers, but we aren't the main revenue funnel. So those are the main things I look at. Sounds good; sounds like safety, reliability, and your use cases. Well, that's all the time we have. As a reminder, if you have more questions, you can corner him right outside and make sure he doesn't leave until he answers your questions. Thank you.
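To make the HMR direction discussed in the Q&A concrete, here's a conceptual sketch, with hypothetical names and no claim about either bundler's internals, of the model Webpack's `module.hot.accept` popularized: an update bubbles from the changed module up through its parents until one of them accepts it.

```typescript
// Conceptual module graph for HMR propagation (hypothetical structure).
interface Module {
  id: string;
  parents: string[];
  accepts: boolean; // did this module register a hot-update handler?
}

// Walk upward from the changed module; every module on the path is
// re-executed, and bubbling stops at the first accepting ancestor.
function affectedByUpdate(graph: Map<string, Module>, changedId: string): string[] {
  const affected: string[] = [];
  const queue = [changedId];
  const seen = new Set<string>();
  while (queue.length) {
    const id = queue.shift()!;
    if (seen.has(id)) continue;
    seen.add(id);
    affected.push(id);
    const mod = graph.get(id)!;
    if (mod.accepts) continue; // update handled here; stop bubbling
    queue.push(...mod.parents); // otherwise propagate to parents
  }
  return affected;
}
```

A leaf-up system starts from the same changed leaf but tracks fine-grained dependents instead of walking the whole parent chain, which is where the speed differences come from.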

Check out more articles and videos

We constantly curate articles and videos that might spark your interest, skill you up, or help you build a stellar career

The Core of Turbopack Explained (Live Coding)
JSNation 2023
29 min
Tobias Koppers introduces TurboPack and TurboEngine, addressing the limitations of Webpack. He demonstrates live coding to showcase the optimization of cache validation and build efficiency. The talk covers adding logging and memoization, optimizing execution and tracking dependencies, implementing invalidation and a watcher, and storing and deleting invalidators. It also discusses incremental compilation, integration with other monorepo tools, error display, and the possibility of a plugin system for TurboPack. Lastly, a comparison with Bun's bundler is mentioned.
Rome, a Modern Toolchain!
JSNation 2023
31 min
Top Content
Rome is a toolchain built in Rust that aims to replace multiple tools and provide high-quality diagnostics for code maintenance. It simplifies tool interactions by performing all operations once, generating a shared structure for all tools. Rome offers a customizable format experience with a stable formatter and a linter with over 150 rules. It integrates with VCS and LSP, supports error-resilient parsing, and has exciting plans for the future, including the ability to create JavaScript plugins. Rome aims to be a top-notch toolchain and welcomes community input to improve its work.
Server Components with Bun
Node Congress 2023
7 min
Top Content
Bun is a modern JavaScript runtime environment that combines a bundler, transpiler, package manager, and runtime. It offers faster installation of NPM packages and execution of package.json scripts. Bun introduces a new JavaScript and TypeScript bundler with built-in support for server components, enabling easy RPC with the client. This allows for code splitting and running code that streamingly renders React or any other library from the server and mixes it with client code, resulting in less JavaScript sent to the client.
Challenges for Incremental Production Optimizations
JSNation 2024
32 min
TurboPack is a new bundler similar to Webpack, focusing on incremental builds to make them as fast as possible. Challenges in production builds include persistent caching, incremental algorithms, and optimizing export usage. The compilation process can be split into parsing and transforming modules, and chunking the module graph. TurboPack aims to achieve faster production builds through incremental optimization and efficiency. Collaboration and compatibility with other ecosystems are being considered, along with the design of a plugin interface and tree-shaking optimization.
Parcel 2: the Automagical Bundler
DevOps.js Conf 2021
8 min
Parcel 2 is a ground-up rewrite of Parcel 1, a fast and scalable zero-configuration web application bundler used by large companies like Atlassian and Adobe. It offers a zero-config approach with good defaults, making it production-ready out of the box. The new features include a revamped plugin system, a configuration file, transformers for file conversion, optimizers for code compression, target support for different browsers, diagnostics for error debugging, and named pipelines for data and JavaScript in different formats. Parcel 2 also supports different import scenarios, such as importing JSON files with named pipelines and using query parameters for image optimization. It includes various performance improvements, stable caches, optimized data structures, enhanced code splitting and bundling, improved scope hoisting, and better support for monorepos and libraries. A React example is provided to showcase the simplicity of Parcel and how to use it with React.
Bundlers: A Deep Dive into Modern JavaScript Build Tools
JSNation 2025
20 min
Edoardo, DevRel at Storyblok, explains the importance of JavaScript bundlers and discusses Storyblok's migration to Vite. Challenges with old JavaScript applications are illustrated, emphasizing issues with global variables and dependency control. Optimizing JavaScript module loading through ES modules is discussed, highlighting browser compatibility and performance concerns. The process of creating and structuring JavaScript bundles is detailed, focusing on dependency graphs and module organization. Techniques for managing bundle execution, utilizing abstract syntax trees for code parsing, and implementing optimization strategies are explored, with a specific emphasis on Vite, hot module replacement, and development enhancements.