The Oxc Project, and the Effect of Performance Engineering


The Oxidation Compiler project is creating a collection of high-performance JavaScript and TypeScript tools written in Rust. It offers fundamental components like parsers and resolvers for developers to utilize, along with CLI applications such as linters and formatters. This talk will introduce the project and explore the impact of a strong focus on performance engineering.

This talk was presented at JSNation 2024. Check out the latest edition of this JavaScript conference.

FAQ

The OXC project is a collection of JavaScript tools written in Rust. It includes a parser, a linter, and a module resolution tool called the resolver, each significantly faster than its counterparts. The project also supports bundlers like Rolldown and Rspack.

The OXC project was started and is led by Boshen Chen, a frontend configuration engineer with a decade of experience configuring JavaScript tools.

OXC's parser is currently three times faster than the SWC parser.

OXLint is 50 to 100 times faster than ESLint, depending on the number of CPU cores used.

The resolver is a module resolution tool in the OXC project that is 28 times faster than webpack's enhanced-resolve.

OXLint performs cross-file linting in parallel and shares Abstract Syntax Trees (ASTs). It can complete linting of large codebases, like the VS Code repository, in less than a second.

Boshen's focus on performance engineering began during lockdown, when he was stuck with a slow laptop. He aimed to improve the speed of JavaScript tools to make development more efficient.

The OXC project treats all performance issues as bugs. This includes not only program execution time but also compilation speed and continuous integration time, aiming to ensure everything runs as fast as possible.

OXC uses GitHub Actions to run tests on multiple platforms (Windows, Ubuntu, macOS, and WebAssembly), completing all jobs within three minutes. This fast feedback loop helps contributors work more efficiently.

Performance is considered a critical aspect of the OXC project, affecting not just speed but also correctness, testability, maintainability, reliability, and usability. High performance enables faster development cycles and better software quality.

The OXC project has been endorsed by notable figures such as Evan You, Joe Savona (lead of the React team at Meta), Eric Simons (CEO of StackBlitz), Miles (creator of moonrepo), and Jason Miller (Shopify DX, creator of Preact).

Boshen Chen
Boshen Chen
18 min
17 Jun, 2024

Video Summary and Transcription
The Talk discusses the JavaScript oxidation compiler (OXC) project and the impact of performance engineering. The OXC project consists of JavaScript tools written in Rust, including a parser, linter, and resolver, that are significantly faster than existing alternatives. Testimonials highlight the progress of the OXC project and the speed and effectiveness of the OXLint tool. The emphasis on performance in OXLint is demonstrated through cross-file linting and parallel processing. Performance improvements in the OXC project are achieved through benchmarking and can drive innovation in JavaScript infrastructure. The talk also discusses the need for faster website loading and the goal of creating a new minifier for better compression and performance in OXC.

1. Introduction to OXC Project

Short description:

In this talk, I'm going to talk about the JavaScript oxidation compiler and the effect of performance engineering on this project. For the last decade, my job has been a frontend configuration engineer. I began working closely with JavaScript tools written in Rust, including Rolldown, Rspack, Biome, and SWC. I started and became the lead of the OXC project, a collection of JavaScript tools written in Rust. The completed tools are the parser, linter, and resolver, each significantly faster than existing alternatives.

Hello, everyone. In this talk, I'm going to talk about the JavaScript oxidation compiler and the effect of performance engineering on this project. To start things off, my name is Boshen.

For the last decade, my job has been a frontend configuration engineer. I configured lots of JavaScript tools such as Grunt, Gulp, webpack, and you name it. As part of my interest in experimenting with new programming languages, I began working closely with JavaScript tools written in Rust. These projects include Rolldown, Rspack, Biome, and SWC.

In the meantime, I started and became the lead of the OXC project. So what is the OXC project? It is a collection of JavaScript tools written in Rust. Some parts are standalone, and some parts support other projects. The completed tools are the first three: the parser, which is currently three times faster than SWC; the linter, which is 50 to 100 times faster than ESLint, depending on the number of CPU cores you use; and a module resolution tool called the resolver, which is 28 times faster than webpack's enhanced-resolve.

2. OXC Project Progress and Testimonials

Short description:

Next three things in the works: formatter, transformer, and minifier. OXC also supports the Rolldown and Rspack bundlers. Testimonials from Evan You, Joe Savona, Eric Simons, Miles, and Jason Miller. OXLint's speed and effectiveness praised by users. Demonstration of OXLint's performance and diagnostics. Importance of bug-revealing rules and cross-file linting in OXLint.

The next three things are what we are working on right now: a formatter, which is going to be Prettier-compatible; a transformer, or transpiler, which is going to be Babel-compatible; and lastly, the minifier. And finally, OXC also supports the rising stars, the Rolldown and Rspack bundlers.

It's rather hard to show why OXC is the next big thing, so I will let these people do the talking for me. Evan You was amazed by the speed of OXLint, the linter for OXC. He ran it on the Vue repo and it took 50 milliseconds. Joe Savona, who is the lead of the React team at Meta, showed interest in the project and found it pleasing. Eric Simons, CEO of StackBlitz, also recognizes that it may be the next big thing. And Miles, the creator of moonrepo, was amazed by OXLint as well.

Lastly, we have Jason Miller, of Shopify DX and creator of Preact, who said the following: OXLint has been a massive win for us at Shopify. Our previous linting setup took 75 minutes to run, so we were fanning it out across 50-plus workers in CI. By comparison, OXLint takes around 10 seconds to lint the same codebase on a single worker, and the output is easier to interpret. We even caught a few bugs that were hidden or skipped by our old setup when we migrated. A few months later, I talked to Jason again, and he said they probably saved millions of dollars on infrastructure after they switched.

Let me quickly show a demonstration of the linter. Here we have OXLint running in the VS Code repo, and on my Mac Pro, it completed 4.8k files in 850 milliseconds with the default rules, utilizing all cores. Yes, the linter finished the whole VS Code repository in less than a second. Now let's look at the diagnostics, where for each rule we try to pinpoint the exact issue. Sometimes you don't even need to read the error message to understand what's wrong with the code. The first rule, no-constant-binary-expression, is my favorite rule, which has been in ESLint since version 8, for more than a year now. This rule could have caught so many bugs in the past year if it had been turned on by default in ESLint when it was first introduced. But unfortunately, turning on new rules is a breaking change, so it had to wait for a major version and was only turned on by default in version 9, which was released in April. In my opinion, one of the major tasks of a linter is to reveal hidden bugs, so such bug-revealing rules should be turned on by default as soon as possible to help people ship better code. Users of OXLint have been enjoying this rule since the beginning. For example, with this rule, it's not obvious that the nullish coalescing operator has a lower precedence, so to fix the code, you actually need parentheses here. And for these rules, if you just look at the red line, you'll probably understand what's wrong with the code and how to fix it. What I'm really excited about in OXLint today is that we can perform cross-file linting. This means we can implement rules from eslint-plugin-import, which is notoriously slow if you enable certain rules, such as the no-cycle rule on the left-hand side of the screen.
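As a hypothetical illustration (my own, not the example from the talk's slides) of the kind of bug no-constant-binary-expression reveals: `+` binds tighter than `??`, so the fallback below can never apply, because `undefined + 1` is `NaN`, which is not nullish:

```javascript
const count = undefined;

// Bug: this parses as `(count + 1) ?? 0`. Since `undefined + 1` is NaN,
// and NaN is not null/undefined, the `?? 0` fallback never applies.
const brokenTotal = count + 1 ?? 0; // NaN

// Fix: parenthesize so the fallback applies to `count` itself.
const fixedTotal = (count ?? 0) + 1; // 1
```

The left-hand side of `??` in the broken version can never be nullish, which is exactly the constant-expression pattern the rule flags.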

3. OXLint Performance and Project Principles

Short description:

OXLint's cross-file linting is done in parallel, with shared ASTs. Impressive performance shown in running the no-cycle rule on the VS Code repository and a large internal repository. Catch more bugs in seconds, saving time and resources. The project's focus on performance and the principle that all performance issues are considered bugs. Demonstration of fast runtime on the OXC repository's GitHub Actions page.

However, in OXLint, cross-file linting is done in parallel and ASTs are shared, so the only overhead we incur is waiting for dependent files to finish parsing. Once again, in the VS Code repository, we completed running no-cycle in less than a second, which would probably take a considerable amount of time with eslint-plugin-import. And I think the diagnostics are a little better, showing what the cycle is. If linting the VS Code repository in under one second is not impressive, I also stress-tested it on a large internal repository: it completed 122,000 files in 3.4 seconds. What this entails is that if you put all of your company's or your own projects' repositories side by side and run OXLint on the parent directory, you should be able to lint all your code in one go in a couple of seconds. This is great because with every OXLint upgrade, you will probably catch a few more bugs across your entire company or your projects in a few seconds, saving a lot of maintenance burden and infrastructure money.
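As a rough sketch of what a no-cycle style check does conceptually (my own illustration, not OXLint's actual implementation): walk the import graph depth-first and report any module that reappears on the current traversal path:

```javascript
// Hypothetical import graph: module -> modules it imports.
const imports = {
  'a.js': ['b.js'],
  'b.js': ['c.js'],
  'c.js': ['a.js'], // closes the cycle a -> b -> c -> a
  'd.js': [],
};

// Depth-first search that returns the first import cycle found, or null.
function findCycle(graph) {
  const visiting = new Set(); // modules on the current DFS path
  const done = new Set();     // fully explored, known cycle-free
  const path = [];

  function visit(mod) {
    if (visiting.has(mod)) {
      // The module is already on our path: slice out the cycle.
      return path.slice(path.indexOf(mod)).concat(mod);
    }
    if (done.has(mod)) return null;
    visiting.add(mod);
    path.push(mod);
    for (const dep of graph[mod] ?? []) {
      const cycle = visit(dep);
      if (cycle) return cycle;
    }
    path.pop();
    visiting.delete(mod);
    done.add(mod);
    return null;
  }

  for (const mod of Object.keys(graph)) {
    const cycle = visit(mod);
    if (cycle) return cycle;
  }
  return null;
}
```

The `done` set is what makes this linear in the size of the graph: each module is fully explored once, which is also why sharing parsed ASTs across files (rather than re-parsing per rule) matters so much for speed.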

So how did it get started? Well, my focus on performance started two years ago during lockdown. I was left with a super slow laptop, an Intel i5 with only 8 gigabytes of RAM. Everything was so slow. I had nothing else to do, but this laptop put my time perspective in slow motion. At that time, I discovered the Biome project, which was called Rome back then. It brought in the whole integrated-toolchain concept for frontend tools, but I was faced with two problems. One was the slow-computer problem, and the other was actually imposter syndrome. I didn't know what I was doing, so I ended up learning from scratch, starting with what an AST is and what a parser is. I wasn't even good at Rust at that time. I just kept on learning and persisted with adding more code. And many months later, when I was out of lockdown, I discovered this: I had somehow created the fastest JavaScript parser written in Rust, and a linter with unimaginable speed. Performance plays such a big role in the project that I eventually came up with this principle, after being inspired by a few community members: all performance issues are considered bugs in this project. Performance does not mean just program execution time, but also compilation speed and continuous integration time. Everything that feels slow should be fixed as a top priority.

Let me demonstrate these concepts with the OXC repository. This is the GitHub Actions page showing the runtime of all the jobs. It runs tests on Windows, Ubuntu, macOS, and WebAssembly, and then checks the health of the codebase. All of the jobs complete within three minutes or so, which I believe is faster than most of the larger Rust projects, and some of the larger JavaScript projects as well. I think that for a project to sustain itself well, it needs to provide contributors a very fast feedback loop while they are still unfamiliar with the project. And I have a story to tell.

4. Performance, Benchmarking, and Properties

Short description:

Working on the Rspack project, I improved the CI time from an hour to five minutes. Slow CI times cost human time, and making them faster may cost a lot of money. Using the benchmark setup with CodSpeed, we achieved mind-blowing performance improvements in the OXC project. Performance is often neglected but can buy needed properties like correctness, testability, maintainability, reliability, and usability. Our tools pass more than 99% of the test cases, making the parser ready for production. Performance can also drive innovation.

As you may know, I also work on the Rspack project. The CI time was around an hour when I joined, and it took me a month to get it down to 20 minutes. Then I had no more tricks up my sleeve. I tried to fix every code issue that existed, but eventually gave up because there was nothing else to fix, or it was way too hard to fix. So, finally, I convinced my managers to throw money at the problem, and we brought the CI time down to five minutes. Slow CI times cost human time, and making them faster may actually cost a lot of money.

We also have a setup for continuous benchmarking, using CodSpeed as the tool and platform. What CodSpeed does is record metrics such as CPU cycles using Valgrind, then compute a speed number and store it in a database for every commit. This makes benchmarking reliable by taking computer hardware out of the equation. This screenshot shows one of the most mind-blowing performance improvements in the history of the OXC project, which took our parser from two times faster than SWC to three times faster. Running all our benchmark test cases, of which there are more than a dozen, is done in five minutes. Yes, when you push code to the OXC repository, you will get this benchmark result within five minutes.

Performance is actually a very vague concept to work on. It is often neglected and is the last thing considered for a project. It's hard to convince people to work on performance as a priority until things become really slow, mostly because of the famous, out-of-context quote: premature optimization is the root of all evil. But this all changed for me when I started watching the MIT OpenCourseWare course Performance Engineering of Software Systems. Its first lecture states the following: performance is the currency of computing. Often, you can buy needed properties with performance. It's not immediately obvious what these needed properties are. Well, they are things like correctness, testability, maintainability, reliability, usability, all sorts of things that come with creating software. Performance is one of them.

To demonstrate this concept, here we have a conformance suite testing all of our tools against the test262, Babel, and TypeScript test suites. It essentially checks the behavior of our tools against our predecessors to make sure we conform to the same behavior. In this screenshot, we completed running more than 50k test cases in only two and a half minutes. I think test execution time becomes a bottleneck for large codebases when you get to this many test cases. But in OXC, since everything is fast, completing the tests is also fast. As you can see from the numbers in the screenshot, it currently passes more than 99% of the test cases. This means the parser is ready for production and you can use it. When I talk about buying properties with performance, I believe we can also buy innovation with performance.
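The conformance idea can be sketched in a few lines (an illustration of the concept, not OXC's actual harness): run each fixture through a parser and compare the outcome with the expected result recorded from a reference implementation. Here `new Function` stands in for a real parser API, and the fixtures are invented:

```javascript
// Stand-in "parser": does the engine accept this source as a function body?
function parses(src) {
  try { new Function(src); return true; } catch { return false; }
}

// Hypothetical fixtures: source text plus the outcome expected by a
// reference implementation (e.g. the engine behind test262 results).
const fixtures = [
  { src: 'const x = 1;', shouldParse: true },
  { src: 'let let = 2;', shouldParse: false }, // `let` is not a valid lexical binding name
  { src: 'function f(a, a) { "use strict"; }', shouldParse: false }, // dup params + strict body
];

// Compare our parser's verdict with the expected one, fixture by fixture.
function runConformance(cases) {
  let passed = 0;
  const failures = [];
  for (const { src, shouldParse } of cases) {
    if (parses(src) === shouldParse) passed += 1;
    else failures.push(src);
  }
  return { passed, total: cases.length, failures };
}
```

A real suite does the same loop over tens of thousands of fixtures, which is why raw per-file speed translates directly into a usable feedback loop.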

5. Performance as a Path to Innovation

Short description:

Working on JavaScript infrastructure, I found we needed to compile and ship transpiled JavaScript to users, sacrificing performance. Jarred Sumner's project, Bun, allows bundling on demand with caching, leading to faster websites. Benchmarks of the current JavaScript minifiers — SWC, esbuild, tools written in Go, and UglifyJS — show interesting capabilities. As codebases grow larger, existing tools become slow. Google Closure Compiler has great compression but is limited to Google infrastructure. OXC aims to create its own minifier for better compression and performance. OXC now supports the Rolldown bundler and seeks newer solutions to JavaScript infrastructure.

This brings us to our last topic: performance as a path to innovation. When I worked on JavaScript infrastructure, I always wondered why we needed to compile JavaScript and publish it to CDNs. We enjoy the latest JavaScript features, but they need to be transpiled to an older version and then shipped to the user, which results in code bloat, and then a slow website. We sacrifice performance for the tail end of users.

We have been doing this since the beginning because performance is actually the elephant in the room. Transpilation is slow, minification is slow, and pushing to a CDN and serving files are as well. Everything is kind of slow. When Jarred Sumner created Bun, his project goal was to create a bundler, not the runtime we are seeing right now. He independently realized that once peak performance is reached, we can just bundle on demand with a layer of caching. This way, users on newer browsers get smaller files, leading to faster websites.
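The bundle-on-demand idea can be sketched in a few lines (a toy illustration of the caching scheme, not Bun's implementation — `bundleFor` and the target names are invented): key a cache by entry point and browser target, so modern browsers get smaller, less-transpiled bundles while repeat requests hit the cache:

```javascript
// Cache keyed by (entry, target), so each browser tier is bundled once.
const cache = new Map();

// Stand-in for a real bundler call: modern targets skip transpilation
// and ship less code; legacy targets get the transpiled, larger output.
function bundleFor(entry, target) {
  return target === 'modern'
    ? `/* ${entry}: esnext, small */`
    : `/* ${entry}: es5, transpiled, larger */`;
}

// Serve from cache, bundling on demand only on the first request.
function serveBundle(entry, target) {
  const key = `${entry}::${target}`;
  if (!cache.has(key)) cache.set(key, bundleFor(entry, target));
  return cache.get(key);
}
```

The scheme only works if the bundler is fast enough that a cache miss is cheap — which is the sense in which peak performance enables the architecture at all.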

This is a screenshot of the minification benchmarks for the current JavaScript minifiers. We have SWC and esbuild, two minifiers written in Go (the third and the fifth), one written in Rust (the first, SWC), and ones written in JavaScript (the second, and UglifyJS at number six). What's interesting is that the current era of tools written in Rust and Go can minify a megabyte-sized file, but the tools from the past era cannot. And for a lot of larger codebases and larger web apps, the amount of code we ship to the user is even larger.

So as our codebases grow larger and larger, especially for large web apps, some of the existing tools become really slow or just crash out of memory. What's really interesting, though, is Google Closure Compiler on the list. It actually has an advanced mode, which can probably beat the other tools in compression size. But unfortunately, it only works on Google infrastructure, and nobody else can use it because it's really, really slow; it does a ton of AST passes in Java. We can't blame them, though: Google Closure Compiler was created so early that Java was the only supported in-house language. So, if OXC gets the chance, all of our current work will lead to creating our own minifier, which will aim for the best compression size, similar to Google Closure Compiler, with performance similar to SWC and esbuild. OXC was my independent project for the last two years, but it will be no longer. The project is now on a mission to fully support the Rolldown bundler, as well as to seek better and newer solutions to JavaScript infrastructure with VoidZero, the company.

And if you have any questions, feel free to visit our website or reach out to me on our Discord or my Twitter. Thank you for listening.
