Scaling React Apps with Parallelism: Patterns for Multi-Threaded UIs


Unlock the full potential of modern web applications with multi-threaded architectures in React. This session explores strategies for leveraging Web Workers and OffscreenCanvas to parallelise data processing, heavy computations, and UI rendering. We'll discover how to create interfaces that are scalable and unrestricted by conventional JavaScript bottlenecks.

This talk was presented at React Summit 2025. Check out the latest edition of this React conference.

FAQ

How do web workers improve React application performance?
Web workers allow React applications to run heavy computations on background threads, freeing up the main thread for user interactions. This prevents UI blocking and improves the frame rate, making the application more responsive and smooth.

What are the limitations of web workers?
Web workers run in a separate context and do not have access to the DOM or React components. They also cannot access all browser APIs and run in separate memory spaces, which requires careful planning for communication and data transfer.

How does offscreen canvas help performance?
Offscreen canvas allows rendering tasks like animations and charts to be moved off the main thread, which helps maintain smooth performance even when the main thread is busy. This is particularly useful for real-time visualizations and complex animations.

What are common pitfalls when adding multithreading to a React app?
Common pitfalls include over-threading, which means unnecessarily moving operations to workers, and managing communication overhead. Developers should also remember to terminate unused workers to avoid memory leaks and implement proper error handling.

What future web platform features will advance front-end parallelism?
Future advancements include SharedArrayBuffer with Atomics for better memory synchronization, worker modules for modularity, the Scheduler API for task prioritization, and WebGPU for direct GPU access in compute-intensive applications.

What is multithreading in React and why is it important?
Multithreading in React involves using browser features like web workers and offscreen canvas to handle heavy computations and rendering tasks on separate threads. This is important because it prevents the main thread from being overloaded, avoiding jams and freezes and providing a smoother user experience.

When should a task be moved to a web worker?
Web workers should be considered for tasks that take more than 100 milliseconds or block user input. Ideal candidates include data processing, complex calculations, and tasks that do not require direct DOM access.

What are transferable objects and shared array buffers?
Transferable objects enable zero-copy transfer of data like ArrayBuffers to web workers, while shared array buffers allow shared memory usage between threads. Both techniques optimize data transfer efficiency, reducing performance bottlenecks in data-intensive applications.

What are the best practices for implementing multithreading?
Best practices include profiling applications to identify main thread blockages, starting with small tasks for easier debugging, implementing feature detection for browser compatibility, and monitoring performance in production for real-world effectiveness.

Shubham Gautam
18 min
17 Jun, 2025

Video Summary and Transcription
Introduction to multithreading in React for smoother user experiences by utilizing web workers and offscreen canvas. Tools for breaking free from JavaScript single-threading with web workers and offscreen canvas. Smooth UI performance with web workers, limitations, and offscreen canvas for offloading graphics rendering. Design separation for preventing threading issues; offload CPU-intensive tasks to workers without DOM access limitations. Manage web worker communication effectively with a promise-based approach for tasks. Enhance data transfer efficiency with transferable objects and SharedArrayBuffer. Decide when to use multi-threading techniques wisely; reserve workers for data processing and complex operations. Best practices include terminating workers when no longer needed, profiling applications, and exploring future front-end parallelism for optimized user experiences.

1. Introduction to Multithreading in React

Short description:

Introduction to multithreading in React for smoother user experiences by utilizing web workers and off-screen canvas. Overview of challenges faced with single-threaded JavaScript in React applications and the need for a better approach for improved user experiences.

Hey, everyone. I'm really excited to dive into a topic that's quickly becoming a must-have in modern web apps: multithreading in React. As our UIs get richer (think giant data tables, real-time dashboards, and canvas animations), sticking to a single thread can lead to jams, freezes, and frustrating user experiences. So today we are going to break free of those limitations using web workers and offscreen canvas.

These are two browser features that can help us keep our apps buttery smooth no matter the workload. Before we jump in, a quick introduction. I'm Shubham Gautam, a senior software engineer at Headout. For the past few years, I've worked on large-scale React apps, from real-time booking flows to immersive UIs with thousands of daily active users. The techniques I'll share come from real-world challenges I've faced when pushing the boundaries of what's possible in the browser.

So we all know this: JavaScript runs on a single thread. That means if something's blocking the thread, say a heavy calculation, the entire app freezes. Even simple operations like mapping or filtering large arrays can lock up the UI. In React, it's worse, because both your component logic and rendering happen on the same thread. So let's take a look at what that might mean for users. Have you ever typed in a search box and noticed your keystrokes lagging behind? Or tried scrolling through a massive table that just freezes for a moment?

Chances are the main thread is overloaded. In modern React apps, this can happen for all sorts of reasons: sorting massive tables, rendering complex visualizations, or validating deeply nested forms. But what's frustrating here is that many teams just accept that this kind of jank is inevitable. Honestly, it's not. There's a better way, one that gives you smooth and responsive user experiences without compromising on the complexity or the features of your application. So let's dive in.

2. Optimizing User Responsiveness with Web Workers

Short description:

Tools for breaking free from JavaScript single-threading with web workers and offscreen canvas. Utilize web workers for background threads and offscreen canvas for rendering off the main thread. Delegate heavy computations strategically to optimize user responsiveness.

So what can we do about the single-threaded nature of JavaScript? Luckily, modern browsers give us tools to break free from the main thread without actually breaking our applications. The two key features that we will be talking about today are web workers and offscreen canvas. Web workers let us spin up background threads. Think of them like little JavaScript engines running in parallel, which are absolutely perfect for heavy computations. Offscreen canvas goes a step further: it lets us move the rendering itself off the main thread. That means animations and charts, too. But here's the trick. It's not just about throwing everything into a worker. It's about strategic offloading: figuring out what's slowing things down, what doesn't need access to the DOM, and what can be safely done in the background.

Our goal here is to ensure that our main thread is laser-focused on user responsiveness, while background threads handle all the heavy lifting. Now, let's start with web workers. At its core, a web worker is JavaScript running in a separate thread. Here's how it might look in a React component. You create a worker when the component mounts. You communicate with it using postMessage. You then listen for results via onmessage, and always clean up when the component unmounts. The communication here happens through a messaging system: we send data to the worker with postMessage, and we receive results through the onmessage event. This keeps our UI thread free, even when we are crunching big data.
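To make that concrete, here is a minimal sketch of the main-thread side, not the speaker's actual demo code: the component name, the worker file primeWorker.ts, the bundler-style new URL(...) worker import, and the message shape are all illustrative assumptions.

```tsx
import { useEffect, useState } from 'react';

// Counts primes off the main thread via a dedicated worker.
export function PrimeCounter({ limit }: { limit: number }) {
  const [count, setCount] = useState<number | null>(null);

  useEffect(() => {
    // Create the worker when the component mounts.
    const worker = new Worker(new URL('./primeWorker.ts', import.meta.url), {
      type: 'module',
    });

    // Listen for results coming back from the background thread.
    worker.onmessage = (event: MessageEvent<{ count: number }>) => {
      setCount(event.data.count);
    };

    // Send the heavy work off the main thread.
    worker.postMessage({ limit });

    // Always clean up when the component unmounts.
    return () => worker.terminate();
  }, [limit]);

  return <p>{count === null ? 'Crunching...' : `${count} primes found`}</p>;
}
```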

So here's what the worker itself looks like. The first thing to note is that web workers run in a completely separate context. They don't have access to the DOM or to React components. The worker listens for messages, runs your CPU-heavy tasks, like transforming or filtering data, and then sends the results back with postMessage. Think of it like a mini server running inside your browser, purpose-built for offloading all the heavy stuff. With this setup, your app becomes smarter about delegation: heavy tasks go to the worker, while your UI stays focused on user interaction. Now, let's see how much of a difference web workers can make. In this demo, we are calculating prime numbers up to 100,000.
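A matching sketch of the worker file itself, again an assumption rather than the talk's exact code, using the prime-counting workload from the demo described next:

```ts
// primeWorker.ts: runs in a separate context with no DOM or React access.
self.onmessage = (event: MessageEvent<{ limit: number }>) => {
  const { limit } = event.data;

  // Deliberately CPU-heavy: naive prime counting up to `limit`.
  let count = 0;
  for (let n = 2; n <= limit; n++) {
    let isPrime = true;
    for (let d = 2; d * d <= n; d++) {
      if (n % d === 0) {
        isPrime = false;
        break;
      }
    }
    if (isPrime) count++;
  }

  // Send the result back to the main thread.
  self.postMessage({ count });
};
```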

3. Enhancing Performance with Offscreen Canvas

Short description:

Smooth UI performance with web workers, limitations, and offscreen canvas for offloading graphics rendering. Ensure stable frame rates, prevent crashes, and optimize heavy tasks. Transfer canvas control to workers for real-time visualizations and animations, enhancing performance for interactive applications.

When the computation runs on the main thread, you'll notice the frame rate drops dramatically, from 120 FPS down to just 54 FPS. This decrease results in janky scrolling, lagging clicks, and a sluggish UI experience. However, when the same task is handled by web workers, the frame rate remains stable at 120 FPS, ensuring a smooth UI performance without lags or delays. Furthermore, the computation time is slightly improved with web workers, and most importantly, it doesn't disrupt the user experience.

Even when challenged with one million data points, the version utilizing web workers successfully completes the task without browser crashes or freezes, in sharp contrast to the main thread version. Understanding the limitations of web workers is crucial; they lack direct access to the DOM, React components, and some browser APIs. Operating in a separate memory space from the main application, web workers are intentionally isolated to prevent threading issues like race conditions. This separation necessitates thoughtful consideration of task delegation and communication structuring to maximize their benefits.

For tasks requiring graphics rendering off the main thread, offscreen canvas comes into play. By transferring control of a canvas element to a worker using the transferControlToOffscreen method, offscreen canvas enables rendering without impacting the main thread's performance. This capability is ideal for real-time visualizations, simulations, heavy animations, and more. The worker implementation for offscreen canvas involves setting up an animation loop using requestAnimationFrame, ensuring smooth animations even when the main thread is busy. This advancement is particularly advantageous for dashboards, maps, games, and various interactive applications.

4. Efficient Task Handling with Offscreen Canvas

Short description:

Design separation for preventing threading issues, offload CPU-intensive tasks to workers without DOM access limitations. Offscreen canvas enables graphics rendering offloading to workers for real-time visualizations and animations. Implement animation loops with requestAnimationFrame for smooth rendering, and utilize promise-based communication for efficient and robust task handling.

This separation is actually by design. It prevents race conditions and other threading issues that come along with multi-threaded programming. It means that we need to be thoughtful about what we offload to workers and how we structure our communication. But the good news here is that most CPU-intensive operations that would benefit from being moved off to a worker don't need DOM access, so this limitation becomes less of a blocker.

What if we want to offload graphics rendering? That's where offscreen canvas comes in. The code example demonstrates how you can transfer control of a canvas element to a worker. Calling the transferControlToOffscreen method creates an OffscreenCanvas object that can be sent to a worker. This setup allows the worker to draw to the canvas without impacting the main thread, perfect for real-time visualizations, simulations, and heavy animations.
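A sketch of that handoff from a React component, with the worker file name canvasWorker.ts and the canvas dimensions as assumptions:

```tsx
import { useEffect, useRef } from 'react';

export function WorkerDrivenChart() {
  const canvasRef = useRef<HTMLCanvasElement | null>(null);

  useEffect(() => {
    const canvas = canvasRef.current;
    if (!canvas || !('transferControlToOffscreen' in canvas)) return;

    // Detach the drawing surface from the main thread...
    const offscreen = canvas.transferControlToOffscreen();
    const worker = new Worker(new URL('./canvasWorker.ts', import.meta.url), {
      type: 'module',
    });

    // ...and move it to the worker via the transfer list (ownership moves, nothing is copied).
    worker.postMessage({ canvas: offscreen }, [offscreen]);

    return () => worker.terminate();
  }, []);

  return <canvas ref={canvasRef} width={600} height={300} />;
}
```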

The worker implementation for offscreen canvas involves setting up an animation loop using requestAnimationFrame. This setup ensures that all drawing operations happen in the background, maintaining smooth animations even if the main thread is busy. This advancement is transformative for dashboards, maps, games, and various applications. Efficient communication with WebWorkers is crucial, especially for complex applications. Implementing a promise-based approach enables asynchronous task handling, improves code readability, includes error handling, and enhances overall application robustness.
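And a sketch of the worker side, assuming requestAnimationFrame is available inside the worker (it is in current Chromium and Firefox); the moving-bar animation is just a placeholder workload:

```ts
// canvasWorker.ts: owns the OffscreenCanvas and drives the animation loop.
self.onmessage = (event: MessageEvent<{ canvas: OffscreenCanvas }>) => {
  const canvas = event.data.canvas;
  const ctx = canvas.getContext('2d');
  if (!ctx) return;

  let x = 0;
  const draw = () => {
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.fillStyle = '#4f46e5';
    ctx.fillRect(x, canvas.height / 2 - 10, 40, 20);
    x = (x + 2) % canvas.width;
    requestAnimationFrame(draw); // schedule the next frame, off the main thread
  };
  requestAnimationFrame(draw);
};
```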

5. Efficient Data Transfer and Memory Management

Short description:

Drawing operations in the background ensure smooth animations. Manage WebWorker communication effectively with promise-based approach for tasks. Optimize data transfer between threads using transferable objects and shared array buffer for efficient memory access.

But the difference here is that all the drawing operations happen in the background. So, even if the main thread is busy, your animations continue to run smoothly. This is really game-changing for dashboards, maps, games, and much more.

There's one challenge when working with web workers, which is managing communication effectively. The simple postMessage/onmessage pattern works for basic scenarios, but in real applications, you'll often have multiple types of tasks and may need more sophisticated communication.

This example shows a promise-based approach to worker communication: creating a unique ID for each task, setting up a one-time listener for the response, and wrapping the process in a promise. This pattern lets you use async/await with worker tasks, improves code readability, and builds in error handling, so errors can be caught in the main application code for more robust applications.
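A minimal sketch of that pattern; the message fields (id, type, payload, result, error) are assumed for illustration rather than taken from the talk:

```ts
let nextId = 0;

// Wraps a single worker round trip in a promise keyed by a unique id.
export function runTask<TResult>(
  worker: Worker,
  type: string,
  payload: unknown,
): Promise<TResult> {
  return new Promise((resolve, reject) => {
    const id = ++nextId;

    const handleMessage = (event: MessageEvent) => {
      if (event.data.id !== id) return; // a response for some other task
      worker.removeEventListener('message', handleMessage); // one-time listener
      if (event.data.error) reject(new Error(event.data.error));
      else resolve(event.data.result as TResult);
    };

    worker.addEventListener('message', handleMessage);
    worker.postMessage({ id, type, payload });
  });
}

// Usage: const sorted = await runTask<number[]>(worker, 'sort', bigArray);
```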

Let's discuss making data transfer between threads more efficient. By default, when passing data to a web worker using postMessage, the data is copied. For small payloads this is okay, but with large datasets, copying becomes a performance bottleneck in terms of both time and memory usage. Two advanced techniques solve this: transferable objects for zero-copy transfer of data like ArrayBuffers, and SharedArrayBuffer for shared memory between threads, offering efficient memory access for large datasets.
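As a sketch of the zero-copy path, assuming a numeric Float64Array payload and a hypothetical dataWorker.ts file:

```ts
const worker = new Worker(new URL('./dataWorker.ts', import.meta.url), { type: 'module' });

// Roughly 8 MB of numeric data.
const samples = new Float64Array(1_000_000);

// The second argument is the transfer list: the underlying ArrayBuffer is
// moved to the worker instead of being copied.
worker.postMessage({ buffer: samples.buffer }, [samples.buffer]);

// The buffer is now detached on the main thread.
console.log(samples.buffer.byteLength); // 0
```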

6. Efficient Multithreading Techniques

Short description:

Enhance data transfer efficiency with transferable objects and shared array buffer. Utilize worker pools for efficient parallel task handling on multi-core processors. Apply multi-threading wisely based on communication overhead and task nature. Consider framework guidelines for task allocation and performance optimization.

Let's talk about how we can make our data transfer between threads more efficient. By default, when you pass data to a web worker using postMessage, that data is copied. For small payloads, that's still fine, but once you start dealing with large datasets, copying becomes a serious performance bottleneck, both in terms of time and memory usage. To solve this, we have two advanced techniques that drastically optimize this flow. First is transferable objects. Transferable objects allow for zero-copy transfer of objects like ArrayBuffers, useful for binary data, images, or large numeric arrays. Instead of copying the data, ownership of the memory is transferred to the worker. However, the main thread loses access to that memory once transferred, so careful planning is required. The second technique is SharedArrayBuffer, which enables shared memory between threads for efficient access to large datasets. Shared memory access requires synchronization to avoid race conditions and ensure data consistency, but it offers significant performance benefits in data-intensive applications like real-time processing and graphics.
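For the shared-memory half, a sketch under two assumptions: the page is cross-origin isolated (a hard requirement for SharedArrayBuffer in browsers) and a worker instance is already in scope.

```ts
// Main thread: allocate shared memory and hand it to the worker.
// A SharedArrayBuffer is not copied on postMessage; both threads see the same bytes.
const shared = new SharedArrayBuffer(4 * 1024);
const counters = new Int32Array(shared);
worker.postMessage({ shared });

// Writes and reads go through Atomics to stay consistent across threads.
Atomics.add(counters, 0, 1);
console.log(Atomics.load(counters, 0));

// The worker side would look something like:
// self.onmessage = (e) => {
//   const counters = new Int32Array(e.data.shared);
//   Atomics.add(counters, 0, 1);
// };
```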

Worker pools are essential for running parallel tasks and utilizing multi-core processors efficiently. Creating a pool of workers, rather than individual workers for each task, optimizes resource utilization. The pool generates workers based on CPU cores, maintains a task queue, and assigns tasks to idle workers. This approach effectively distributes work across threads without the overhead of creating and destroying worker threads constantly. Knowing when to apply multi-threading techniques is crucial. Not all operations benefit from moving to a worker due to communication overhead. A framework guides decision-making: UI updates remain on the main thread, while data processing, complex calculations, animations, and visualizations benefit from off-screen canvas or worker utilization. Consider specific situations to determine whether complex tasks should be offloaded.
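A stripped-down sketch of such a pool; the API shape is an assumption rather than a specific library, and error handling and typed results are omitted for brevity:

```ts
type PoolTask = { payload: unknown; resolve: (result: unknown) => void };

export class WorkerPool {
  private workers: Worker[] = [];
  private idle: Worker[] = [];
  private queue: PoolTask[] = [];

  constructor(workerUrl: URL, size = navigator.hardwareConcurrency || 4) {
    // Spawn roughly one worker per CPU core up front.
    for (let i = 0; i < size; i++) {
      const worker = new Worker(workerUrl, { type: 'module' });
      this.workers.push(worker);
      this.idle.push(worker);
    }
  }

  run(payload: unknown): Promise<unknown> {
    return new Promise((resolve) => {
      this.queue.push({ payload, resolve });
      this.drain();
    });
  }

  private drain() {
    // Hand queued tasks to idle workers until one of the two runs out.
    while (this.idle.length > 0 && this.queue.length > 0) {
      const worker = this.idle.pop()!;
      const task = this.queue.shift()!;
      worker.onmessage = (event) => {
        task.resolve(event.data); // hand the result back to the caller
        this.idle.push(worker);   // return the worker to the pool
        this.drain();             // pick up any queued work
      };
      worker.postMessage(task.payload);
    }
  }

  terminate() {
    this.workers.forEach((w) => w.terminate());
  }
}
```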

The use of advanced techniques like transferable objects and shared array buffer significantly enhances data transfer efficiency between threads. Worker pools optimize resource utilization and task distribution for multi-core processors. Effective application of multi-threading techniques involves understanding the communication overhead and suitable task allocation methods based on the nature of the operation. Consider the provided framework to determine the appropriate thread for specific tasks, balancing performance and resource utilization effectively.

7. Optimizing Multi-threading Implementation

Short description:

Decide when to use multi-threading techniques wisely; reserve workers for data processing and complex operations. Avoid over-threading and communication overhead by batching data transfer and terminating unnecessary workers. Profile your application, start with small tasks, and provide fallbacks for browser support to optimize multi-threaded patterns.

Now, let's step back and discuss when you should actually use these multi-threading techniques. Not every operation benefits from being moved to a worker, and there's always a communication overhead to consider. UI updates should always stay on the main thread, while data processing and complex calculations are ideal candidates for workers. Animations and visualizations can also benefit greatly from offscreen canvas. The warning symbol in this table indicates situations where the decision depends on the specific case. For example, simple animations might be fine on the main thread, but complex ones should be offloaded to offscreen canvas or to a worker.

My rule of thumb is: if an operation takes more than 100 milliseconds or blocks user input, it's a candidate for a worker. Common pitfalls to avoid when implementing multi-threading in React applications include over-threading: avoid moving operations to workers just because you can, since there's overhead in setting up workers and communicating between threads. Be mindful of communication overhead by batching messages and minimizing the size of data transferred between threads. Always remember to terminate workers when they are no longer needed to prevent resource consumption and memory leaks. And implement proper error handling in workers to catch errors and report them back to the main thread.
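A small sketch of those last two points: reporting errors explicitly from inside the worker, plus an onerror listener and explicit termination on the main thread. File names and the stand-in computation are assumptions.

```ts
// worker.ts: wrap the task so failures are reported, never swallowed.
self.onmessage = (event: MessageEvent<{ data: number[] }>) => {
  try {
    const result = event.data.data.map((n) => Math.sqrt(n)); // stand-in for real work
    self.postMessage({ result });
  } catch (err) {
    self.postMessage({ error: (err as Error).message });
  }
};

// main.ts: worker errors can otherwise be silent.
const worker = new Worker(new URL('./worker.ts', import.meta.url), { type: 'module' });
worker.onerror = (event) => {
  console.error('Worker error:', event.message);
  worker.terminate(); // a broken worker should not keep consuming resources
};
```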

Best practices for implementing multi-threaded patterns in web applications include profiling the application before optimizing. Use Chrome DevTools' performance tab to identify operations causing main thread blockage. Start with small, isolated tasks when implementing workers for easier debugging and impact measurement. Implement feature detection and fallbacks for browsers lacking support for certain features to ensure functional applications for all users.

8. Best Practices for Web Application Multi-threading

Short description:

Always terminate workers when not needed to avoid resource consumption and memory leaks. Profile, start with small tasks, implement feature detection, and monitor performance in production. Explore future front-end parallelism with shared array buffer, worker modules, scheduler API, and WebGPU for optimized user experiences.

Third, always remember to terminate your workers when they are no longer needed. Forgotten workers continue to consume resources and can lead to memory leaks. Finally, implement proper error handling in your workers. Worker errors can be silent if not properly caught and reported back to the main thread. So, that's one thing to consider as well.

Now, let's move on. Let me share some best practices for implementing multi-threaded patterns in web applications. First, always profile your application before optimizing. Use the performance tab in Chrome DevTools to identify which operations are actually causing main thread blockage. Once you have identified those, focus your efforts there. Second, start with small, isolated tasks when implementing workers. This makes it easier to debug issues and measure the impact of your changes. Third, always implement feature detection and fallbacks for browsers that don't support certain features, to ensure that your application remains functional for all kinds of users. Finally, monitor your performance in production to ensure that your optimizations are actually benefiting real users. Sometimes, optimizations that look good in development don't translate into real-world improvements, so monitoring performance in production is really important.
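As a sketch of the feature-detection point, with the fallback function and worker file name as hypothetical placeholders:

```ts
export function supportsWorkers(): boolean {
  return typeof Worker !== 'undefined';
}

export function supportsOffscreenCanvas(): boolean {
  return typeof HTMLCanvasElement !== 'undefined' &&
    'transferControlToOffscreen' in HTMLCanvasElement.prototype;
}

// Hypothetical synchronous fallback so the app still works without workers.
function processOnMainThread(data: number[]): number[] {
  return [...data].sort((a, b) => a - b);
}

export function processData(data: number[]): Promise<number[]> {
  if (!supportsWorkers()) {
    return Promise.resolve(processOnMainThread(data));
  }
  const worker = new Worker(new URL('./dataWorker.ts', import.meta.url), { type: 'module' });
  return new Promise((resolve) => {
    worker.onmessage = (event: MessageEvent<{ rows: number[] }>) => {
      resolve(event.data.rows);
      worker.terminate(); // done with this one-off worker
    };
    worker.postMessage({ data });
  });
}
```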

Now, let's talk about what's next in the future of front-end parallelism. First is SharedArrayBuffer plus Atomics, which unlocks true shared memory between the main thread and worker threads and gives us very fine-grained control over synchronization. Then we have worker modules, which allow us to use imports and exports inside workers themselves, making worker logic more modular, scalable, and stable. Then there's the Scheduler API, which is still experimental but offers fine-grained control over task priority queues and scheduling. Think of it like React's internal scheduler. Finally, we have WebGPU, which gives our JavaScript applications direct access to the GPU for compute-intensive workloads like machine learning, rendering, and simulations. I'd be happy to answer any questions you might have about implementing multi-threading in your React applications, whether you're dealing with performance issues in existing applications or designing new ones to handle complex workloads. These patterns can help you create smoother, more responsive user experiences. Feel free to reach out with follow-up questions or to share your own experiences with these techniques. And with this, let's wrap up. Thank you so much!
