React Compiler Internals


React Compiler was introduced back in 2024 as a way to automatically handle performance optimizations in your apps—but how does it work? 

In this talk, I’ll walk you through some of React Compiler’s internals. We’ll explore its full process, from parsing to Abstract Syntax Tree (AST) transformation, along with its key technologies, benefits, trade-offs, and a practical example of how it transforms your code to prevent unnecessary re-renders.

The good news: you (luckily) don’t need to master its internals to integrate it into your codebase. That said, a solid grasp of its internals will significantly enhance your ability to apply the optimization techniques it unlocks!

This talk was presented at React Summit 2025. Check out the latest edition of this React conference.

FAQ

How does React Compiler optimize React code?
React Compiler optimizes React code by automatically adding memoization, which prevents unnecessary re-renders of components when their inputs haven't changed, improving performance.

What problem does React Compiler solve?
It removes the need to manually add memoization (useMemo, useCallback, React.memo) by automatically identifying expensive computations and caching them, reducing the risk of errors and improving performance.

Does React Compiler require code to follow the Rules of React?
Yes. React Compiler will not run on code that breaks the Rules of React; it needs rule-following code to safely add memoization and optimize performance.

When will the stable version of React Compiler be released?
React Compiler is currently a release candidate, preparing for its stable release. No specific release date has been announced.

How can I check whether my codebase is ready for React Compiler?
You can run a command that analyzes the project and identifies components that break the Rules of React or show potential compatibility issues.

Can React Compiler be used with frameworks like Next.js and Expo?
Yes, by installing the Babel plugin and adding it to your app.

Can React Compiler optimize class components?
No. React Compiler cannot optimize legacy code such as class components, or code that breaks React's rules. It focuses on optimizing function components.

Can developers keep existing useMemo calls?
Yes. React Compiler is smart enough to determine whether manual useMemo calls should be left in place or replaced with its own optimizations.

What is React Compiler?
React Compiler is a build-time tool introduced by the Meta team that automatically adds memoization to React components and enforces the Rules of React, letting developers write clean and simple React code.

What phases does React Compiler's optimization process involve?
Several phases, including high-level intermediate representation (HIR), static single assignment (SSA), type inference, effect analysis, reactive analysis, scope discovery, and code generation.

Lydia Hallie
23 min
13 Jun, 2025

Video Summary and Transcription
Introduction to React Compiler, its benefits, and the problem of memoization in React. Performance issues due to stateful components in React, the solution with React.memo and useMemo, and the benefits of React Compiler in automating memoization.

The compiler's transformation into a high-level intermediate representation provides a clearer understanding of code operations and data flow, addressing the issues with unique identifiers for variables in complex scenarios. The compiler ensures each variable is assigned exactly once through static single assignment, resolving issues with variable values based on code paths and introducing unique names for assignments.

Effects describe how operations interact with data, ensuring safe caching with types like read, store, capture, mutate, and freeze effects. The compiler understands operations but needs to identify values that change between renders, leading to the reactive analysis phase to determine reactive values in the component. Variables marked as reactive for potential changes between renders are optimized for caching.

The compiler identifies dependencies to group operations for efficient caching. Scopes are established to cache related variables together and ensure efficient rendering and performance improvement in the final JavaScript code. The compiler sets up cache slots for dependencies and outputs to optimize performance. Scopes work independently and recalculate based on changes, ensuring efficient memoization.

React Compiler streamlines memoization, providing automatic, correct optimizations without the manual burden, paving the way for performance-focused code.

1. React Compiler and Memoization

Short description:

Introduction to React Compiler, its benefits, and the problem of memoization in React.

Oh, good morning, everyone. I hope you had some really strong coffee this morning because I'll be starting today with the internals of React Compiler. Last year at React Conf, the Meta team introduced React Compiler, which is a build-time tool that automatically adds memoization to your components and also enforces the Rules of React. But the auto-memoization is personally what excites me most, because this means that we finally don't have to manually add useMemo, useCallback, or React.memo. Instead we can write clean and simple React code like this one, and React Compiler automatically transforms it into highly optimized JavaScript code with caching logic that would be pretty much impossible to achieve manually.

And the good thing is that we never even have to see or maintain this code. All we have to do is install the compiler and configure it in our build pipeline. But what exactly happens behind the scenes to turn our simple React code into this more optimized code? Well, there are many, many phases that our code goes through in the pipeline. You can also see all these phases in the React Compiler playground if you've ever worked with that. Way too many to mention in 25 minutes, so in this talk I just want to go through some of them, the most important ones.

But before I get into the compiler, I first want to focus on the problem that it's solving, because why do we even need memoization in React? Well, let's take this code, for example. We have an array with some users, we have a user stats component that filters the users to find which ones are active and shows the count, and then we also have a main app component that shows the active user count and a way to sort the users in the list. Very clearly demo code, I know this is not very production-ready, but it gets the job done for today.
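The slide code itself isn't in the transcript, but the setup might look roughly like this (a reconstruction; the data and function names are assumptions, and the filtering logic is shown as a plain function rather than a React component so it can run standalone):

```javascript
// Hypothetical reconstruction of the demo data from the talk.
const users = [
  { id: 1, name: "Ada", isActive: true },
  { id: 2, name: "Grace", isActive: false },
  { id: 3, name: "Alan", isActive: true },
];

// Inside a <UserStats users={users} /> component, something like this
// would re-run on every render of the component:
function activeUserCount(users) {
  return users.filter((u) => u.isActive).length;
}

console.log(activeUserCount(users)); // 2
```

With only three users this is cheap, but the same filter over 10,000 users would run on every single re-render of the parent.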

2. React Performance Optimization

Short description:

Performance issues due to stateful components in React, the solution with React.memo and useMemo, and the benefits of React Compiler in automating memoization.

Now, there is a pretty big performance issue in this code because our app component is stateful. We have a sort state that we can change from ascending to descending. But whenever we update the state, for example when the user changes the sort order, the app component re-renders. And by default, as we know, in React, when a component re-renders, so do its child components. So this means that whenever we change sort, user stats also gets re-rendered, and this function component gets re-executed. But within user stats, we're filtering the users array. Now, this is not a big deal if we just have these three users, but if we had 10,000 users, this would have major performance implications.

So before anything can render, it first has to filter through all these users. But the issue is that the users array hasn't even changed. It's just re-rendering because sort changed, so this re-render is completely unnecessary. To avoid this re-rendering, we could wrap this component in React.memo. And this just tells React that we only want to re-render this component if the values of the props have changed, so in this case, the users array. Now, we could also be a bit more specific and say that we only want to avoid the filter function re-running on every render, and then just wrap it in a useMemo instead.

And in that case, we only re-run this filter function whenever users changes. But this approach does have some downsides. Because now let's say that we also want to filter the users based on their department. Well, now we cannot forget to also add the department value to this dependency array. If we forget to add this, the value of active users will not update whenever department changes. And this is a mistake that I still make pretty frequently, and it is the biggest downside to useMemo. So this is exactly what React Compiler solves.
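The stale-dependency pitfall can be simulated without React at all, using a minimal useMemo-style cache that only recomputes when its listed dependencies change (a toy model, not React's implementation; all names are illustrative):

```javascript
// Minimal useMemo-style cache: recompute only when the deps change.
function makeMemo() {
  let lastDeps = null;
  let lastValue;
  return (compute, deps) => {
    const changed =
      lastDeps === null || deps.some((d, i) => !Object.is(d, lastDeps[i]));
    if (changed) {
      lastValue = compute();
      lastDeps = deps;
    }
    return lastValue;
  };
}

const users = [
  { name: "Ada", isActive: true, department: "eng" },
  { name: "Grace", isActive: true, department: "sales" },
];

const memo = makeMemo();
let department = "eng";
const filterActive = () =>
  users.filter((u) => u.isActive && u.department === department);

// The deps array forgets department: the classic mistake.
let active = memo(filterActive, [users]); // computes the "eng" result
department = "sales";
active = memo(filterActive, [users]);     // stale: still the "eng" result
console.log(active.map((u) => u.name));   // [ 'Ada' ]
```

Because department was left out of the dependency array, the second call returns the stale result instead of Grace.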

3. Compiler Optimization Process

Short description:

Compiler's transformation into high-level intermediate representation provides a clearer understanding of code operations and data flow, addressing the issues with unique identifiers for variables in complex scenarios.

Instead of having to manually identify these expensive computations, wrap them in a useMemo, and remember this dependency array, the compiler automatically takes care of memoizing this for you. But how is it able to do this? Well, you already saw all these phases, but I just want to talk about some of the most important ones, starting with the high-level intermediate representation. The very first thing that the compiler does is transform the code into what's called a high-level intermediate representation, or HIR for short. And whereas we write our function component as just a single block of code, the compiler needs to see each and every operation separately to really understand what's going on in your code and also what parts can be optimized. So you can see each of these steps represented as individual instructions with a unique ID, a value where they get stored, and also the actual operation.

So when we look at our function component, the compiler sees that we first destructure the users parameter, we then load this users variable from memory, we access the filter method, create the filter callback function, and then within the callback function we load the parameter. You get the point. It sees it on a very granular, atomic level. So now the compiler has a much clearer picture of every single operation, but especially how the data flows within the code. For example, we can now see that instruction 13 returns variable 19, which contains the JSX created in instruction 12, and this JSX element, in turn, combines the text from variable 17 and the dynamic count value. So these are all connected.

And in the same way, instruction 9 loads the value from variable 14, which in turn uses the value stored at 13 to access the length. So by transforming our JavaScript into this HIR, this high-level intermediate representation, the compiler now has a much clearer picture of every single operation that happens and how the data flows. But there is still one issue with the way it's currently written. You might notice that each variable has a unique identifier: we have users 1, active users 11, count 15, and so on. And this works well in the simple example that I just used, but now let's take a look at a more complex example. Let's say that we're now also passing a filters prop that has a limit results property on it, and we then conditionally reassign active users based on whether the user wants to limit the results.
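As a very rough sketch, the HIR for the filtering component might look something like this (the instruction numbers, names, and opcodes here are invented for illustration and don't match the compiler's actual printout):

```
bb0:
  [1]  t11 = Destructure <users> from props
  [2]  t12 = LoadLocal users
  [3]  t13 = PropertyLoad t12.filter
  [4]  t14 = FunctionExpression (user) => user.isActive
  [5]  t15 = MethodCall t13(t14)        ; users.filter(cb)
  [6]  t16 = PropertyLoad t15.length    ; the count
  [12] t17 = JSX <p>{"Active users: "}{t16}</p>
  [13] Return t17
```

The point is not the exact opcodes but that every operation is one numbered instruction with explicit inputs and one output, so the data flow between them is visible.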

4. Static Single Assignment in the Compiler

Short description:

The compiler ensures each variable is assigned exactly once through static single assignment, resolving issues where a variable's value depends on the code path taken by introducing unique names for assignments.

Now the HIR looks like this, which is a little bit more intimidating, but all that matters here is that we're storing the active users variable here, we load the filters prop, we access the limitResults property on filters, and now finally we get to the if statement, which is a conditional. On this line, it essentially says that if filters.limitResults is true, then execute block 4, which is the block with the slice method; otherwise just go straight to block 3. Now the issue here is the active users variable. You can see that we're using the same active users variable, active users 12, multiple times in this code. But the problem is that the same variable now refers to different values depending on which code path we took. If we came directly from the first block, so when limitResults is false, then active users just contains the filtered array. But if we came from the block with the slice, when limitResults is true, it now contains the filtered and sliced array.

So the compiler cannot know at build time what the user might select. It could be true, it could be false. So with the current HIR, the compiler cannot safely optimize the value of active users. This is where the next phase, static single assignment, or SSA, comes in. Static single assignment is a form of the HIR that ensures that each variable is assigned exactly once. So now you can see that each assignment gets a unique variable name: we have active users 42 for the filter method and active users 52 for the slice operation. But notice that both of them lead to the third block, and the third block is where these two code paths come together. There, a phi function picks the right value: if we came from the block with the slice, when limitResults is true, use active users 52, the filtered and sliced one; otherwise use active users 42, the one that was only filtered.

Again, in our original code snippet, this was not an issue. We didn't have any variable reassignments that could create this kind of ambiguous variable assignment. So in that case, the SSA transformation still happens, but it's more straightforward: each variable still gets a unique name, just much simpler in this case. I only wanted to show it because throughout the rest of the demo you'll see the new variable names, so I wanted to let you know that this happened. We now have our data flow pretty clearly mapped out, but the compiler still needs to understand two critical things: the types of data that we're working with, and also what each operation actually does to that data. So first we have type inference, and this is pretty easy. Here the compiler figures out whether the values are primitives, objects, functions, React elements, and so on. But it also needs to determine how these operations interact with the data. So now it runs what's called effect analysis, and this examines how each operation interacts with the data.
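In the spirit of the talk's active users 42 / 52 naming, the SSA form with its phi function can be sketched like this (block numbers and syntax are illustrative, not the compiler's real printout):

```
bb1:
  activeUsers$42 = users.filter(...)
  If filters.limitResults then goto bb2 else goto bb3
bb2:
  activeUsers$52 = activeUsers$42.slice(0, limit)
  Goto bb3
bb3:
  activeUsers$60 = phi(bb1: activeUsers$42, bb2: activeUsers$52)
```

Each assignment now has exactly one name, and the phi function at the join block records which value to use depending on the predecessor block, so the ambiguity is gone.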

5. Compiler Effect Analysis and Reactive Detection

Short description:

Effects describe how operations interact with data, ensuring safe caching with types like read, store, capture, mutate, and freeze effects. The compiler understands operations but needs to identify values that change between renders for React, leading to the reactive analysis phase to determine reactive values in the component.

And an effect just describes how an operation interacts with the data, whether it reads it, changes it, modifies it, and so on. And this is really important later on for memoization, because the compiler needs to know which operations are safe to cache, and which could break if they were cached incorrectly. There are several types of effects. In our case, we first have some read effects, like accessing a method like filter or length. A read effect doesn't change anything about the code, so any read effect is perfectly safe for caching. We also have some store effects, which are effects where we assign new values. These are safe to cache, as long as the value that's being stored does not depend on any mutable data. We also have capture effects, and these are effects for loading values from an outer scope, so function parameters or previously stored variables. These create dependencies that the compiler needs to track to get the memoization correct. We also have mutate effects. These are operations that might modify data depending on what happens at run time, and that's the majority of them. Lastly, we also have freeze effects, and these make values immutable, so they guarantee that the values cannot be modified later. The compiler adds these to make sure that the data won't change, making it safe to cache.

Okay. So finally at this point, the compiler understands what each operation does and how it affects the data. But it still doesn't know which values change between renders. For example, we know that instruction three has a read effect when accessing filter, but that doesn't tell us if the users array can change between renders. Maybe it's always the same static array, maybe it comes from props or state, and so on. So the effect analysis tells us the potential for change, but it doesn't tell us whether a value will actually change in the way that matters in React: between re-renders. So now it's time for the next phase, reactive analysis. During this phase, the compiler has to figure out which specific values in this component are reactive, meaning that they can change between renders. Just to make the code a bit more readable, I'll remove the types, because otherwise I'd have to make the font really small, but they're still there. The compiler starts by looking at the source of each value to check whether it's reactive. It begins with the values that are definitely reactive: function parameters are always reactive, because they come from props and can change between renders, and hook calls like useState and useContext also produce reactive values. In this case we only have function parameters. So the users parameter is marked as reactive, and the compiler then traces forward through the data flow with a simple rule: if a reactive value flows into an operation, its output becomes reactive too. Looking at our example, the users parameter starts as reactive because it's a function parameter, so variable 23 becomes reactive as well; then we load users on line two, so 24 becomes reactive as well.
Then we access the filter property on 24, and because we loaded the reactive users prop, 25 also becomes reactive, and so on. Any time there's a reactive input, the output becomes reactive as well.
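The forward-propagation rule, "reactive in, reactive out", can be sketched as a tiny fixed-point pass over the instruction list (a toy model, not the compiler's real data structures; the instruction names are made up):

```javascript
// Toy reactive analysis: a value is reactive if any of its inputs is.
function markReactive(instructions, seeds) {
  const reactive = new Set(seeds); // seeds: props, hook results
  let changed = true;
  while (changed) {
    changed = false;
    for (const { output, inputs } of instructions) {
      if (!reactive.has(output) && inputs.some((i) => reactive.has(i))) {
        reactive.add(output);
        changed = true;
      }
    }
  }
  return reactive;
}

// users (a prop) seeds the analysis; everything derived from it
// becomes reactive, while static values stay non-reactive.
const instrs = [
  { output: "t24", inputs: ["users"] }, // LoadLocal users
  { output: "t25", inputs: ["t24"] },   // users.filter
  { output: "count", inputs: ["t25"] }, // activeUsers.length
  { output: "label", inputs: [] },      // static string: not reactive
];
const result = markReactive(instrs, ["users"]);
console.log([...result]); // [ 'users', 't24', 't25', 'count' ]
```

The worklist loop repeats until nothing new is marked, which handles chains of any length; note that the static label never becomes reactive.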

6. Compiler Scope Discovery and Code Generation

Short description:

Variables marked as reactive for potential changes between renders are optimized for caching. Compiler identifies dependencies to group operations for efficient caching. Scopes are established to cache related variables together and ensure efficient rendering and performance improvement in the final JavaScript code.

So just to zoom out a little bit, what are we really looking at here? Well, really what we're saying here is that for the initial prop destructuring, we create a new binding, but this is of course reactive, this prop can change between renders, and that's what this current HIR is saying. Then we load users 22 into this new temporary variable 24, and this is also reactive, because we're reading the reactive users 22 variable, so the value of this specific atomic thing in our function can change between renders. Now, accessing the filter method here can also change between renders, because if we're working with a brand-new users array, and arrays are objects, meaning an entirely new object, this filter method would be on a brand-new object, so this can also change between renders. So again, this phase ensures that every value that could change between renders gets marked as reactive. And these reactive values are exactly what the compiler wants to optimise. There's no point in memoizing something that never changes, but values that can change between renders are perfect for caching. So now the compiler knows both the behaviour of the operations, from the effects, and which values actually matter for re-rendering, from the reactive analysis.

At this point, we've got our code broken down into an HIR, with types, effects, and so on, and we also know which values can change between renders. Now the compiler has to decide which operations should be cached together, and which ones should just run every time. So now it's time for the next phase, which is scope discovery. The idea is that variables that change together must be cached together, and variables that change independently should be cached independently. During this phase, the compiler groups the operations based on their dependencies, so operations that depend on the same inputs get grouped into the same scope. After scope discovery, our HIR looks something like this, where we have three scopes. First, we have scope zero, which is just the callback function: functions get their own scope because they can be memoized independently, and since this one doesn't depend on any reactive values, it can be cached once and then reused across all renders. Scope one covers instructions three, five, and six, and these all depend on the users input for the filtering operation. They all need to recalculate whenever the users array changes, so they all get grouped.

Zooming out, this entire filtering operation now becomes its own memoizable scope. This is almost like what we just had with useMemo, but now it's done automatically by the compiler. And the filter callback also gets extracted entirely, because it doesn't depend on anything and can just be reused across all renders. We even get a separate scope for JSX creation, so this only recalculates whenever count changes. So if the users array changes but the total amount of users is the same, then we can just reuse this cached JSX, and this is something that is pretty much impossible to do with useMemo but is done automatically by the compiler. After getting its own scope, the callback function now becomes a separate function. The logic is still exactly the same, but it's its own block, and this is great because we don't have to create a new function object on every render. The compiler automatically identifies these pure functions and extracts them, which is much better for performance, as we'll see in the end result as well. At this point the scopes just exist as annotations in our HIR, but they need to become actual control-flow structures for code generation. So the compiler now restructures the scopes into a different representation that clearly shows which instructions belong to each scope, and also what their dependencies and outputs are. Now you can clearly see that scope one depends on the users variable and produces the filter result, while scope two depends on count and produces the JSX. This explicit control flow is very important for the next phase: finally, we get to code generation. This converts our HIR into the optimized JavaScript that we saw at the beginning.
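Condensed, the three scopes described above can be pictured like this (a schematic, not the compiler's real output; the value names are invented):

```
scope0  deps: (none)  out: filterCb  ; pure callback, created once, reused
scope1  deps: users   out: t15       ; users.filter(filterCb)
(unscoped)            count = t15.length, runs every render
scope2  deps: count   out: t17       ; the JSX element
```

Each scope lists the reactive inputs it must watch and the outputs it caches; anything left unscoped is cheap enough to just recompute.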

7. Compiler Memoization Optimization

Short description:

Compiler sets up cache slots for dependencies and outputs to optimize performance. Scopes work independently to recalculate based on changes, ensuring efficient memoization. React Compiler streamlines memoization, providing automatic correct optimizations without manual burden, paving the way for performance-focused code.

To do this, the compiler first walks through each scope and sets up cache slots. We need two types of slots here: one slot for each dependency, so we can detect changes, and one slot for each output value, so we can actually cache the result. In total we get four slots, because we have the dependency users with its output, and the dependency count with its output JSX. This generates a useMemoCache call, but in your compiled React code you'll usually just see it as _c, which is shorter for bundling purposes. Afterwards it generates the rest of the code, which now includes our destructured users parameter. Then we have our first scope, and we can see that it first checks if the users have changed since last time by comparing them to what's stored in cache slot zero. If yes, it runs the expensive filter operation and stores both the new users reference and the result T1 in the cache; if not, it just uses the previous result from cache slot one. Then we have the unscoped values that just run every time, like assigning the filter result to active users and calculating the count. Finally we have the second cache block for the JSX creation: if the value of count is different from last time, it creates new JSX and updates the cache, and otherwise it just reuses the cached JSX. Eventually we return T2, which is the JSX with the count variable. The nice part here is that all these scopes work independently, so if the users array has changed, the first scope recalculates, but if the total count of users hasn't changed, then there's no need to generate new JSX; we can just use the stored one.
Getting this specific with useMemo would mean creating tiny wrapper pieces everywhere in our code. You could technically do it, but it would be a maintenance nightmare; the compiler does it all automatically.
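The shape of the generated code can be imitated in plain JavaScript with a persistent cache array holding one slot per dependency and one per output (a sketch of the pattern only, not React's actual useMemoCache; the sentinel and names are made up, and a string stands in for the JSX):

```javascript
const UNSET = Symbol("unset");
const _c = (n) => new Array(n).fill(UNSET); // stand-in for useMemoCache

let filterRuns = 0;
function makeComponent() {
  const $ = _c(4); // slots: [users dep, filter result, count dep, jsx]
  return function Component(users) {
    let t1;
    if ($[0] !== users) {            // scope 1: depends on users
      filterRuns++;
      t1 = users.filter((u) => u.isActive);
      $[0] = users;
      $[1] = t1;
    } else {
      t1 = $[1];                     // cache hit: skip the filter
    }
    const count = t1.length;         // unscoped: runs every render
    let t2;
    if ($[2] !== count) {            // scope 2: depends on count
      t2 = `Active users: ${count}`; // stands in for the JSX creation
      $[2] = count;
      $[3] = t2;
    } else {
      t2 = $[3];                     // cache hit: reuse the "JSX"
    }
    return t2;
  };
}

const users = [{ isActive: true }, { isActive: false }];
const Component = makeComponent();
Component(users);
Component(users);        // same reference: the filter is skipped
console.log(filterRuns); // 1
```

Each scope checks only its own dependency slot, which is why a change to users recomputes the filter without necessarily recreating the JSX when the count is unchanged.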

Finally, we've completed all the phases, from HIR, SSA, type inference, effect analysis, and scope discovery to code generation. It's important to understand that React Compiler is not a silver bullet. The compiler cannot optimize code that breaks the Rules of React, like conditionally calling hooks, components with side effects that cannot be safely cached, legacy code like class components, and some more complex JavaScript patterns that the compiler just cannot figure out safely. When it's not sure, the compiler simply won't memoize. Better safe than sorry. If your app is already well optimized with a lot of manual memoization, you shouldn't expect a big performance improvement, because the real win here is automatic, correct optimization without the maintenance burden. It doesn't necessarily mean better performance than manually using useMemo. If you're interested in checking how ready your existing codebase is for React Compiler, you can run this command, which analyzes your project and identifies components that break the Rules of React or shows you potential compatibility issues. As of today, the compiler has moved to release candidate and is preparing for its stable release, but you can already try it today in frameworks like Expo or Next.js by installing the Babel plugin and adding it to your app. If you've enabled it in your app, you can see it working in React DevTools with all these sparkly memos; these components are automatically memoized by React Compiler. So with React Compiler, we no longer have to choose between writing clean code and writing performant code. The compiler automatically handles all of this memoization for us. This is pretty much the future of React: performance by default, without the complexity, and that's exactly what the compiler enables.

Check out more articles and videos


React Compiler - Understanding Idiomatic React (React Forget)
React Advanced 2023
33 min
Joe Savona
Mofei Zhang
2 authors
The Talk discusses React Forget, a compiler built at Meta that aims to optimize client-side React development. It explores the use of memoization to improve performance and the vision of Forget to automatically determine dependencies at build time. Forget is named with an F-word pun and has the potential to optimize server builds and enable dead code elimination. The team plans to make Forget open-source and is focused on ensuring its quality before release.
If You Were a React Compiler
React Summit US 2024
26 min
In this talk, the speaker aims to build an accurate understanding of how the new React compiler works, focusing on minimizing re-renders and improving performance. They discuss the concept of memoization and how it can be used to optimize React applications by storing the results of function calls. The React compiler automates this process by analyzing code, checking dependencies, and transpiling JSX. The speaker emphasizes the importance of being aware of memory concerns when using memoization and explains how the React compiler detects changes in function closure values. They also mention the Fibre Tree, which drives the reconciliation process and helps optimize performance in React. Additionally, the speaker touches on JSX transpilation, compiler caching, and the generation of code. They encourage developers to understand the code generated by the compiler to optimize specific sections as needed.
Everything You Need to Know About React 19
React Summit US 2024
29 min
React 19 introduces new features such as React Compiler and React Actions, which optimize code and provide better performance. The useOptimistic hook allows for optimistically updating UI, while the UseFormStatus hook tracks loading states and enables button disabling. The introduction of the 'action' attribute simplifies form handling and data retrieval. React 19 eliminates the need for useMemo and useCallback thanks to the React Compiler. The stability of React 19 has been observed in side projects without major issues.
What Refs Can Do for You
React Summit US 2024
27 min
Today's Talk focused on using refs and profiling AG Grid in React. The speaker shared their experience with optimizing custom cell components and performance, including using memo and leveraging the React compiler. They also discussed improving performance with manual style updates and refactoring the use of useEffect. The speaker highlighted the use of ref callbacks, which can be implemented with useLayoutEffect. React 19 introduces changes to the ref callback approach. The Talk touched on using React DevTools and CSS variables for monitoring renders. It also discussed the compatibility of AG Grid with React and the trade-offs between using React components and vanilla JavaScript. The speaker emphasized the importance of considering the DX improvements and the complexity of not seeing a React component tree in the dev tools. The Talk concluded with a mention of AG Grid features, handling refs at various levels, and the recommendation to consult with Stephen for technical questions and application architecture.
React 19 and the Compiler for the Rest of Us
React Day Berlin 2024
30 min
Hi everyone. I'm Johnny, an application engineer who builds user-centric React 19 applications. Today, our goal is to help you reach production mountain with React 19. Let's start by gathering a crew and discussing the motives for installing the compiler. We'll revisit React rules and explore the compiler's impact on code. The React compiler translates from JavaScript to JavaScript and provides error reporting. It enables deeper levels of optimization and focuses on user experience. To upgrade to React 19, install the latest version and be aware of any compatibility issues. Check if any custom runtime code needs to be disabled. The React compiler can be used with versions 17 or 18 if you have runtime support. The compiler removes use memos and optimizes the initialization process based on static components. It provides granular reactivity and reduces rendering, making the application feel quicker. Follow React rules and conventions to ensure compatibility. Test custom hooks, be aware of the impact on build time, and address any unexpected issues like the removal of the global JSX namespace. Debugging tools and source mapping in Chrome are useful for understanding compiler output. Enjoy translating chants and exploring the possibilities of React 19!
An App Developer's Guide to React 19: What You Need to Know and Everything You Can Safely Ignore
React Summit US 2024
33 min
An App Developer's Guide to React 19: What You Need to Know and Everything You Can Safely Ignore
Watch video: An App Developer's Guide to React 19: What You Need to Know and Everything You Can Safely Ignore
Today's talk focused on React 19 and its features, APIs, changes, and optimizations. The speaker emphasized the importance of migrating apps and building with React 19. They discussed the high-level features of React 19, including the TypeScript emphasis and the testing library philosophy. The talk also covered the APIs and integration of React 19, as well as the changes and type safety it brings. The speaker highlighted the improvements in useReducer types and the use of TypeScript. They introduced useActionState for migrating code and the useOptimistic hook for maintaining state immediacy. Real-time updates, action functions outside components, and the benefits of using the 'use' prefix in React were also discussed. The talk touched on upgrade considerations, the role of RSCs and server actions in React, and the current state of RSC development. Overall, the talk provided valuable insights into the new features and enhancements in React 19 and their impact on the development process.

Workshops on related topic

Mastering React Server Components and Server Actions in React 19
React Advanced 2024
160 min
Mastering React Server Components and Server Actions in React 19
Workshop
Maurice de Beijer
Calling all React developers! Join us for an immersive 4-hour workshop diving deep into React Server Components and Server Actions. Discover how these game-changing technologies are revolutionizing web development and learn how to harness their full potential to build lightning-fast, efficient applications.
Explore the world of React Server Components, seamlessly blending server-side rendering with client-side interactivity for unmatched performance and user experience. Dive into React Server Actions to see how they combine client-side interactivity with server-side logic, making it easier to develop interactive applications without traditional API constraints.
Get hands-on experience with practical exercises, real-world examples, and expert guidance on implementing these technologies into your projects. Learn essential topics such as the differences between Server and Client Components, optimizing data fetching, passing data effectively, and maximizing performance with new React hooks like useActionState, useFormStatus and useOptimistic.
Whether you're new to React or a seasoned pro, this workshop will equip you with the knowledge and tools to elevate your web development skills. Stay ahead of the curve and master the cutting-edge technology of React 19. Don't miss out - sign up now and unleash the full power of React!
Evolution of Form Management in React
React Summit US 2024
72 min
Evolution of Form Management in React
Workshop
Adrian Hajdin
Learn how to handle forms in React using the latest features, such as startTransition, useTransition, useOptimistic, and useActionState, with and without React 19 server actions, alongside proper validation, error handling, and best practices.

The workshop will begin by demonstrating traditional form handling using useState and useEffect for client-side rendering. Gradually, we'll transition to using the latest React 19 features, including server-side forms and the newest hooks for managing form states and errors. By the end of the workshop, participants will understand how to create robust forms with proper validation and error handling.

Learning goals:
- Latest React 19 hooks: useTransition, useFormStatus, useOptimistic, useActionState, useDeferredValue
- Server Actions
- Revalidation
- Server-side validation
- Error handling
- Security practices
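The useActionState pattern the workshop teaches can be sketched without React: the action function receives the previous state and the submitted form data and returns the next state. A minimal plain-JavaScript sketch of that contract (all names here are hypothetical; the real hook also tracks pending status and wires into a `<form>` element):

```javascript
// Plain-JS sketch of the useActionState contract: the action receives
// (previousState, formData) and returns the next state. Hypothetical
// names; the real hook also tracks pending status and integrates with <form>.
async function submitAction(prevState, formData) {
  const email = formData.get("email");
  if (!email || !email.includes("@")) {
    // Validation failure: return an error state, keep prior submissions.
    return { ...prevState, error: "Invalid email" };
  }
  return { error: null, submitted: [...prevState.submitted, email] };
}

// Minimal driver standing in for React's hook machinery.
async function run() {
  let state = { error: null, submitted: [] };
  const form = new Map([["email", "not-an-email"]]); // Map mimics FormData.get
  state = await submitAction(state, form);
  console.log(state.error);     // "Invalid email"
  form.set("email", "dev@example.com");
  state = await submitAction(state, form);
  console.log(state.submitted); // ["dev@example.com"]
  return state;
}

run();
```

Because each call derives the next state purely from the previous state and the payload, validation errors and optimistic updates compose naturally, which is the core idea behind the React 19 form hooks covered in this workshop.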