Module Federation: Divide, Conquer, Share!

Reusing front-end features built by different teams can be challenging depending on the chosen approach. If we use a library, what about versioning and maintenance? If we use a monolith, what about build times? We can find a pragmatic solution in Module Federation, not only for sharing features but also for dividing and conquering. Join me in this talk to discuss the latest updates to Module Federation and its type safety, analyze its challenges, and look at a real-world case where we applied this solution to share providers, hooks, and features, along with the interesting issues we faced on the way.

This talk was presented at JSNation US 2024.

FAQ

Micro frontends are an architectural approach where a web application is broken down into smaller, independent, and self-contained frontend components that can be developed, deployed, and maintained by separate teams.

Micro frontends can be beneficial when your team or application has grown, requiring separate teams for maintenance. They allow for independently deployable components, improving maintainability and scalability.

Module federation is a technique introduced in Webpack 5, allowing parts of a web application, like components or functions, to be shared across applications at runtime without needing to redeploy the host application.

Build-time techniques require redeployment for changes, ensuring version consistency but adding complexity. Runtime techniques, like module federation, allow immediate updates without redeployment but can introduce runtime errors if backward compatibility isn't maintained.

Module Federation 2.0 includes a TypeScript plugin that automatically extracts and shares type definitions from remote modules, ensuring type safety and enabling features like auto-complete in the host application.

The strangler pattern involves incrementally updating or replacing parts of an application by integrating new components through Module Federation, allowing gradual migration without disrupting the entire system.

To ensure high availability, serve assets from reliable sources like CDNs or industry-standard storage solutions, and implement error boundaries to handle runtime failures gracefully.

Tools like Medusa help visualize micro frontend dependencies, and the Module Federation plugin for Chrome can assist in debugging by showing how modules are connected and consumed.

Use a shared UI kit to ensure consistency in styling across different micro frontend components, avoiding divergent styles like using different fonts on different pages.

State management should aim for decoupling. Use techniques like hooks, prop drilling, or local storage to manage state independently, or consider creating a federated module specifically for state management.

Nataly Rocha
20 min
21 Nov, 2024

Comments

  • Va Da: Thanks, federated modules is an interesting topic to explore.
Video Summary and Transcription
Reusing front-end features can be challenging. Module Federation is one approach to implementing micro frontends. Micro frontends can be integrated at build time or at runtime. With build-time integration, micro frontends that depend on each other must update together to stay consistent. Other techniques for micro frontends include web components, SystemJS, iframes, and monorepos. Module Federation in webpack 5 decouples the runtime from the build tools, allowing flexibility in module loading. High availability and error boundaries are important for Module Federation. Full-federated sites enable fast development cycles and easy testing. Importing components from a federated module requires dynamic imports, dependency sharing, and performance considerations. Debugging tools like Medusa and the Module Federation plugin for Chrome help identify and resolve issues. Consistent styling can be achieved by using the same UI kit. Routing can be handled by a shell application or frameworks like Single SPA. Decoupling components with dedicated federated modules and weighing trade-offs are key when designing application growth strategies. Determine whether micro frontends are necessary based on the need for independently deployable, modular components in a growing product.

1. Introduction to Micro Frontends

Short description:

Reusing front-end features can be challenging. Module Federation is one approach to implement micro frontends. Micro frontends can be integrated at build time or at runtime. Build-time integration requires redeployment, while runtime integration delivers changes immediately. Both options have trade-offs.

Reusing front-end features built by different teams can sometimes become challenging depending on the chosen approach. For example, if we use a library, what about versioning and maintenance? If we use a monolith, what about the pipelines and the build times? We can find a pragmatic solution with Module Federation, not only for sharing features but also for dividing and conquering.

So let's go. Hello there. My name is Nataly Rocha. I'm a tech lead at Stack Builders. I am really glad to be part of JSNation US and to have the chance to share some experiences and concepts with you. So, let's start with micro frontends. Module Federation is just one approach of many to implement micro frontends. But why would you consider implementing micro frontends in the first place? It could be for many reasons. For example, your team has grown, or your application has grown in a way that maintenance requires separate teams, or it just makes sense to have the project divided into different parts so that it is more maintainable. Each part is also self-contained and independently deployable, which is the key feature of a micro frontend. Right.

So we can divide micro frontend implementation techniques into two types: build time and runtime. Right. Build time means that micro frontends are kept in separate modules, but they are still resolved at compile time, so they must be redeployed for a change to take effect. One example of this can be npm packages or libraries, where you actually have to bump the version and redeploy your application for the change to take effect. Right. Runtime, on the other hand, means that these are consumed at runtime, as the name says. Module Federation is one way to achieve runtime dependencies; ECMAScript modules or SystemJS are others. The host or consumer applications don't need to bump a version or be redeployed to pick up a change. Once a producer application deploys a change, you immediately see it live in the host applications, because it is consumed at runtime in the browser. OK. So here we start to see that both options have their trade-offs. Right. Or rather, things that you might want to consider when choosing between them. So for example, on the build-time side, let's say that you have four micro frontends. Right.

2. Module Federation and Micro Frontends

Short description:

Two micro frontends need to update together to ensure consistency. Module Federation enables runtime updates for all teams. However, if breaking changes occur or applications are tightly coupled, issues may arise. Other techniques for micro frontends include web components, SystemJS, iframes, and monorepos. Module Federation in webpack 5 decouples the runtime from the build tools, allowing flexibility in module loading.

And one of them releases a new version. Depending on how they interact, let's say that two micro frontends need to update. So how would you make sure that both are using the same version? Let's say that one team actually got the Slack message saying, hey, we bumped the version, or an alert or anything. They remembered it, but the other team didn't. So you could end up with different versions for the user, which kind of diminishes the experience of it being one application. You can enforce consistency using different tools, but as you can see, we start adding complexity to enforce it. Runtime instead, for example with Module Federation, solves the issue, because everyone will have the same version as soon as the other team deploys. Right. But what happens if they introduce a breaking change? What happens if some applications were tightly coupled? Right. So the application could break. In this case, we might want to ensure that we have total decoupling or backward compatibility, or we might end up adding versioning again. Right. So as you can see, both have their trade-offs. Here are also some other techniques that you can use for micro frontends, like web components, SystemJS, even iframes, monorepos with Nx, and frameworks like Single SPA or Bit. But we are going to be focusing on Module Federation. So let's see how it works.

Module Federation was introduced in webpack 5, and this year the 2.0 version was released. The way it works is by enabling the plugin in the build tool; in this case, we are using webpack. But the 2.0 version decouples the runtime from the build tools, increasing the flexibility of module loading. So as you can see here, we define which are our remote dependencies and the elements that we want to expose. We can expose anything from a component to a function or a provider. For our use case, we used something that we can call the strangler pattern, so that we could use modules from an application developed by another team. This is because we already knew that the application the other team was developing was going to be deprecated, but we needed a way to start using it fast so that we could complete our host application. So what we did was use Module Federation to import hooks, import some providers that we needed to obtain information from the other app, and also the main component that we used to complete our application. In this case, I'm going to represent our use case with a movie theater application where the seat picker and the user profile were being developed by other teams. So as you can see, we are exposing the components from both, and on the host application, we consume them.
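For reference, a producer configuration along those lines might look roughly like the sketch below. The module names, file paths, and the choice of the @module-federation/enhanced webpack plugin are illustrative assumptions, not the configuration shown in the talk.

```ts
// webpack.config.ts of the seat-picker producer: a minimal sketch with
// illustrative names and paths, not the talk's actual code.
import { ModuleFederationPlugin } from '@module-federation/enhanced';

export default {
  plugins: [
    new ModuleFederationPlugin({
      // How consumers will refer to this build.
      name: 'seatPicker',
      filename: 'remoteEntry.js',
      // Anything can be exposed: components, hooks, providers, plain functions.
      exposes: {
        './SeatPicker': './src/components/SeatPicker',
        './useSeats': './src/hooks/useSeats',
        './SeatsProvider': './src/providers/SeatsProvider',
      },
      // Share the framework so host and remotes run a single copy at runtime.
      shared: {
        react: { singleton: true },
        'react-dom': { singleton: true },
      },
    }),
  ],
};
```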

3. Full Federated Sites and Type Safety

Short description:

Two producer applications and one consumer. The manifest contains all the information and assets for the host application. High availability and error boundaries are important for Module Federation. The host and producer applications have their own business logic and are independent. Types are extracted and saved as assets for type safety. Full-federated sites allow teams to see their application in the host application during development.

So then we have two producer applications and one consumer. And as you can see, the host application just imports the URL of the manifest, right? And once updates are made by the producers, let's say, for example, an update to the seat picker, we don't have to redeploy the host application. The manifest already has everything that you need to consume the latest version. Right? So this manifest is the one that contains all the information and the assets that are going to be used by our host application, and this is what we use to import and basically complete our host application.
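On the host side, that manifest-based wiring could be sketched as below. The remote names and CDN URLs are placeholders, and the mf-manifest.json remote syntax assumes the Module Federation 2.0 @module-federation/enhanced plugin rather than the talk's exact setup.

```ts
// webpack.config.ts of the host (movie-theater shell): illustrative names/URLs.
import { ModuleFederationPlugin } from '@module-federation/enhanced';

export default {
  plugins: [
    new ModuleFederationPlugin({
      name: 'theaterHost',
      // Each remote is resolved through its manifest at runtime, so a new
      // producer deployment is picked up without rebuilding the host.
      remotes: {
        seatPicker: 'seatPicker@https://cdn.example.com/seat-picker/mf-manifest.json',
        userProfile: 'userProfile@https://cdn.example.com/user-profile/mf-manifest.json',
      },
      shared: {
        react: { singleton: true },
        'react-dom': { singleton: true },
      },
    }),
  ],
};
```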

So what happens if this goes down or the URL dies for some reason? We have to make sure that these assets are served from something with high availability, like a CDN or an industry-standard storage bucket, so that we avoid downtime, or at least diminish it as much as possible. And this is one of the drawbacks of Module Federation: the app is not complete at build time, so we might get errors at runtime in the browser. So we need to make sure that we implement these high-availability options, that we serve the assets and keep them healthy, and, in case something does happen, that we fail gracefully with error boundaries in the worst case.
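As a minimal sketch of that fail-gracefully idea, the host can wrap each lazily loaded remote in an error boundary with a fallback. The component and remote module names here are assumptions.

```tsx
// host/src/SeatPickerSlot.tsx: illustrative; the remote module name is assumed.
import { Component, lazy, Suspense, type ReactNode } from 'react';

// Loaded at runtime from the federated remote declared in the webpack config.
const SeatPicker = lazy(() => import('seatPicker/SeatPicker'));

class RemoteErrorBoundary extends Component<
  { fallback: ReactNode; children: ReactNode },
  { hasError: boolean }
> {
  state = { hasError: false };
  static getDerivedStateFromError() {
    return { hasError: true };
  }
  render() {
    return this.state.hasError ? this.props.fallback : this.props.children;
  }
}

export function SeatPickerSlot() {
  return (
    // If the remote entry or its manifest cannot be fetched, show a fallback
    // instead of crashing the whole host application.
    <RemoteErrorBoundary fallback={<p>Seat selection is temporarily unavailable.</p>}>
      <Suspense fallback={<p>Loading seats…</p>}>
        <SeatPicker />
      </Suspense>
    </RemoteErrorBoundary>
  );
}
```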

And as you can see here, once we import and consume them, the host application uses them pretty normally. The difference is that these are coming in at runtime, and as you can see, the browser is rendering them as normal HTML. Something really important to notice, and that was also implemented in our use case, is that both applications, the host one and the producer one, had their own business logic and they were independent. Right? We didn't share the backend at all. And this is pretty important in micro frontends, because we try to ensure decoupling and independence in deployment. So having a backend or a BFF that is specific to each micro frontend is important to ensure independence. Now, let's talk about some features and things to take into account, for example, types. If we think about the manifest that we were seeing before, these are being consumed at runtime. Right? But what happens with type safety? Because we actually need it, right? To have type safety itself, auto-complete features and so on. If this is at runtime, we cannot do it, because we don't know what the types from the other team are. Right? We just have a URL. So what we did in our team was to extract the types: we created a script to extract the types from the other project and import them into ours. But that was only possible because we actually had access to that repository. Something that was implemented in Module Federation 2.0 is that we now have the TypeScript plugin and the Enhanced plugin, which allow us to do sort of the same thing, right? Once we expose something in our module, the plugin automatically recognizes what we are exposing, extracts the types, saves them in a folder, and serves that folder as an asset for the host application to download, right? So when the host goes ahead and says, hey, I want this remote in my application, the plugin automatically recognizes that and tries to download that folder. Once the folder is downloaded, it automatically extracts it. But we had to do some setup, as you can see here in our TypeScript config, to use it as a fallback path, right? So if the types for any element are not found in the local project itself, TypeScript looks for them in the folder that was downloaded. And that is the way we get auto-complete features and type safety for the federated modules. Now, I want to talk about a concept that is called full-federated sites. When we were developing our application, the other team actually didn't have a way to see how their application would work in our host application. I mean, they could have downloaded our application and started the service to see how it works, or waited for us to deploy the application and gone to the dev environment to see it, right? But they couldn't actually see it during their development cycle.
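Coming back to the type-safety setup described above, the fallback path could look roughly like this. The @mf-types folder name is a commonly used default of the DTS plugin and is an assumption here, as is the remote module name.

```ts
// host/tsconfig.json (excerpt, shown here as a comment): types that cannot be
// resolved locally fall back to the folder of remote declarations downloaded
// by the DTS plugin. The "@mf-types" folder name is an assumed default.
//
//   "compilerOptions": {
//     "baseUrl": ".",
//     "paths": { "*": ["./@mf-types/*"] }
//   }

// host/src/loadSeatPicker.ts: with the fallback in place, this import resolves
// against the downloaded declarations, so it is typed and auto-completes.
export async function loadSeatPicker() {
  const { default: SeatPicker } = await import('seatPicker/SeatPicker');
  return SeatPicker;
}
```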

4. Full-Federated Sites and Performance Optimization

Short description:

Full-federated sites create circular dependencies between applications, enabling fast development cycles and easy testing. Importing components from a federated module requires dynamic imports, dependency sharing, and performance considerations. Shared libraries should have defined versions, and libraries with internal state should be singletons. Debugging tools like Medusa and the Module Federation plugin for Chrome help identify and resolve issues. Consistent styling can be achieved by using the same UI kit. Routing can be handled by a shell application or frameworks like Single SPA. State management matters for tightly coupled applications, and techniques like prop drilling and query parameters can help decouple applications.

For example, something was mounting inside another component, and we started having those sorts of issues, right? Also, something that happened to us is that, since that application had a lot of complex setup for the backend and really complicated requirements for it to be started, we weren't actually able to run them both at the same time, right? So what we ended up doing was to create a federated module that was used as a fallback. It had really basic functionality that helped us see how our application would work as a whole, I mean, completed with the information obtained from the other app, right? But again, we didn't actually know how that application would work with the host one. So that is where this concept of full-federated sites could have been useful for us at the time, because it creates sort of a circular dependency between applications, so that the host not only becomes a consumer but also a producer. So as you can see here, for example, in the theater application, the home app is going to expose itself, right? And in the other application, we can also consume the host application and create a second script to be able to start our application together with the host one, right? So that gives us a really fast development cycle, and it becomes pretty easy to see how they both interact when, for example, they have something coupled, or some state, or something that actually needs to interact between them. It will be easier to test, and even easy to use for end-to-end tests.
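A full-federated configuration of that kind might be sketched as follows, with the host exposing itself in addition to consuming remotes; all names and URLs are illustrative, not taken from the talk.

```ts
// webpack.config.ts of the host in a full-federated setup: a sketch only.
import { ModuleFederationPlugin } from '@module-federation/enhanced';

export default {
  plugins: [
    new ModuleFederationPlugin({
      name: 'theaterHost',
      filename: 'remoteEntry.js',
      // The host is still a consumer of the producer apps...
      remotes: {
        seatPicker: 'seatPicker@https://cdn.example.com/seat-picker/mf-manifest.json',
      },
      // ...and also a producer of itself, so the seat-picker team can pull the
      // whole shell into their local dev server and see both apps together.
      exposes: {
        './App': './src/App',
      },
      shared: { react: { singleton: true }, 'react-dom': { singleton: true } },
    }),
  ],
};
```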

Also, I wanted to talk about some other things, for example, performance. I think it's important to take into account several factors when we import components from a federated module. The first one is that it is really important to use dynamic imports, to reduce the initial bundle size and to load these modules only when they are needed, right? Also, bootstrap the application, to give webpack the opportunity to process the rest of the imports before executing the app and avoid race conditions. The most important one, I think, is dependency sharing, because not specifying shared packages can result in duplication and slow down the experience, and not defining versions for shared libraries can also cause errors between the applications. And if there are libraries with internal state, we have to make sure to make them singletons, to ensure that we have only one instance running, right? And that is also where debugging comes in, because when you start having performance issues due to shared dependencies, we can use tools like Medusa, where we can get a better understanding of how our micro frontends are connected, and especially which dependencies we are using and which need attention. For example, here it is telling us that a specific dependency can be declared as shared. And something really cool that was introduced in Module Federation 2.0 is that we now have a plugin for Chrome DevTools, a Module Federation plugin, that we can install to see how our application is connected and how our modules are being consumed. That is pretty cool for debugging, because if something fails, you can also see what other applications could be affected, right?
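The bootstrap and dependency-sharing points could look like this in practice; this is a sketch only, with package names and versions as placeholders.

```ts
// host/src/index.ts: the entry file only kicks off ./bootstrap asynchronously,
// which gives webpack an async boundary to negotiate shared dependencies before
// any application code runs, avoiding race conditions with federated imports.
import('./bootstrap');

// host/webpack.config.ts (excerpt, as a comment): declare shared packages
// explicitly so they are not duplicated across remotes, pin versions, and keep
// libraries with internal state as singletons. Versions below are placeholders.
//
//   shared: {
//     react: { singleton: true, requiredVersion: '^18.2.0' },
//     'react-dom': { singleton: true, requiredVersion: '^18.2.0' },
//     '@tanstack/react-query': { singleton: true }, // example of a stateful lib
//   }
```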

Also, for styling, what we did to keep our application coherent was to use the same UI kit, which allowed us to reuse built components and avoid customization as much as possible. So if we needed to change something, we changed it in the UI kit library. That way we avoided presenting the user with something like Comic Sans on one page and Times New Roman on another. Right. Also, routing is really important to take into account. In our application, what we did was use the concept of a shell application, which is pretty much the host, and whose sole purpose is to orchestrate the micro frontends, right? So the routing only lives inside the host application, where we can import the modules with lazy loading, render them when necessary, and basically handle the routing as we need. Other options you can use are frameworks, for example, Single SPA. Single SPA allows you to mount and unmount components to improve performance and load them in specific places or on specific routes, right? Finally, state management. We had really tightly coupled applications, so state management was handled basically from the producer application, because we actually needed to get the state from there. We used hooks, which made it kind of easier to do. But I would say that if you actually want to go all the way, and if we had been sharing modules not only because of the strangler pattern but for different use cases, there are also things we could have used, like prop drilling, pub/sub, query parameters, or local storage, anything that could have avoided coupling the applications. And we always need to try to decouple them.
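Shell routing of the kind described above is commonly wired up along these lines; React Router and the route paths are assumptions, since the talk does not show its routing code.

```tsx
// shell/src/App.tsx: the shell only orchestrates. It owns the routes and
// lazy-loads each federated module where it is needed (illustrative names).
import { lazy, Suspense } from 'react';
import { BrowserRouter, Routes, Route } from 'react-router-dom';

// Both components come from federated remotes and are fetched on navigation.
const SeatPicker = lazy(() => import('seatPicker/SeatPicker'));
const UserProfile = lazy(() => import('userProfile/UserProfile'));

export default function App() {
  return (
    <BrowserRouter>
      <Suspense fallback={<p>Loading…</p>}>
        <Routes>
          <Route path="/seats" element={<SeatPicker />} />
          <Route path="/profile" element={<UserProfile />} />
        </Routes>
      </Suspense>
    </BrowserRouter>
  );
}
```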

5. Decoupling and Growth Strategies

Short description:

Decoupling components using dedicated federated modules and weighing trade-offs when designing application growth strategies. Module Federation is useful for the strangler pattern and long-term application development. Determine whether micro frontends are necessary based on the need for independent, modular components in a growing product.

We could even have tried to create a dedicated federated module for state and used it to decouple them. That is also something to take into account.

And to give you some conclusions: when deciding which approach to use as you intend to grow your application, you will find out that pretty much everything is a trade-off. Everything has its consequences, and we have to accept the ones that benefit us the most and affect us the least according to our project, our teams, our applications. In our use case, I think that Module Federation was really useful, as mentioned, for the strangler pattern. And that actually helped us to build the host application, the one that was going to be there for the long run, for the future, until we migrate the other one.

Now, if we ask ourselves whether we should start directly with micro frontends and any of their implementations, I think it's important to ask ourselves if we actually have the problems that micro frontends solve. For example, we mentioned that the key feature is having independently deployable parts of the system. So this means that our product has grown, and a lot of what we are seeing in maintenance guides us to separate the problem into smaller pieces. That is where I think we can start thinking about it. And yeah, I think that is pretty much it. Thank you very much for joining me, and I hope that you enjoyed it. Thank you.
