React at Scale with Nx

Nx core can parse TypeScript or JavaScript code to automatically draw dependencies; for other languages, a plugin is needed to provide that support. The code has to be parsed to determine the relationships between projects. Nx can handle most of this for you, but manual dependency management is also possible. Regarding Ben's question about pre-commit hooks: the nx affected command can run specific tasks only for the projects affected by a change.

In lab nine, a new type library called util-interface is created in the API folder and used in both the API and the store. The API build task depends on the util-interface build task, ensuring the correct order of execution. The util-interface library is then moved to the top level so that the store application no longer imports from the API folder.

Nx gives you all the benefits of a monorepo without the drawbacks of code collocation. It provides linting rules, generators, and plugins for consistent coding practices, allows controlled code sharing, and maintains an accurate architecture diagram. Because Nx understands the dependencies between projects, it can run tasks and tests accurately.

Libraries can be categorized into feature libraries, UI libraries, data-access libraries, and utility libraries. They have specific dependency rules and can be nested under folders. Splitting code into libraries is a trade-off, balancing the benefits of code organization against the cost of generating more folders. Visualizing the project graph helps you understand the repo structure.

Nx generators can be used for migrations, keeping code in sync and up to date; running nx migrate updates your code and automatically migrates breaking changes introduced by plugins. There is a trade-off between having fewer or more libraries and applications: creating more libraries allows for cacheable targets and the potential to run fewer tests, and it also enables setting up constraints between dependencies.
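The build ordering described above (the API's build waiting on the util-interface build) is what Nx's `dependsOn` setting expresses. A minimal sketch of an `nx.json` fragment, assuming a recent Nx version where task pipelines are configured under `targetDefaults`:

```json
{
  "targetDefaults": {
    "build": {
      "dependsOn": ["^build"]
    }
  }
}
```

Here `^build` means "first run the `build` target of every project this project depends on," which is how the API's build is made to wait for util-interface's build.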
However, creating too many libraries can lead to overhead. It's important to have a reasonable number of libraries; one component per library is too many. When making changes, it's best to have related components in the same library. The same trade-off applies to folder structure: more folders allow better organization but can make it harder to find related items.

We're going to talk about developing at scale with Nx monorepos. A monorepo is a single repository that contains multiple distinct projects with well-defined relationships. A monorepo is great for atomic changes, sharing code easily, and having a single set of dependencies. Code collocation is where you just drop code together without monorepo tooling in place; the issues that follow are running unnecessary tests, not having code boundaries, and having inconsistent tooling. Nx can help with these problems.

We've discussed plugins and how they help with code consistency and automation. We've added the Nx React plugin and other packages, used Nx Console to generate a store application with React and webpack, and created a fake-api.ts file in the store folder.

Let's talk about executors and how they give you discoverable flags for running commands. We'll explore building the application and configuring the executor for production or development. The serve target uses the build target, allowing you to change options in one place. Executors are fully configurable.

We're making calls to the games API and setting up a redirect using a proxy config JSON file. We update the files for the detail view in the store's feature-game-detail library. The libraries we've created are connected, but the API is still disconnected from the store. In the next lab, we'll share types between the front end and back end by generating a new JavaScript library called util-interface. This is lab eight, and we'll move on to lab nine.

Let's set up CI using the affected command to run tests only on the affected code.
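The proxy redirect mentioned above is typically a small `proxy.conf.json` next to the store app that points the frontend's `/api` calls at the backend dev server. A sketch; the port is an assumption (3333 is a common Nx default for Express apps), not something stated in the text:

```json
{
  "/api": {
    "target": "http://localhost:3333",
    "secure": false
  }
}
```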
The CI yaml file is configured to run tests for everything affected by the PR. The affected command allows running tasks only against specific changes. A fetch depth of zero ensures the base commit is checked out as well, so that Nx can compare the file sets.

This part explains the typical folder structure for Nx, including the apps and libs directories. It also covers the dist directory for build output, the workspace.json and nx.json files, and the tsconfig.base.json file for TypeScript path aliases. There are workspace-level and project-level configuration files for TypeScript and Jest.

The first lab involves creating a new workspace using npx create-nx-workspace or yarn create-nx-workspace. The name given to the workspace is used for the directory, the path alias, and the npm scope. The part concludes with instructions on creating the BG Horde workspace using yarn create-nx-workspace.

Next we update the TypeScript and CSS: we modify app.tsx and app.module.scss, copy the example images, and serve the store to see the styling, Material cards, and images. We now have a basic setup with React, testing, and linting. Running the tests and nx lint works, and we can also run end-to-end tests with nx e2e store-e2e.

To run the generators and migration scripts, ensure that the latest packages are installed. Running nx migrate --run-migrations reads the migrations.json file and executes the generators. It's important to have a clean git history when running generators or migration scripts. After running the migrations, you can delete the migrations file; it is mainly useful in larger repositories where some generator scripts may not work, or when merging PRs from multiple developers.

The complete-labs generator can be used to complete a single lab or multiple labs. nx generate can be used to create a new React library with the Rollup executor. There was an issue with the Storybook configuration, which was resolved by installing a missing package.
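A hedged sketch of what such a CI workflow can look like on GitHub Actions; the action versions and package-manager commands are illustrative, not the workshop's exact file. The important detail from the text is `fetch-depth: 0`, which gives Nx the base commit to diff against:

```yaml
# .github/workflows/ci.yml (illustrative)
name: CI
on: pull_request
jobs:
  main:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0   # full history, so the PR's base commit is available locally
      - uses: nrwl/nx-set-shas@v3   # resolves base/head SHAs for `nx affected`
      - run: yarn install --frozen-lockfile
      - run: yarn nx affected --target=test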
Storybook was set up to render a component with default options, and args let you dynamically change the inputs. Cypress tests can be set up to run against Storybook, and the generator can generate Cypress specs for each story. Tags can be added to projects to define dependencies between them, and rules can be set to control how projects with specific tags may depend on each other.

We moved the api/util-interface library to just util-interface, and the code and files were updated accordingly. The dependency graph shows that util-interface is used by the store, the feature-game-detail library, and the API. If different languages are used, manual edges can be added to establish a connection between projects. The next lab will demonstrate the power of generators by generating Storybook stories for components.

When using the Storybook configuration generator, make sure the versions are in sync. Use the shared UI project and generate the Storybook configuration for store-ui-shared. Check the checkboxes and run the generator script; it creates the configuration files and a stories file. Set up an input on the component so the generator automatically creates a knob in the story, then run the generator script again to recognize the input. Install the dependencies and serve Storybook, check the default configuration, and make any necessary adjustments.

We've set up rules for scope and type dependencies to ensure proper code organization and prevent unauthorized imports. Running lint confirms that the rules are being enforced, and we tested failing scenarios to make sure violations are caught. This automated process is especially valuable in larger repositories where manual reviews become impractical. Additionally, we discussed the caching configuration in nx.json, which improves the performance of linting operations. If you have any questions or specific topics you'd like me to cover, please let me know.
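For the "manual edges" case (for example, a project written in a language Nx cannot parse), a project can declare the dependency explicitly in its `project.json` via `implicitDependencies`. A minimal sketch with illustrative project names:

```json
{
  "name": "store",
  "implicitDependencies": ["api"]
}
```

This draws an edge from `store` to `api` in the project graph even though no import statement connects them.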
If time permits, we can also discuss CI.

We discussed the use of workspaces in a package-based repo and how they let your package manager know that there are other package.json files inside your repository. By setting the nx property in a project's package.json, you can add configuration options and specify dependencies between projects using the dependsOn property. By default, however, the dependency graph is based on the dependencies defined in your package.json files. To detect dependencies from the actual code within the repository, you can enable the analyzeSourceFiles flag in nx.json, which creates a dependency graph based on the source files. It's important to note that enabling this flag may reveal unexpected dependencies and circular dependencies. We're almost out of time. Any final questions?

Targets need to reference a specific executor that's available, and each executor has its own options. Nx Console enables autocomplete for those options based on the executor's metadata. Outputs and inputs can be specified, and default configurations can be set; configuration names are arbitrary. Lab three is complete.

We've applied tags to all nine projects. Now, let's modify the enforce-module-boundaries rule in the ESLint file to add rules. By default, there is a rule that allows anything to depend on anything. We can run lint on multiple projects using nx run-many. We have a rule that restricts scope:store to depend only on scope:store and scope:shared. However, the API is currently depending on util-interface, which violates the rule.

The basics of CI involve running tests only on the specific projects that have changed, rather than on the entire repository. If you want to use executors, you need to switch to an integrated repo with a project.json file. However, if you have no need for executors, you can continue using a package-based repo, and all caching and task pipeline setups will still work.
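The scope:store rule described here lives in the workspace's root ESLint config as a dependency constraint. A sketch; depending on the Nx version, the rule is namespaced `@nrwl/nx/enforce-module-boundaries` or `@nx/enforce-module-boundaries`:

```json
{
  "overrides": [
    {
      "files": ["*.ts", "*.tsx"],
      "rules": {
        "@nx/enforce-module-boundaries": [
          "error",
          {
            "depConstraints": [
              {
                "sourceTag": "scope:store",
                "onlyDependOnLibsWithTags": ["scope:store", "scope:shared"]
              }
            ]
          }
        ]
      }
    }
  ]
}
```

With this in place, an import from a scope:store project into, say, a scope:api project fails `nx lint` rather than waiting for a human reviewer to notice.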
You can set up dependencies between builds and tasks just fine. If you have any specific questions or need advice for your repo, feel free to ask.

We created a new React component called header inside the libraries folder, ran the migration to copy the code over, and used the header in the app.tsx file. We refreshed the page and saw the header, then looked at the project graph to see the dependencies.

We ran labs five to seven, which created more libraries and an Express API application. Lab five introduced a utility library using the Nx JS plugin instead of Nx React. Lab six introduced a routing library using Nx React. Lab seven installed the Nx Express plugin and created an application with it.

The API was created as an Nx Node application with an Express app and routes. The fake-api.ts file was moved to the server side. The header and formatter libraries are used, with formatRating defined inside util-formatters. Lab eight will connect the API and the front-end store application. Circular dependencies should be avoided, but the check can be disabled if necessary. Lab eight involves deleting the fake API, using fetch and useEffect in the game-detail component, and serving the API and the store.

From Author:

We're going to be using Nx and some of its plugins to accelerate the development of this app.


Some of the things you'll learn:

- Generating a pristine Nx workspace

- Generating frontend React apps and backend APIs inside your workspace, with pre-configured proxies

- Creating shared libs for re-using code

- Generating new routed components with all the routes pre-configured by Nx and ready to go

- How to organize code in a monorepo

- Easily move libs around your folder structure

- Creating Storybook stories and e2e Cypress tests for your components


Table of contents: 

- Lab 1 - Generate an empty workspace

- Lab 2 - Generate a React app

- Lab 3 - Executors

- Lab 3.1 - Migrations

- Lab 4 - Generate a component lib

- Lab 5 - Generate a utility lib

- Lab 6 - Generate a route lib

- Lab 7 - Add an Express API

- Lab 8 - Displaying a full game in the routed game-detail component

- Lab 9 - Generate a type lib that the API and frontend can share

- Lab 10 - Generate Storybook stories for the shared ui component

- Lab 11 - E2E test the shared component

This workshop was presented at React Summit 2023; check out the latest edition of this React conference.

FAQ

A MonoRepo, or monolithic repository, is a single repository that contains multiple distinct projects with well-defined relationships. This setup allows for easier code sharing, atomic changes across projects, and streamlined dependency management.

The benefits of using a MonoRepo include atomic changes, simplified code sharing, and a single set of dependencies. This facilitates faster development cycles, reduces integration issues, and simplifies project management across multiple teams or projects.

Atomic changes refer to the ability to make changes across multiple projects within a MonoRepo in a single commit. This ensures that all interdependent components are updated together, minimizing the risk of compatibility issues.

In a MonoRepo, code sharing is facilitated by direct access to the shared codebase within the repository. Teams can easily export and import functions or components across different projects without the need for separate package management.

Without proper tooling, a MonoRepo can lead to issues such as running unnecessary tests, lack of clear code boundaries, and inconsistent coding practices. These challenges can decrease efficiency and increase the risk of errors during development.

Isaac Mann
145 min
17 May, 2023


Video Transcription


1. Developing at Scale with Nx Monorepos

Short description:

We're gonna talk about developing at scale with Nx monorepos. A monorepo is a single repository that contains multiple distinct projects with well-defined relationships. A monorepo is great for atomic changes, sharing code easily, and having a single set of dependencies. Code collocation is where you just drop code together without monorepo tooling in place. The issues include running unnecessary tests, not having code boundaries, and having inconsistent tooling. Nx can help with these problems.

We're gonna talk about developing at scale with Nx monorepos. So what is a monorepo? A monorepo is a single repository that contains multiple distinct projects with well-defined relationships. So, you know, multiple different apps working together, or it could be one app with multiple sub-projects inside of it. And you need to have well-defined relationships between them. If you just dump all the code of two different apps in the same repo, we call this code collocation, and it's a mess. You'll run into lots of problems with that. Let's talk about what those things are.

So a monorepo is great because it lets you have atomic changes, lets you share code easily, and lets you have a single set of dependencies. Let me go into each of these and explain what they are. Atomic changes: let's say you have an application that consumes a UI library. If you have that UI library in a separate repo from your application, the change workflow goes something like this. Say you make a change to the UI library that breaks a test in the application. You publish that UI library, and at some point later the application developer bumps their version of the library and realizes, hey, you broke my application. So they tell you about it, or they file an issue. Then the library developer has to come back, fix the bug, and publish a new version. A few days later, the application developer bumps their version again and says, okay, that fixed it. That whole life cycle is probably a minimum of a week of actual developer time before the change is actually fixed and corrected. Whereas if they were in the same repo, the library developer would just run the tests before they even make the commit, see that they broke the app, and fix it right away. So it goes from a week of cycle time to something like 30 minutes, and you're not doing that context switching, going back to something you worked on a week ago to finally fix it. That's one benefit of a monorepo. The second benefit is sharing code. Say you have some logic to validate whether a username is valid, and you want to reuse that logic across multiple applications or multiple subsections of your app. If you were going to do that across separate repos, you'd have to publish it and keep the version numbers in sync.
Whereas in a monorepo, all you need to do to share that logic is export a function and then reuse that function wherever it's needed. That makes it dead simple to keep the logic in sync: if you want to change it, you just update the function, and everywhere instantly uses the new logic. The other benefit is having a single set of dependencies. If you have multiple different versions of your framework, Angular or React, that can cause strange bugs. If a library is on an old version of React and the application is on a newer version, there can be hard-to-debug runtime errors caused by that mismatch. The other issue is that whenever you have multiple applications, there's usually one main application that gets worked on all the time, so it stays on the latest version of the framework. But if you have another application that you update every three months, or whenever you get around to it, the upgrade is always a pain. You have to remember what the tricky things were about upgrading to that version of React six months or a year ago, and go through all the same pain points you did the first time, but twelve months later. It's a lot easier to do all the application upgrades at the same time, because you're solving the same problem in ten different places at once, instead of re-solving it ten different times over the course of a year, which is really painful.
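The username-validation example above can be sketched in a few lines. The function name, the validation rule itself, and the import alias are all illustrative, not the workshop's actual code:

```typescript
// libs/shared/util-validation/src/lib/is-valid-username.ts (hypothetical path)
// One shared rule, exported once and imported by every app that needs it.
export function isValidUsername(name: string): boolean {
  // Illustrative rule: 3-20 characters, starting with a letter,
  // then letters, digits, or underscores.
  return /^[a-z][a-z0-9_]{2,19}$/i.test(name);
}

// Any app in the workspace imports it through the TypeScript path alias, e.g.:
// import { isValidUsername } from '@bg-horde/shared/util-validation';
```

Changing the rule in this one function immediately changes it for every consumer; there is no package to publish and no version number to bump.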

Code collocation is where you just drop code together without having monorepo tooling in place. The issues you run into are running unnecessary tests, not having code boundaries, and having inconsistent tooling. Unnecessary tests: let's say you change the products homepage project, which depends on a UI library. If you make changes to the products homepage, there's no way you broke the tests for the product UI library, so there's no point running those tests. But without your tooling knowing how that dependency graph actually works, there's no way for it to say: these tests you need to run, these tests you don't. So you need something that knows the dependency graph. You could theoretically do that yourself and say, I know this test needs to run and these others don't. But in a normal repo with multiple applications, the project graph looks more like this. I've even seen graphs with thousands of nodes in them. There's no way you can do that in your head and get all the tests right every time. So you want a tool that does this for you, so you run all the tests you need to run, but not any of the tests that are guaranteed not to have broken. The other thing is code boundaries. Let's say you have your code in a shared repo, and you have some function that you're playing around with. It's intended for internal use within your project, and you don't want other people to use it, because it might change frequently, right? But somebody reaches in and starts using that function in their application. And then at some point later, you change it. You've broken their app, and they're upset with you. So now you're forever on the hook for keeping that function's API stable, or fixing their code wherever it's used. So there needs to be some way of saying: these functions are available for you to use.
These functions are not available for you to use. So that you can clearly say: here's the boundary; this is my internal stuff, and this is the public stuff. And the other thing is inconsistent tooling. Every application, every project has npm scripts with all sorts of weird flags. So whenever you go into a new code base, you have to figure out: what does this mean, and why would I ever run this? That's really hard to know in each new application. So you need some way of making those things discoverable and well documented. This is what Nx can help with.
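The public/internal boundary described here is conventionally drawn at a library's index.ts barrel file: only what is re-exported there is part of the public API. A sketch, collapsed into one file for brevity, with illustrative names:

```typescript
// libs/store/util-formatters/src/lib/format-rating.ts (illustrative)

// Internal helper: NOT re-exported from the library's index.ts, so other
// projects cannot legitimately depend on it, and it is free to change.
function clamp(value: number, lo: number, hi: number): number {
  return Math.min(hi, Math.max(lo, value));
}

// Public function: re-exported from index.ts, so this signature is the
// contract the rest of the repo is allowed to rely on.
export function formatRating(rating: number): string {
  // Clamp to the 0-5 range and always show one decimal place.
  return clamp(rating, 0, 5).toFixed(1);
}

// libs/store/util-formatters/src/index.ts would then contain only:
// export { formatRating } from './lib/format-rating';
```

Combined with Nx's module-boundary lint rule, reaching past index.ts into `lib/` becomes a lint error instead of a silent, accidental dependency.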

2. Benefits of NxMonoRepos

Short description:

It gives you all the benefits of the monorepo without the drawbacks of code collocation. Nx provides linting rules, generators, and plugins for consistent coding practices. It allows controlled code sharing and an accurate architecture diagram. Nx understands the dependencies between projects, ensuring accurate task execution and test running.

It gives you all the benefits of the monorepo without the drawbacks of code collocation. It can give you faster command execution, controlled code sharing, consistent coding practices, and an accurate architecture diagram. Faster command execution: there are executors, which help you run whatever tasks you need to run against your code, the build, test, and lint, those kinds of things. Nx affected allows you to run commands only on projects that were affected by a code change and not on anything that wasn't affected. Local and distributed caching also speed up your average runtime, in CI or locally: if the inputs for a task haven't changed, then the output is already known, so it's pulled from the cache instead of rerun. Distributed caching allows you to share that cache across your whole organization instead of just locally on your machine. Controlled code sharing: you can set up an API for your projects and say these are the functions that are public for anyone else to use, and anything that's not exported from the API file, that index.ts file, is private, so you can't use it. You can set up tags to say these particular types of projects can depend on these other types of projects, but other ones cannot. So whatever structure your organization needs, you can express with those tags: these types of projects can be used within this team, and these other projects are shared for anyone to use. You can also create publishable libraries to publish to NPM. And you can use a CODEOWNERS file, a GitHub feature that says if you make changes within these folders, these particular people have to approve the PR before it can merge. All those things are useful for managing a large repo. Consistent coding practices: Nx provides linting rules, and generators let you generate code and modify code.
So you keep things up to date and add new code in a consistent way. There are official plugins provided by Nx, and there are also community plugins. So anybody can write a plugin, publish it, and say, this will give you support for some tool that Nx hasn't supported officially. And then you can have an accurate architecture diagram. Nx understands how the different projects relate to each other: not the way you think they depend on each other, or the way you wish they depended on each other, or the way they depended on each other six months ago, but what the code actually says about how these projects depend on each other. That's invaluable for having an accurate view of which tests actually need to be run, and which tasks need to be run before which other tasks, those kinds of things.
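The local caching described here is configured in nx.json; in the Nx versions current around the time of this workshop, the cacheable targets were listed under tasksRunnerOptions. A sketch:

```json
{
  "tasksRunnerOptions": {
    "default": {
      "runner": "nx/tasks-runners/default",
      "options": {
        "cacheableOperations": ["build", "lint", "test"]
      }
    }
  }
}
```

With this in place, rerunning an unchanged target (for example `nx lint store`) replays the cached output instead of re-executing the task.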