And so, based on the graph, it knows which tasks can run in parallel, which ones it may want to assign to a dedicated agent because they are long-running, and which ones need to be executed serially. And the DX part comes in here: all the logs and artifacts are collected, sent back to the main node, and grouped together. So from a developer's perspective, if you get an error or want to look at the actual run, you can just go to the logs and you will see them exactly as if everything had run on a single machine.
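To give a feel for where that ordering comes from: the task graph is built from the project graph plus the task dependencies you declare in nx.json. The snippet below is a minimal sketch, assuming a reasonably recent Nx version; exact keys can differ slightly between versions, and the runner options shown are only one possible setup.

```jsonc
// nx.json (sketch) — "dependsOn": ["^build"] means "build my dependencies
// first", so builds of independent projects can run in parallel while
// dependent ones are ordered serially behind them.
{
  "targetDefaults": {
    "build": {
      "dependsOn": ["^build"]
    }
  },
  "tasksRunnerOptions": {
    "default": {
      "runner": "nx-cloud",
      "options": {
        "cacheableOperations": ["build", "test", "lint"]
      }
    }
  }
}
```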
And it's even cooler because all those tasks are now distributed across the agents. Here you can see a screenshot of Nx Cloud showing the utilization of the agents, and you can see how they are balanced out based on the data Nx Cloud has collected from previous runs, which is how it knows how to best parallelize those tasks. And you even get a nice visualization: whenever a PR runs, you can see in real time how many agents are currently running and which tasks are running on which agent, which is particularly important for debugging purposes.
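To make the distribution concrete, here is a rough sketch of what such a setup looked like around this time, using GitHub Actions as an example. Job names, the agent count, and the install steps are illustrative assumptions; the nx-cloud commands are the ones documented for distributed task execution in that era.

```yaml
# Sketch of Nx Cloud distributed task execution on CI (illustrative only).
jobs:
  main:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0   # full history so "affected" can compare against the base branch
      - run: npm ci
      # Tell Nx Cloud a distributed run is starting; affected tasks will be
      # farmed out to the agent jobs below instead of running here.
      - run: npx nx-cloud start-ci-run --stop-agents-after=build
      - run: npx nx affected --target=lint --parallel=3
      - run: npx nx affected --target=test --parallel=3
      - run: npx nx affected --target=build --parallel=3
      - run: npx nx-cloud stop-all-agents
  agents:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        agent: [1, 2, 3]   # number of agents is a tuning choice, not a requirement
    steps:
      - uses: actions/checkout@v3
      - run: npm ci
      # Each agent polls Nx Cloud for work; logs and artifacts are sent back
      # and regrouped, so the run reads as if it happened on one machine.
      - run: npx nx-cloud start-agent
```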
Now, this fast part was already a bit interleaved with the DX part, as we have just seen: the visualization, for instance, helps you debug things from a developer experience perspective. But DX is very important, not just with React but in the monorepo scenario in general, because there's nothing worse than a fast, feature-rich monorepo setup that is super hard to use or configure from a developer's perspective. And so in Nx specifically, the developer experience is a key part. First of all, things need to be incrementally adoptable. Nx can be set up in two main ways. You can get started with a core setup, which means you don't use any of the plugins that come with Nx; you just install Nx and use it on your existing monorepo.
And so for migration scenarios, this is ideal, because you can keep the same infrastructure you had before and Nx will just make running the tasks much faster. What you leverage is the fast task scheduler, the caching, and also the distributed task execution if you use Nx Cloud, which you've just seen. The other approach is that if you start fresh, you can benefit from setting up Nx with some pre-configured pieces. So you can say, okay, I know I'm building a React monorepo because React is my main focus, and use one of the pre-configured templates that Nx comes with. Nx will then make sure you get Jest, ESLint, Prettier, and Cypress configured for you, so you don't have to worry about a lot of that stuff. That is usually the best option if you're starting a new monorepo right now.
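For illustration, a hedged sketch of what starting such a pre-configured workspace looks like; the workspace name is a placeholder and the exact prompts vary between Nx versions.

```shell
# Create a new workspace from the React preset (Jest, ESLint, Prettier,
# Cypress come pre-wired). "myorg" is just an example name.
npx create-nx-workspace@latest myorg --preset=react
```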
So the core setup is super easy to adopt. You basically just run the npx add-nx-to-monorepo command, which adds Nx to any npm, yarn, or pnpm workspace and just makes it a lot faster. Interestingly, as I mentioned initially, we also took over stewardship of Lerna, and now we can do some very interesting things, especially for Lerna workspaces. For instance, right now, if you're using Lerna 5.1+, you can just install Nx and set useNx to true in your lerna.json, and it will automatically defer the task scheduling to Nx, making Lerna super fast without you having to change anything else, which I think is super important from a developer ergonomics standpoint.
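As a concrete sketch (command and flag names as documented around Lerna 5.1 / Nx 14; treat exact details as version-dependent):

```shell
# Core setup: add Nx to an existing npm / yarn / pnpm workspace, no plugins.
npx add-nx-to-monorepo
```

And for an existing Lerna workspace, the single flag that defers task scheduling to Nx:

```jsonc
// lerna.json (Lerna 5.1+) — everything else stays as it was.
{
  "version": "independent",
  "useNx": true
}
```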
Another thing is beautiful terminal output. Now, this might not sound that important initially, but think about how often you look at terminal output as a developer. To reduce the cognitive load, Nx really just shows you what is most important right now. It doesn't show you, for instance, the execution of dependent tasks, as in this animation here, but just what actually gets executed, unless of course an error occurs; that would obviously be highlighted, big and in red. And even if you rerun the tasks and they get cached, the output looks exactly the same. So you have a much lower cognitive load when you parse those logs, because you just see what you need right now. Also, IDE integration.