Once we know how to invoke a file using each runtime, we can create a subprocess, for example using spawn. In this way, we can run the file with each runtime's own command, like `node`, `deno run`, or `bun run`. And, in the options, we can use a sensible default for everyone. And, the same idea for TypeScript, for example. It will run not exactly a test; it will run a JavaScript or TypeScript file. And, what about the chicken-and-egg paradox? Well, we can just use some really basic logic to detect the runtime and properly propagate it across all subprocesses. And, it will ensure each test runs under the same runtime that triggered our test run.
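The detect-then-invoke idea can be sketched like this. This is a minimal sketch, not the talk's actual code; the helper names `detectRuntime` and `commandFor` are made up for illustration, and the exact flags each runtime needs are assumptions.

```javascript
// Sketch: figure out which runtime is executing us, then build the
// matching command to invoke a test file directly.
// Bun and Deno expose global objects; if neither exists, assume Node.
function detectRuntime() {
  if (typeof Bun !== 'undefined') return 'bun';
  if (typeof Deno !== 'undefined') return 'deno';
  return 'node';
}

// Map each runtime to a [binary, args] pair that runs a JS/TS file.
// (The `--allow-all` flag for Deno is an assumption for the sketch.)
function commandFor(runtime, file) {
  switch (runtime) {
    case 'bun':  return ['bun', ['run', file]];
    case 'deno': return ['deno', ['run', '--allow-all', file]];
    default:     return ['node', [file]];
  }
}

const runtime = detectRuntime();
const [bin, args] = commandFor(runtime, 'file.test.js');
console.log(bin, args.join(' '));
```

The same pair of helpers works for TypeScript files, since Deno and Bun execute them natively.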
Here is an example of the test executor. See, we have a file.js. We identify our runtime and create a subprocess based on which runtime triggered our test run. And, ok, you just got it. And, simply, we solved the problem we had with threads, our VMs, our workers. Node, for example, uses the fork approach. It's really similar, but, in our case, this fits better because we need to run across different runtimes, and fork will always run using Node.
So, let's talk about performance now. We can find and list all our test files, create concurrency, and figure out what the best resource limit is, for example, the number of CPUs available, so we choose a way to prevent overload. And, plus, each subprocess that ends disappears from memory. As you can see, this is the before, using the default sequential way, and this is the new way with our test runner. So, in this example, you can see describe 1, 2, 3, 4, 5. And, look, on the right, it is written exactly as plain JavaScript. And, in a real example, on the left you can see a Node example: we need a before hook to prepare the connection, then create the connection, and an after hook to close the connection, so we can test. And the assertion, in Node's case, is only displayed if the test fails. With our test runner, it reads like plain JavaScript.