A deep dive into how we built support for running Vitest tests in the Cloudflare Workers runtime. We'll start by giving an overview of the Cloudflare developer platform, including our open-source workerd JavaScript runtime and local simulator Miniflare. Next we'll talk about how Vitest works and provides support for custom runtimes, using Node.js as a driver to run tests in another environment. We'll describe the specifics of our custom Vitest pool, and how we added support for dynamic code evaluation to our runtime. Finally we'll talk about how we improved developer ergonomics with isolated per-test storage, test helpers for accessing Durable Object instances directly, and support for declarative HTTP request mocking.
Testing Alternative Runtimes with Node and Vitest
This talk has been presented at Node Congress 2024, check out the latest edition of this JavaScript Conference.
FAQ
Cloudflare Workers are a platform where you can write HTTP handling code, publish it, and get a URL to run it. The runtime provides standard APIs similar to a web browser, and the code is deployed to all Cloudflare locations for low latency access with practically no cold start time.
Brendan is a systems engineer at Cloudflare focusing on workers' dev tooling. He created Miniflare, a fully local simulator for Cloudflare Workers.
Miniflare is a Node.js library that provides a JavaScript API on top of Cloudflare's custom V8-based runtime called workerd. It also offers local simulators for the rest of the Cloudflare developer platform.
You can test Cloudflare Workers using integration tests that start a local instance of the runtime, send HTTP requests, and assert on the responses. Alternatively, you can write unit tests that import functions directly and assert on their return values.
Vitest is a popular testing framework in the JavaScript community known for fast reruns with hot module reloading, snapshots, and mocks. It uses a Node.js host to orchestrate tests and report results, and a pool that spawns threads or processes to run tests.
Cloudflare uses a local-only unsafe eval binding to allow dynamic code evaluation during tests while retaining security guarantees. For module loading, they added a module fallback service that handles unresolved imports by making HTTP requests to fetch the necessary modules.
Durable objects in Cloudflare Workers are instances of JavaScript classes distributed around the world, each with a unique ID and persistent storage. They are useful for scenarios like collaborative document editing, where each user can interact with their own instance of the object.
Cloudflare uses a stack-based approach for isolated storage in tests. Writes to storage in a test are undone at the end of the test to ensure other tests are not affected. This is managed automatically by the testing framework.
Yes, Vitest supports running tests in parallel by using a Node worker thread pool. For Cloudflare Workers, tests can run in parallel across different projects or worker threads, but isolated storage may limit parallelism within a single project.
The Cloudflare test module provides helpers for creating instances of special classes, listing durable objects, running durable object alarms, and applying migrations to D1 databases. These tools help in writing high-quality tests for Cloudflare Workers.
1. Introduction to Cloudflare Workers and Testing#
Welcome to my talk on testing alternative runtimes with Node and Vitest. I'm Brendan, a systems engineer at Cloudflare. Cloudflare Workers are a runtime for deploying HTTP handling code. We use a custom V8-based runtime called workerd and a Node library called Miniflare. Let's talk about tests. We have a new system that supports both integration and unit tests.
Hello, everyone. Welcome to my talk on testing alternative runtimes with Node and Vitest. This is going to be a deep dive into how we built the Cloudflare Workers Vitest integration, but the techniques will be applicable to other runtimes, too.
I'm Brendan. I created Miniflare, a fully local simulator for Cloudflare Workers. I'm now a systems engineer at Cloudflare, specifically focusing on Workers dev tooling. I've mentioned Cloudflare Workers a few times already, but what are they? You write some HTTP handling code, publish it to our platform, then we give you a URL to run it. Our runtime provides standard APIs similar to what you'd find in a web browser. We deploy your code to all Cloudflare locations, so your users get low latency access wherever they are. Importantly, there's practically no cold start time. Our runtime is based on V8 isolates, not containers or virtual machines.
If you're interested in how we do this, there's a great talk from the architect of Cloudflare Workers, Kenton, called Fine-Grained Sandboxing with V8 Isolates. It should be one of the first results if you Google it. In addition to standard web APIs, we also provide runtime APIs for accessing the rest of the Cloudflare developer platform, for example, accessing key-value storage. Here, storage is a binding to a Worker's KV namespace. We can get, put, delete, and list values as you'd expect. And there are other types of bindings too. So, things like blob stores, SQLite databases, and other Workers as well. We use a custom V8-based runtime called workerd to run your code. Then we built this thing called Miniflare, which is a Node library providing a JavaScript API on top of workerd, and it also provides local simulators for the rest of the developer platform.
Now that we've explained the basics of Workers, let's talk about tests. Say we have this worker that adds two numbers together and we'd like to write tests to verify its behavior is correct. There are a couple of ways we could do this. We could either write an integration test that starts a local instance of our runtime, sends an HTTP request to it and asserts on the response, or we could write a unit test that imports functions from a worker directly and asserts on their return values. To support unit tests, we need to access Workers runtime APIs inside the test runner so the function runs with the correct global scope. We provide a few different ways of doing this today, but they all have their limitations. This talk is going to explain how we built a new system that supports both of these types of tests. Before we move on, let's review the key points we've just covered. We have a custom V8-based runtime that implements mostly web-standard APIs like browsers, so things like fetch, Request, Response, Web Crypto. We also implement some nonstandard APIs, specifically for server-side use cases, like connecting to databases, for instance.
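To make the two styles concrete, here's a sketch of the kind of "add" worker described above, with a unit-style check and an integration-style check side by side. The names and URL shape are illustrative, not the talk's actual code:

```javascript
// Hypothetical worker that adds two numbers, as described above.
function add(a, b) {
  return a + b;
}

const worker = {
  async fetch(request) {
    const { searchParams } = new URL(request.url);
    const a = Number(searchParams.get("a"));
    const b = Number(searchParams.get("b"));
    return new Response(String(add(a, b)));
  },
};

// Unit-test style: import the function and assert on its return value.
console.assert(add(1, 2) === 3);

// Integration-test style: send a Request and assert on the Response.
const response = await worker.fetch(new Request("https://example.com/?a=1&b=2"));
console.assert((await response.text()) === "3");
```

The unit style needs Workers runtime APIs available inside the test runner; the integration style only needs to be able to send requests to a running instance.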
2. VTest Overview and Implementation#
Today, we'll cover how Vitest works, dynamic code evaluation in the runtime, running Vitest in the Workers runtime, improved developer experience, and declarative request mocking. Vitest is a popular testing framework with fast reruns, and it uses a Node worker thread pool by default. To run tests inside Workers, we run the test runner inside workerd and use WebSockets for communication. Vitest provides an API for implementing custom pools.
With the introduction out of the way, let me give a summary of what I'm going to be talking about today. To start, we'll cover how Vitest works and how we can change where it runs tests. Then we'll look at how we added support for dynamic code evaluation to the runtime. Next, we'll look at how we put those primitives together to run Vitest in the Workers runtime. After that, we'll look at ways we've improved the developer experience, starting with isolated per-test storage. And then we'll take a quick look at helpers for calling Durable Object methods directly, and finally, we'll finish off by focusing on declarative request mocking.
So, let's begin with an introduction to Vitest. Vitest is a very popular testing framework within the JavaScript community, providing very fast reruns with hot module reloading, snapshots, and mocks. There's a host that runs in Node and provides the CLI, sequences tests, and reports output, and a pool that spawns threads or processes to run tests. By default, Vitest uses a Node worker thread pool to provide isolation and parallelism for test runs. But because the worker thread runs in Node, your tests have access to built-in Node modules and globals and all that stuff. To run your code and other dependencies, Vitest uses a package called vite-node, and this transforms your code with Vite and dynamically executes it in a worker thread. If we want to run tests inside Workers, we need access to the Workers runtime APIs defined in our workerd runtime. There's a slight problem here, though, in that we can't really reference JavaScript classes across a process boundary. Instead, our approach is to run the entire test runner inside workerd and use WebSockets for communicating between the Node host and the workerd process. How do we actually do this? Fortunately, Vitest provides an API for implementing custom pools: a pool exports a function returning an object with a runTests function. The runTests function accepts an array of [project, file] spec tuples to run, and any files that should be invalidated because they've changed. To demonstrate what this would look like, let's build a simplified worker threads pool.
3. VTest Worker Thread and Test Runner#
We create an RPC channel and use a message channel for communication. Data is assembled and passed to the new worker thread. In the worker, Vitest's worker script is imported and the run method is called. Promises are waited for and resolved to report any errors. Vitest uses a Node host to orchestrate tests and report results, and the test runner code, designed for Node, will run in a Workers environment instead. Vitest also uses dynamic code evaluation inside the test runner.
So for each spec tuple, we create an RPC channel, and we use a message channel for communication here, but we can switch this out for a WebSocket. The createMethodsRPC function is provided by Vitest and returns an object with functions for fetching modules from the Node server and reporting test results.
Next, we assemble data to pass to the new worker thread. We pass the test file that Vitest should run in this worker and the project configuration. We also pass the worker side of the message channel. Now we can actually start a worker. In reality, you'd use a worker pool implementation here, like Piscina or Tinypool. But we're just using plain workers from the worker_threads built-in module. We want to keep this simple.
In the worker, we import Vitest's worker script and call the run method on it. We could start a process here instead of a new worker thread, and in our case, we'll actually be starting a workerd process. But you could start any other runtime here as well, or connect to a different machine, or do something completely different. One of Vitest's built-in pools actually does type checking. You don't actually have to run the test files as JavaScript.
Finally, let's wait for a result message from the worker and store these promises in an array. Then we can wait for all of the promises to resolve and report any errors as an AggregateError. And with that, we have a basic worker threads pool. We can use it by specifying the pool property in our Vitest configuration. Note that we can also pass options to the pool with the poolOptions property. Running npx vitest will use our pool and run our tests successfully.
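The shape of that simplified pool can be sketched with plain worker_threads. This is a stand-in for the real Vitest pool API: the worker body below just echoes a result back over the transferred port, where the real pool would import Vitest's worker script and call run:

```javascript
import { Worker, MessageChannel } from "node:worker_threads";

// Minimal sketch: spawn a worker per test file, hand it one side of a
// MessageChannel, and wait for a result message. (Illustrative only.)
function runInWorker(testFile) {
  const { port1, port2 } = new MessageChannel();
  const worker = new Worker(
    `const { workerData } = require("node:worker_threads");
     // The real pool imports Vitest's worker script and calls run() here;
     // this toy worker just reports success over the transferred port.
     workerData.port.postMessage({ ok: true, file: workerData.file });`,
    { eval: true, workerData: { file: testFile, port: port2 }, transferList: [port2] }
  );
  return new Promise((resolve, reject) => {
    port1.on("message", (msg) => {
      resolve(msg);
      port1.close();
      worker.terminate();
    });
    worker.on("error", reject);
  });
}
```

A real pool would also aggregate one promise per spec and surface failures as an AggregateError, as described above.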
So, let's overview what we've covered so far. The key points are: Vitest uses a Node host to orchestrate tests and report results. Pools decide what to do with test files. In this example, we've run test files in a Node worker thread, but for the Workers integration, we'll want to run them in workerd processes instead. The test runner code that Vitest uses is designed to run in Node, but we'll be running it in a Workers environment instead. Cool. So, I mentioned that Vitest uses dynamic code evaluation inside the test runner. Let's explore that in more detail.
4. Custom Pool and Dynamic Code Execution#
The messages between the pool and the worker include the RPC methods fetch and resolveId. Fetch bundles code at a specified path using Vite for dynamic execution in the worker. ResolveId returns the file path to a module for dynamic import. We need to execute JavaScript code from a string and import modules from disk.
If we go back to our previous custom pool, let's look at the messages being sent between the pool and the worker. We'll see there are a few different RPC methods. The two most relevant for us are fetch, which transforms a path with Vite in the host. So, it essentially bundles all of the code at that path using Vite, then returns the result to be dynamically executed in the worker. Note these imports have been replaced with special calls to Vite functions. This is how Vite implements hot module replacement boundaries and module mocking.
The other relevant method is resolveId, which takes a specifier and a referrer, then returns the file path to a module. If this is in the node_modules directory, Vitest will try to dynamically import it. With this in mind, we have two requirements: we need to be able to execute arbitrary JavaScript code from a string, and we need to be able to import arbitrary modules from disk.
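The mechanism behind that fetch method can be sketched in miniature. The helper name below mimics Vite's rewritten import calls, but the whole thing is a toy: it only shows how transformed code arrives as a string and is evaluated with its imports injected, which is exactly the dynamic-code-evaluation capability the Workers runtime needs to provide:

```javascript
// Toy module registry standing in for modules already fetched from the host.
const modules = { "/math.js": { add: (a, b) => a + b } };

// Stand-in for Vite's injected import helper.
const __vite_ssr_import__ = (id) => modules[id];

// What a transformed test module might look like after Vite rewrites its
// imports into helper calls (illustrative shape, not real Vite output):
const transformed = `
  const { add } = __vite_ssr_import__("/math.js");
  exportsObj.result = add(2, 3);
`;

// Dynamically execute the string with its dependencies injected.
const exportsObj = {};
new Function("__vite_ssr_import__", "exportsObj", transformed)(
  __vite_ssr_import__,
  exportsObj
);
```

Node can do this trivially; the next section covers why workerd cannot, and what was added to make it possible.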
5. Dynamic Code Evaluation and Module Fallback#
For security reasons, we added a local-only unsafe eval binding to enable controlled dynamic code evaluation. This capability-based security allows safe execution of code relying on eval. Although this approach works for JavaScript code, it cannot evaluate ES modules directly. To handle module loading, we introduced a module fallback service that resolves imports by making HTTP requests. This service provides a fallback mechanism for unresolved imports and supports Node-like resolution with the ability to detect named exports.
For security reasons, the workers runtime explicitly disables dynamic code evaluation in production, so we need to figure out a way to do this while retaining these security guarantees. To solve this problem, we added a new type of local only unsafe eval binding. This allows you to call indirect eval or the new function constructor or a couple of other things as well. By adding this as a binding, we can control which code has access to eval. This is very important for tests, as we don't want code that relies on eval to pass tests locally but fail when deployed.
We can see in this example, we're passing the unsafe eval binding to the evaluate function, but not the double function. So, only evaluate can call eval. This is a very basic form of capability-based security, if you're familiar with that. This works great for JavaScript code, but unfortunately, you can't eval ES modules, so we couldn't use this to implement module loading from disk, unless we built our own module loader in JavaScript, but we'd rather reuse workerd's logic for this. Usually, workerd requires all modules to be specified up front, which makes sense if you're operating a functions-as-a-service platform, where you'll be uploading code to the platform.
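The capability idea can be illustrated with plain JavaScript. This is a hedged sketch: the real binding is provided by workerd and only exists locally, whereas here the "capability" is just an object wrapping indirect eval. The function names are invented for illustration:

```javascript
// Stand-in for workerd's local-only unsafe eval binding: an object that
// holds the only reference to an eval capability.
function makeUnsafeEval() {
  return { eval: (code) => (0, eval)(code) }; // indirect eval
}

// This function is handed the capability, so it may evaluate strings.
function evaluate(unsafeEval, expr) {
  return unsafeEval.eval(expr);
}

// This function is not handed the capability, so it cannot eval anything.
function double(x) {
  return x * 2;
}
```

Only code explicitly passed the binding can evaluate strings, which is why tests that rely on eval will fail locally just as they would in production, unless they were deliberately given the capability.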
Whenever a module imports another module, we resolve the specifier relative to the referrer URL, and look for that in the registry. In this case, the specifier is math/add, and the referrer is /index. We find the module in the registry, make sure it's initialized, and return it. However, if we add an import to a module that doesn't exist, this will fail at runtime. For the Vitest worker, we don't know what modules we'll be importing until we start running tests. We need some way to define a fallback if the module isn't found in the registry. And that's exactly where the module fallback service comes in. This is a new thing we added: if enabled, unresolved imports become HTTP requests to the service, including the specifier and referrer. We also include whether this was an import or a require, as these have different semantics. The service can then respond with a module definition, or a redirect to a different specifier if the resolved location is different from the target. Requires are synchronous, so these will block the main thread until the service responds. This is sort of like how modules work in the browser, but our implementation instead implements Node-like resolution, with support for automatically detecting named exports in CommonJS modules, like Node. And with that, we have everything we need for dynamic code evaluation. We can run arbitrary code with unsafe eval bindings, and we can import arbitrary modules with the module fallback service. These features enable many other use cases for us. We're effectively building a custom Vite runtime here, which could be used in the future for meta-framework dev servers, allowing hot module reloading for your server code while still having access to all of the Cloudflare bindings. Now that we can run and import arbitrary code, the next step is to get Vitest's test runner working. Say we have this worker that does some background work, then logs to the console. Except, if you run it, it doesn't.
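The request/response shape of a fallback service can be sketched as a plain handler function. The parameter names and response shapes here are illustrative, not workerd's actual wire protocol: the point is that the service either redirects to the resolved specifier or returns a module definition, and a real resolver would resolve the specifier relative to the referrer:

```javascript
// Hedged sketch of a module fallback handler. workerd is described as
// sending the specifier and referrer with each unresolved import; the
// service replies with a redirect or a module definition.
function handleFallback(url) {
  const u = new URL(url);
  const specifier = u.searchParams.get("specifier");
  const referrer = u.searchParams.get("referrer"); // used by a real resolver

  // Hypothetical Node-like resolution of a bare specifier: redirect to the
  // resolved on-disk location so future imports use the canonical path.
  if (specifier === "math") {
    return { status: 301, headers: { Location: "/node_modules/math/index.js" } };
  }
  if (specifier === "/node_modules/math/index.js") {
    // Respond with a module definition (illustrative shape).
    return { status: 200, body: "export const add = (a, b) => a + b;" };
  }
  return { status: 404, referrer };
}
```

Because requires are synchronous, a real implementation must be careful that these round trips block the workerd main thread until the service responds.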
6. Request Context and Durable Objects#
Every incoming request in Workers has its own request context, which is disposed when a response is returned. To fix this, we need to extend the request context's lifetime for background work. Fortunately, durable objects provide distributed JavaScript class instances with unique IDs and persistent storage. Instance properties can be used as volatile in-memory storage, and the model is similar to actors. Durable objects allow for the construction of collaborative applications with instances located closest to each user. The future goal is to enable durable objects to move across data centers.
If you send a request to it, you never see the console log. This is because every incoming request in Workers has its own request context. Asynchronous IO, like fetch requests or timeouts, needs to be performed within a request context. And when a response is returned, the request context is disposed and all pending async IO is cancelled. In this case, the setTimeout is automatically cleared. This limitation allows us to improve the overall performance of Workers.
To fix this, we need to extend the lifetime of the request context until the background work is complete. We can do that with waitUntil, but this is quite problematic for tests, which have lots of async operations; we don't want to have to add waitUntil calls to all of them.
Fortunately, we also have these things called durable objects. Durable objects are essentially instances of a JavaScript class distributed around the world. Each instance has a unique ID and its own persistent storage. Instance properties can be used as volatile in-memory storage. If you're familiar with actors, it's kind of like those.
As an example, you might define a durable object class for a user in your application, and each user would have their own instance of the object located closest to them. Let's walk through an example. Say we're building a collaborative document editor. We'll define a durable object class for a document. This is a regular JavaScript class with a special fetch method for handling requests. Say I want to create a document: I use the newUniqueId function, and because I know this ID is unique, I know that no other object exists in the world with that ID. So I construct an instance of the class in the data center closest to me.
Now let's say my colleague in the US wants to create a document. Instead of using newUniqueId, they call the idFromName function. Because we don't know whether an object with that ID already exists, the system first has to do a global lookup. Once we're confident the object doesn't already exist, we can create a new instance, again in the closest data center to my colleague. And finally, let's say I have another colleague in Australia. They'd like to access my document, so I share the ID with them. They use the idFromString method to get a durable object ID and connect to my existing instance. Because they're on the other side of the world, their latency is quite high. We haven't implemented this yet, but in the future, we're hoping to allow durable objects to move.
7. Integration Tests and Durable Object Context#
Durable objects reuse the same request context, allowing tests to run without worrying about waitUntil. The Cloudflare integration uses a test module that returns the current environment. To make test behavior match production, users can configure a main entry point whose handlers are wrapped and imported with Vite, ensuring modules are invalidated and tests rerun with new code.
This would look something like this: in this case, it might move to India to be closer to my colleague. Let's look at what the code might look like for a durable object.
Fitting a collaborative editor into a slide is tricky, so let's stick with a simple request counter. Whenever a request comes in, we hash the URL into a durable object ID, get a stub to that object, and forward the request to it. The counter instance will increment and return its new count.
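A sketch of that counter class might look like the following. The real class runs inside workerd and receives a DurableObjectState from the runtime; here the state object is a hand-rolled stand-in so the sketch is self-contained:

```javascript
// Sketch of the request-counter Durable Object described above.
class Counter {
  constructor(state) {
    this.state = state;
  }
  async fetch(request) {
    // Read the persisted count, increment it, and write it back.
    let count = (await this.state.storage.get("count")) ?? 0;
    count += 1;
    await this.state.storage.put("count", count);
    return new Response(String(count));
  }
}

// Minimal stand-in for DurableObjectState storage (illustration only).
function makeState() {
  const map = new Map();
  return {
    storage: {
      get: async (key) => map.get(key),
      put: async (key, value) => map.set(key, value),
    },
  };
}
```

In production, the runtime guarantees a single instance per ID, so all requests for the same URL hash land on the same counter.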
The important thing for us is that durable objects reuse the same request context for all requests and auto-extend their lifetime. By executing the test run inside a single durable object, we don't need to worry about calling waitUntil. This is great for long-running requests like WebSocket connections. What we want is a durable object that terminates a WebSocket connection, imports the Vitest worker script, and uses WebSocket messages for RPC.
This is a durable object that does that. We'll start by creating a WebSocketPair, which is a nonstandard Workers API, kind of like MessageChannel, but for WebSockets. You keep one half and return the other half in the response. Then we extract the data for this worker from a header, using a more advanced version of JSON.parse that supports structured types with circular references. We use a custom WebSocket message port class to adapt a WebSocket to a MessagePort-like interface, again with support for structured-serializable types.
We're still trying to run code designed for Node here, but luckily we have this Node.js compatibility flag, which enables support for a subset of Node's built-in modules, and then by polyfilling a few more built-in modules with the module fallback service, we're able to get Vitest running in workerd. This enables us to run tests that import basic functions, call them, and assert on their return values, which is a great start, but what about those env and context parameters from earlier? How do we write tests that depend on them?
For this, we'll define a Cloudflare test module that gets returned by the module fallback service. We use a module-level variable for storing the current environment and export a function for setting this. Our Cloudflare test module reexports a subset of an internal module so users can't mess around with hidden things. The env variable is set using the second parameter to our runner object's constructor, and with that, we kind of have integration tests. We're able to import worker handlers and call them directly.
Remember what I said about the request context earlier? The test runner is running inside a durable object context, so we don't have to worry about waitUntil. Because we're calling worker handlers inside that context, they also don't have to worry about waitUntil, which isn't really what we want from an integration test. We'd like the behavior to match production. Instead, we allow users to configure a main entry point. In the worker implementing the runner object, we define a handler that imports the entry point with Vite and forwards the call on to the handler.
Because we're importing with Vite, we get hot module reloading for free. When main or one of its dependencies change, modules are invalidated and tests rerun with new code. We then define a worker binding to the current worker in the test module, named SELF.
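Putting the pieces together, a user's configuration might look something like this. The option names follow the published @cloudflare/vitest-pool-workers package; treat the exact shape as illustrative rather than authoritative:

```javascript
// vitest.config.js — illustrative sketch of configuring the Workers pool.
import { defineWorkersConfig } from "@cloudflare/vitest-pool-workers/config";

export default defineWorkersConfig({
  test: {
    poolOptions: {
      workers: {
        // Entry point wrapped by the pool, so handlers run with a real
        // request context and get hot module reloading.
        main: "./src/index.ts",
        wrangler: { configPath: "./wrangler.toml" },
      },
    },
  },
});
```

With main configured, tests can import SELF from the test module and send requests to the wrapped worker, as described next.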
8. Durable Objects and Developer Experience#
Using this gives us a more ergonomic API for integration tests. Durable objects are powerful and provide everything needed to write unit and integration tests for workers. The upcoming sections will cover additional functionality to improve the developer experience.
Using this gives us a more ergonomic API for integration tests. You don't need to worry about passing env or ctx. The cool thing about this is that the worker is still running in the same global scope as the tests. If we were to import the main module in this file, it would be exactly the same instance as used by the wrapped handler.
To summarize, durable objects are insanely powerful. We've barely scratched the surface of what's possible here. We define a Cloudflare test module with helpers for accessing bindings, creating execution contexts, and performing integration tests with SELF. So, we basically have everything we need to write unit and integration tests for Workers now. The next few sections are about how we added some additional functionality to improve the developer experience.
9. Isolated Per Test Storage and Seeding Data#
Tests should be self-contained and runnable in any order. Writes to storage need to be undone at the end of each test. It can be tricky to manually keep track of all writes and undo them. With the testing framework, writes to storage are automatically undone. Seeding data is supported using a stack. Each test gets its own storage environment that is automatically undone for the next test.
Let's start with isolated per test storage. Most workers applications will have at least one binding to a Cloudflare storage service. Ideally, tests should be self-contained and runnable in any order or on their own. To make this possible, writes to storage need to be undone at the end of each test, so reads by other tests aren't affected.
While it's possible to do this manually, it can be tricky to keep track of all writes and undo them in the correct order. Take the following two functions that get and push to the provided list. Ideally, we'd write tests like this, using the same key in both tests. Note that the push of two to list A has been undone before the second test. When we get list A in the second test, we don't see two in there.
We can implement this with some additional code before our tests that stores the value of each list before the test, then restores it afterwards. This is quite a lot of work, though. It's slightly easier with the recently introduced onTestFinished hook in Vitest, but we still need to remember which keys were written to, or enumerate them at the start and end of the tests. You'd also need to manage this for KV, R2, caches, and any other storage service you use. This is all error-prone. Ideally, the testing framework should manage this for you: any writes to storage performed in a test are automatically undone at the end of the test.
To support seeding data, including in nested describe blocks, we use a stack. Whenever we enter a test or describe block, we push a new stack frame. All operations write to the topmost stack frame. To demonstrate this, let's simplify the helpers from earlier to always read and write from the same list. We start with a single root stack frame. All top-level beforeAll hooks write to this frame. When we start a test, we push a new frame onto the stack. This means each test gets its own storage environment copied from the parent. All write operations go to the topmost frame. In this case, the beforeEach writes to the test's frame, as do any operations in the test itself. When a test finishes, its frame is popped and we create a new frame for the next test. This means that all changes performed in that test are automatically undone for the next test. Test B will write to its frame, and that will then get popped when the test finishes.
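The stack idea above can be sketched in a few lines. This toy version copies the whole parent frame on push; the real implementation works with on-disk SQLite files rather than in-memory maps, but the push/pop semantics are the same:

```javascript
// Minimal sketch of stack-based isolated storage: each frame starts as a
// copy of its parent, writes go to the top frame, and popping undoes them.
class StackedStorage {
  constructor() {
    this.stack = [new Map()]; // root frame
  }
  push() {
    // New frame copied from the parent, so reads see seeded data.
    this.stack.push(new Map(this.stack.at(-1)));
  }
  pop() {
    // Discard the top frame, undoing every write made in it.
    this.stack.pop();
  }
  get(key) {
    return this.stack.at(-1).get(key);
  }
  put(key, value) {
    this.stack.at(-1).set(key, value);
  }
}
```

A test runner would call push when entering a describe block or test, and pop when leaving it, exactly as the walkthrough above describes.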
10. Testing Describe Blocks and Durable Objects#
For describe blocks, the same process applies. We implement isolated storage using an on-disk stack of SQLite files. To trigger pushing and popping, we use a custom Vitest runner. We wrap durable object classes in a similar way to regular handlers and add a special request handler to run arbitrary functions in the durable object's context.
For describe blocks, the same process applies. When we enter the describe block, we push a new frame to the stack. All beforeAll hooks in the describe block will write to this frame, and note also that all beforeAll hooks run before beforeEach hooks. When we start a new test, we push a new frame to the stack, then run top-level beforeEach hooks, then the describe-level beforeEach hooks, and then the test itself. Note all of these operations write to the test's stack frame, so when the test completes and the frame is popped, all writes are undone. The next test will do exactly the same thing, and when that test finishes, we'll pop the top frame. Then, when the describe block finishes, we'll pop the next frame, until we're left with the root frame again.
Miniflare implements simulators for storage services on top of durable objects, with a separate blob store. To implement isolated storage, we implement an on-disk stack of SQLite files. Blobs are kept in a separate store and retained through stack operations; these get cleaned up at the end of test runs. While this works, it involves copying a lot of SQLite files, so we'd like to explore using SQLite SAVEPOINTs for a more efficient solution. To trigger pushing and popping, we use a custom Vitest runner, which allows you to hook into different Vitest lifecycle hooks. We push the stack before running a describe block or starting a test attempt, and pop the stack after finishing a test attempt or describe block.
I mentioned durable objects a few times now. Let's think about how we would actually test them. We want to be able to unit test durable objects like regular JavaScript classes. This means calling and spying on methods and accessing properties. It would also be useful to have direct access to storage for seeding data and asserting on side effects. Unfortunately, we can't just construct instances of the class. This would break the uniqueness guarantees of durable objects. Durable objects also have fairly complicated implicit locking behavior, which would need to be reproduced, and calling a durable object's methods requires being in its request context. To solve all of these problems, we wrap durable object classes in a similar way to regular handlers, like we did for integration tests with SELF. Then we add a special request handler to run arbitrary functions in the durable object's context. Let's see what this looks like. In the Cloudflare test module, we define a runInDurableObject function that accepts a durable object stub and a function to run.
11. Durable Object Request Handling#
The callback expects the durable object instance and its persisted state as arguments. It's stored in a module-level map, and a request carrying its ID is sent to the durable object, whose request handler checks whether that ID was set.
This function expects the instance and persisted state as arguments. We store the function in a module-level map and send a request to the durable object with the ID we stored it as. In the durable object request handler, we check whether the ID was set. If it wasn't, we forward the request on. If it was, we extract the function from the module-level map and run it in the durable object's context. Note we're using the same map as before and relying on the user's durable object running in the same isolate as the test runner. We store the result of the function in the map, and then back in the main function, we read and return this back to the user.
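That round trip can be sketched end to end. This is a simulation with invented names (the header, the map, the wrapper class): the real version crosses from the test runner into workerd via a stub, but the same-isolate map trick is the load-bearing part:

```javascript
// Module-level map shared between the helper and the wrapped class, relying
// on both running in the same isolate (as described above).
const pending = new Map();
let nextId = 0;

// Sketch of a runInDurableObject-style helper: stash the callback, send a
// request carrying its ID, then read the result back out of the map.
async function runInDurableObject(stub, fn) {
  const id = String(nextId++);
  pending.set(id, fn);
  await stub.fetch(new Request("http://do/", { headers: { "X-Run-Id": id } }));
  const result = pending.get(id);
  pending.delete(id);
  return result;
}

// Sketch of the wrapped durable object class with the special handler.
class TestableObject {
  constructor(state) {
    this.state = state;
    this.count = 0;
  }
  async fetch(request) {
    const id = request.headers.get("X-Run-Id");
    if (id === null) return new Response("normal"); // forward as usual
    const fn = pending.get(id);
    // Run the callback in this object's context; store the result back.
    pending.set(id, await fn(this, this.state));
    return new Response(null, { status: 204 });
  }
}
```

Because the callback runs inside the object's fetch handler, it executes in the durable object's request context, with direct access to the instance and its storage.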
12. Handling Outbound Fetch Requests#
This addresses all of our requirements from earlier: we can call methods and properties on instances directly, we have direct access to storage for seeding data, and the callback function is executed in the durable object's request context. The last piece is mocking outbound fetch requests, so you don't have to test against production or spin up additional testing servers.
This addresses all of our requirements from earlier. We can now call methods and properties on instances directly. We have direct access to storage for seeding data. Importantly, the callback function is executed in the durable object's request context.
Finally, let's quickly cover how we handle mocking outbound fetch requests. Most workers make outbound fetch requests, and it's useful to mock the responses so you don't have to test against production or spin up additional testing servers. In Miniflare, we allow you to specify an undici MockAgent to route requests through. The MockAgent class provides a declarative interface for specifying which requests to mock and the corresponding responses to return. This API is relatively simple, whilst being flexible enough for advanced use cases.
To implement this in our Vitest integration, we bundled a stripped-down version of undici containing just the MockAgent code. In our special bundle, we expose hooks for setting the global dispatcher. We then monkey-patch the global fetch function to pass through the MockAgent if it's enabled. Let's look at some examples of how this works.
When the MockAgent is disabled, we bypass it completely and forward requests to the regular fetch function. If the MockAgent is enabled and the request matches one of its interceptors, it returns a mocked response. If the request doesn't match an interceptor, it passes through to the original fetch function, unless we've called the disableNetConnect() function on the MockAgent, which prevents network calls for unmocked requests. In that case, the user sees an error.
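As a rough illustration of this decision logic, the routing behaves something like the sketch below. This is a simplification I've written for clarity, not undici's actual implementation; the types and the exact-match rule are stand-ins.

```typescript
// Illustrative routing logic for the patched fetch. Simplified: real
// undici interceptors also match on headers, bodies, query strings, etc.

interface Interceptor {
  origin: string;
  path: string;
  method: string;
  response: { status: number; body: string };
}

interface MockAgentState {
  enabled: boolean;
  netConnectAllowed: boolean; // false after disableNetConnect()
  interceptors: Interceptor[];
}

type Outcome =
  | { kind: "mocked"; status: number; body: string }
  | { kind: "passthrough" } // falls through to the original fetch
  | { kind: "error"; message: string };

function route(
  agent: MockAgentState,
  req: { origin: string; path: string; method: string }
): Outcome {
  // Agent disabled: bypass it completely.
  if (!agent.enabled) return { kind: "passthrough" };

  // Matched an interceptor: return the mocked response.
  const match = agent.interceptors.find(
    (i) => i.origin === req.origin && i.path === req.path && i.method === req.method
  );
  if (match) return { kind: "mocked", ...match.response };

  // Unmatched: pass through, unless disableNetConnect() was called.
  if (!agent.netConnectAllowed) {
    return { kind: "error", message: `No mock found for ${req.method} ${req.origin}${req.path}` };
  }
  return { kind: "passthrough" };
}
```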
As I mentioned, we implement this with a custom bundled version of undici. We use a WeakMap to store the original request, so we can replay it through the original fetch function if the request isn't mocked. Similarly, we use another WeakMap to store a response from the original fetch function that we can return later. Then we monkey-patch the global fetch function. We check whether the agent is actually enabled, and pass through to the original fetch function if it isn't. Next we convert the standard Request object into an undici-friendly format, and store the original request in the WeakMap, keyed on the undici dispatch options. Then we define undici dispatch handlers that record any mocked response we receive. These handlers won't be called if the request isn't matched by the agent; in that case, we fall back to the original fetch function.
13. Handling Outbound Fetch Requests - Conclusion#
Next, we define the onComplete handler. This is called when the full mocked response has been returned, or when a response has been returned from the original fetch function. If the original function was called, we get the response from the WeakMap, again keyed on the dispatch options. Otherwise, we construct a standard Response object from the mocked response data. Finally, we dispatch the request through the MockAgent and return the response promise.
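The whole flow can be sketched in plain TypeScript. This is a simulation only: the dispatcher interface, the synchronous fetch, and the string bodies are stand-ins for undici's real streaming interfaces, and the names are hypothetical.

```typescript
// Toy model of the patched-fetch flow: WeakMaps keyed on the per-request
// dispatch options object, so entries are garbage-collected with the request.

interface DispatchOptions { origin: string; path: string; method: string }
interface SimpleResponse { status: number; body: string }

// Original fetch to replay if the request isn't mocked.
const originalRequests = new WeakMap<DispatchOptions, () => SimpleResponse>();
// Response from the original fetch, recorded so onComplete can return it.
const passthroughResponses = new WeakMap<DispatchOptions, SimpleResponse>();

interface FakeAgent {
  enabled: boolean;
  dispatch(opts: DispatchOptions): { mocked: SimpleResponse | null };
}

function patchedFetch(
  agent: FakeAgent,
  opts: DispatchOptions,
  originalFetch: () => SimpleResponse
): SimpleResponse {
  // Agent disabled: pass straight through to the original fetch.
  if (!agent.enabled) return originalFetch();

  // Store the original fetch keyed on the dispatch options so the
  // unmatched path can find and replay it.
  originalRequests.set(opts, originalFetch);

  const { mocked } = agent.dispatch(opts);

  // onComplete: prefer the mocked response; otherwise read back the
  // passthrough response recorded by the fallback path.
  return mocked ?? passthroughResponses.get(opts)!;
}

// A toy agent with one interceptor and a passthrough fallback.
const agent: FakeAgent = {
  enabled: true,
  dispatch(opts) {
    if (opts.path === "/mocked") return { mocked: { status: 200, body: "from mock" } };
    const original = originalRequests.get(opts)!;
    passthroughResponses.set(opts, original());
    return { mocked: null };
  },
};
```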
And with that, we're done. We've covered how Vitest works and how to customize its behavior with custom pools, how we added support for dynamic code evaluation to our runtime, how we used those primitives to run Vitest inside the Workers runtime, how we improved the developer experience with isolated storage and calling durable objects directly like JavaScript classes, and finally, declarative request mocking with undici. I'd encourage you to try out Cloudflare Workers if you haven't already. You can find a written overview of this talk on the Cloudflare blog, and all the code for the integration is in the Cloudflare Workers SDK GitHub repo.
QnA
Introduction and Cloudflare Workers Experience#
Brendan discusses the poll results (the majority of the audience hasn't used Cloudflare Workers before), his experience with Cloudflare Workers, and why he chose the platform, including its simplicity and integrated database products. A question from the audience asks about using SQLite savepoints to implement isolated storage, and Brendan explains how that would work.
Thank you all for listening. I'm super happy to have you here with us today, Brendan. First of all, let's take a look at the poll question. I have to remind myself here a little bit: you asked, have you used Cloudflare Workers before? Let's bring up the results. And this is interesting. The overwhelming majority, 73 percent, says no. Are you surprised by that, Brendan?

Yeah, I'm a little bit surprised, but also not. In the grand scheme of all of these platforms, it's relatively new. But I think people should definitely try it out. There's a lot of cool stuff you can do with it, and a lot of stuff we recently launched during Developer Week at Cloudflare, so we've got lots of new announcements and things people should definitely try.

We talked a little bit backstage, and you mentioned that you happened to stumble upon Cloudflare Workers a few years ago: you tried it out, you didn't really like the development experience, and so you created Miniflare as an answer. Back then, why did you choose Cloudflare Workers as opposed to, say, AWS Lambda or Azure Functions?

Yeah, I really believed in the platform back then. Even ignoring the developer experience, it was so simple to get started: just write code, upload it to Cloudflare, and get an HTTP endpoint you can hit. And it had all of these integrated database products as well, which meant you didn't really have to think about much when writing your code, which is fantastic. They had this tool, wrangler dev, which worked for development, but what I really wanted was something that ran fully locally, and that's why I made Miniflare.

Yeah, interesting. I feel like Cloudflare could do a better job of advertising to the world how easy it is to work with them.
Let's go over to the questions and check out what audience members have submitted. The first question I see here is: you mentioned using SQLite savepoints for implementing isolated storage. How would that work?

Yeah, so right now, as I mentioned in the talk, we implement isolated storage with an on-disk stack of SQLite databases. SQLite has this savepoints feature built in, which lets you implement nested transactions: you can create a savepoint and then roll the database back to that point later on. So for isolated storage, we'd create a savepoint whenever we push onto the stack, and roll back to it whenever we pop off the stack. This would mean we wouldn't have to copy any SQLite files or do any stack operations on disk; it would all be handled by SQLite, which should be a lot faster and more efficient.
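To illustrate the semantics being described (not SQLite itself), here's a toy key-value store where push() and pop() mimic SAVEPOINT and ROLLBACK TO by recording an undo log per frame. The class and method names are invented for this sketch.

```typescript
// Toy model of savepoint-style isolated storage: a key-value "database"
// with a stack of undo logs standing in for SQLite SAVEPOINT / ROLLBACK TO.

class SavepointStore {
  private data = new Map<string, string>();
  // Each frame records the prior value of every key written since the
  // savepoint was created (undefined = the key did not exist).
  private frames: Map<string, string | undefined>[] = [];

  set(key: string, value: string): void {
    const frame = this.frames[this.frames.length - 1];
    // Only record the first prior value per key per frame.
    if (frame && !frame.has(key)) frame.set(key, this.data.get(key));
    this.data.set(key, value);
  }

  get(key: string): string | undefined {
    return this.data.get(key);
  }

  // push() is analogous to: SAVEPOINT sp_n
  push(): void {
    this.frames.push(new Map());
  }

  // pop() is analogous to: ROLLBACK TO sp_n; RELEASE sp_n
  // Undoes every write made since the matching push().
  pop(): void {
    const frame = this.frames.pop();
    if (!frame) return;
    for (const [key, prior] of frame) {
      if (prior === undefined) this.data.delete(key);
      else this.data.set(key, prior);
    }
  }
}
```

The per-test lifecycle then becomes: seed data, push() before each test, let the test write freely, and pop() afterwards to restore the seeded state, with no file copying involved.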
SQLite, Parallel Testing, Helpers, and Keynote#
There are amazing online talks by the creator of SQLite, a highly regarded technology used in aviation computers worldwide. Brendan also addresses parallel testing in Vitest, the helpers available in the cloudflare:test module, and the use of Keynote for creating engaging presentations.
Awesome, thanks so much. And for anyone who has neglected to look into SQLite: there are amazing talks online from the creator of SQLite. Let's just say it's a product with test coverage so high that it's actually running in aviation computers and cockpits worldwide today, which is just astonishing. A really amazing piece of technology; go check it out.
The next question is: are you planning to support running fully parallel Vitest tests with the .concurrent syntax? Yeah, so we do support some concurrency at the moment. If you have a Vitest workspace with lots of different projects, all of those projects will run in parallel. If you have a single project and you enable the multi-worker option, each test file will run in its own worker, and those will run in parallel as well. The only limitation is that if you've got isolated storage enabled, we can't run multiple test files in the same process, just due to how the isolated storage works and how storage is shared between all the running workers.
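As a hedged sketch, a Workers-pool Vitest config exposing these knobs might look like the following. The option names follow @cloudflare/vitest-pool-workers as I understand it, but check the current documentation before relying on them.

```typescript
// vitest.config.ts — illustrative Workers-pool configuration.
// Option names assumed from @cloudflare/vitest-pool-workers; verify
// against the current docs before copying.
import { defineWorkersConfig } from "@cloudflare/vitest-pool-workers/config";

export default defineWorkersConfig({
  test: {
    poolOptions: {
      workers: {
        // Run all test files in one Worker, serially. With isolated
        // storage enabled (below), files can't run fully in parallel,
        // since storage is shared between running workers.
        singleWorker: true,
        // Snapshot storage before each test and roll it back afterwards
        // (the stacked-SQLite scheme described in the talk).
        isolatedStorage: true,
        // Reuse the project's existing Wrangler configuration.
        wrangler: { configPath: "./wrangler.toml" },
      },
    },
  },
});
```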
The next question is: do you provide any other helpers in the cloudflare:test module? Yeah, so as I mentioned in the talk, we support both unit and integration testing with this integration. We provide a bunch of helpers for creating the instances of special classes you need to pass to worker handlers for unit testing. We also provide helpers for listing the durable objects in a durable object namespace, and for force-running durable object alarms, which are kind of like persisted setTimeouts. And we provide helpers for applying migrations to D1 databases, our hosted version of SQLite in the cloud, so you can write tests against those databases with your migrations applied.
Last but not least, people are wondering: how do you create these cool animations in your presentation? What did you actually use to create it? Yeah, so this is a Keynote presentation, which is included, I think, on most versions of macOS. The animations are Magic Move transitions: essentially, you create two slides, tell Keynote you want a Magic Move, and it automatically figures out how to transition between them. Thank you so much. I love how we learned not only about the deep technology happening at Cloudflare, but also got a little mini tutorial on using Keynote in a way that makes people at a conference take notice. Brendan, thank you so very much for this amazing talk and for joining us today. It was a real pleasure meeting you. And now we're going to go over to Sharon.
Table Of Contents
1. Introduction to Cloudflare Workers and Testing
2. Vitest Overview and Implementation
3. Vitest Worker Thread and Test Runner
4. Custom Pool and Dynamic Code Execution
5. Dynamic Code Evaluation and Module Fallback
6. Request Context and Durable Objects
7. Integration Tests and Durable Object Context
8. Durable Objects and Developer Experience
9. Isolated Per Test Storage and Seeding Data
10. Testing Describe Blocks and Durable Objects
11. Durable Object Request Handling
12. Handling Outbound Fetch Requests
13. Handling Outbound Fetch Requests - Conclusion
Introduction and Cloudflare Workers Experience
SQLite, Parallel Testing, Helpers, and Keynote