Or which files you want to copy, or whether you want to programmatically follow the container's logs, and so on. You can configure everything from your application tests, right from your IDE, and you can do this any number of times. So the tests bring their own environment into play. And it also integrates with various frameworks and test libraries.
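To make that concrete, here is a minimal sketch, assuming the testcontainers npm package; the image name and file paths are placeholders, and the code would live inside an async test body.

```typescript
import { GenericContainer } from "testcontainers";

// Start a throwaway container, copying a local fixture file into it.
const container = await new GenericContainer("alpine:3.19")
  .withCopyFilesToContainer([
    { source: "./fixtures/config.json", target: "/app/config.json" },
  ])
  .withCommand(["sleep", "infinity"])
  .start();

// Programmatically follow the container's log output from the test.
(await container.logs())
  .on("data", (line) => console.log(line))
  .on("err", (line) => console.error(line));

await container.stop();
```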
For example, there is a Jest module for Testcontainers, which simplifies working with Jest and Testcontainers and lets you declaratively specify which containers you want. testcontainers-node, like the other Testcontainers implementations, is an open source project. Christian, currently the main maintainer, is a true hero of the testcontainers-node implementation. There is an npm package, which is how you get Testcontainers into your application. Under the hood it uses dockerode to talk to the Docker environment, so your Docker environment doesn't need to be any particular Docker implementation. It of course runs with Docker Desktop, but it can also run with any other compatible Docker implementation. For example, if you're running Minikube, the lightweight Kubernetes cluster that exposes a Docker API, you can use that to run your Testcontainers-based tests. Or if you're using a remote Docker daemon, Testcontainers can talk to that. And internally at AtomicJar, we're building a cloud solution where you get an on-demand VM and run your Testcontainers tests against that. So it's a very, very flexible setup, and it works really well.
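As a rough illustration (the Redis image and port are just an example), a plain Jest test using the npm package might look like the sketch below; pointing DOCKER_HOST at a remote daemon is enough to run the same test against remote Docker.

```typescript
import { GenericContainer, StartedTestContainer } from "testcontainers";

describe("redis", () => {
  let container: StartedTestContainer;

  beforeAll(async () => {
    // Works against Docker Desktop, a remote daemon (DOCKER_HOST),
    // Minikube's Docker API, or any other compatible implementation.
    container = await new GenericContainer("redis:7")
      .withExposedPorts(6379)
      .start();
  }, 60_000);

  afterAll(async () => {
    await container.stop();
  });

  it("exposes a host and mapped port the test can connect to", () => {
    const url = `redis://${container.getHost()}:${container.getMappedPort(6379)}`;
    expect(url).toMatch(/^redis:\/\//);
  });
});
```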
One thing that is very important here is that Testcontainers takes care of the cleanup of the containers. We know that for reliable integration tests you need a repeatable environment, and for that you always want to clean up after the run. That means if your tests pass, we clean up the containers and remove them. If your tests fail, we clean up the containers and remove them. If you run against a remote Docker environment and your machine crashes, or the Internet blows up, we still clean up the containers on the remote Docker host. That means you will never be in a situation where your test connects to a Kafka instance that you started two weeks ago and that is lingering for some reason on your beefy CI machine. The issues that arise from that are really, really hard to reproduce and incredibly hard to debug and fix. So the Testcontainers libraries try to nudge you in the right direction: to parallelize tests nicely, to use the correct APIs, and to do the cleanup at all times. And in general, it's a very, very popular approach.
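A minimal sketch of the cleanup pattern this encourages, assuming you stop containers explicitly in the test; even if the test process dies, the library's resource-reaper side container removes anything left behind.

```typescript
import { GenericContainer } from "testcontainers";

test("containers are cleaned up whether the test passes or fails", async () => {
  const container = await new GenericContainer("postgres:16").start();
  try {
    // ... run assertions against the containerized service here ...
  } finally {
    // Explicit cleanup covers both the passing and the failing path.
    await container.stop();
  }
  // If the process itself crashes, or the connection to a remote Docker host
  // is lost, the resource-reaper container removes the orphaned containers.
});
```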
Besides being a good library by itself, Testcontainers comes with an ecosystem of modules, where popular technologies have small libraries that specify and encode how to run that particular technology from your code. So you don't have to figure out how to run Cassandra or Kafka in a Docker container; you just use the API and say, give me a Kafka container, give me a MongoDB container, and you immediately get an instance for yourself. That is great because it allows you to concentrate on the actual business logic of your tests without spending time on the infrastructure, because that is managed by Testcontainers. And it's not just a Node project, right? Good integration tests are required in every ecosystem, in every programming language. So you can take the same approach in your Java applications, your .NET applications, your Go applications; there's Testcontainers for Python, there's Testcontainers for Rust. So it's a very, very popular engineering approach. And now I would like to show you a little bit how it feels and what the building blocks of the API are that you need to know to be productive with Testcontainers.
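For example, with the Kafka and MongoDB modules (assuming the scoped @testcontainers/kafka and @testcontainers/mongodb packages; the image tags are only examples), asking for a container looks roughly like this:

```typescript
import { KafkaContainer } from "@testcontainers/kafka";
import { MongoDBContainer } from "@testcontainers/mongodb";

// The module encodes how to run the technology; the test just asks for an instance.
const kafka = await new KafkaContainer("confluentinc/cp-kafka:7.5.0").start();
const mongo = await new MongoDBContainer("mongo:7").start();

// ... connect the code under test to the started containers here ...

await Promise.all([kafka.stop(), mongo.stop()]);
```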