So, essentially, this is a really fast web server — a web framework for Node.js. You can build small and large apps with it, and it works really well both for monoliths and for microservices.
Now, why would you need microservices? Because you need to scale teams. Microservices are a clear way of scaling teams: different teams maintain different parts of your system, so they don't step on each other's toes. It's actually great.
However, one of the problems of a microservices system is its chattiness. All the microservices talk a lot with each other, because you often need data that is managed by some other microservice. So you actually have a lot of communication between the various microservices. From time to time, somebody will call this a microservices mesh. And what we are going to focus on for most of this presentation is this link between microservices. I've been researching this problem for three, four, five years, something like that. So it has been brewing in my head for some time.
Any HTTP server will do here, even the most basic one. So let's consider a very simple server that just waits a timeout of one millisecond. Very simple. This simulates a very fast database that always replies with "hello world" in one millisecond. Great. And for the client, we'll use Node core HTTP. Why are we focusing only on Node core HTTP? Well, because Axios, node-fetch, request, got — they all use it as their backing. And every single time you make a request like this, they create a new TCP socket.
So essentially, when the client opens a TCP socket, the two sides need to do a little bit of a dance. This is typically one full round trip just to establish the connection, which is quite a lot, OK? Depending on the physical distance between the two endpoints, it can take 10, 20 milliseconds, something like that. We're talking small numbers, but remember, you have maybe 200 milliseconds to respond to your client, or maybe 400, whatever your budget is. And the more hops you do, the higher your latency gets. So you don't really want to spend time on this, because once the three-way handshake has finished, you still haven't transferred any data — you've only created the socket. Consider that if you're using TLS or SSL, it takes even longer. And that's not all: once you create a TCP socket, there's a concept called the congestion window, which is kept small at the beginning for newly established sockets, so new connections start out slow.