But if memory consumption increases, there is more garbage collection to do, and garbage collection runs on the CPU. So as memory usage grows, GC activity grows, and the CPU gets filled up with garbage-collection work. Now, that's the problem, you see, because latency and throughput are deeply connected. If I have a slow operation in my code, it will increase concurrency, which will increase memory pressure, which will slow down the CPU. And because of that CPU slowdown, the backlog of events to process can build up again, and we enter a sort of catastrophic cycle where, until the pressure is relieved somewhere, everything is blocked.
So, you know, the trick here is to make sure that most of the processing is done as fast as it possibly can be, and that it does not schedule any more work. That's it, that's the answer to this. In the context of logging, this means that if you want to send the same log line to multiple destinations, or especially over a network, it becomes problematic: that data will stay alive for longer, and we are actually creating more work for our event loop. In Pino, we do all the data processing synchronously. So whenever you call .info, you can be certain that, with the defaults, all the processing is done within the same macrotask. There is nothing scheduled to be executed later. This is phenomenal and really important for getting good throughput, because we make sure there is no memory allocation left over, and memory gets recycled very easily. Now, you can turn on async mode in Pino so that logging is flushed after a bit rather than written immediately, but that's a little bit tougher on the debugging side of things. We'll talk about that in a second.
In the original presentation about Pino from 2016, we had this slide saying that Pino is more than a module, it's a way of life. Well, Pino is more than a module. And at the beginning, we flagged this because it was clear that we were proposing something radically different. What were we proposing in 2016? Well, we were telling people: look, you need to implement your transports as separate processes, or rely on your infrastructure. So you log to standard output, your infrastructure picks up that standard output and sends it somewhere else. Yes, we still recommend that; that has not changed. However, several teams have reached out to us, and we've felt it ourselves: but we really need to do the shipping in our own process. The typical target is sending your logs to Datadog or Elasticsearch or something like that. Well, Pino will soon allow you to do that from the same process without blocking the main thread. We'll see that in a moment. You know, what happened was that people kept asking us for the same features. They wanted to format their logs, they wanted to send them to remote destinations, they wanted to implement log rotation, which was a big thing, and they wanted to send the same logs to multiple places at the same time.
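The "rely on your infra" approach described above can be as simple as a shell pipe: the app only writes newline-delimited JSON to stdout, and a separate transport process does the formatting or shipping. A sketch, assuming the `pino-pretty` and `pino-elasticsearch` transport packages are installed (flags may vary by version):

```shell
# The app just logs JSON to stdout; a separate process handles
# formatting or shipping, off the app's event loop entirely.
node app.js | pino-pretty                                      # human-readable output locally
node app.js | pino-elasticsearch --node http://localhost:9200  # ship to Elasticsearch
```

Because the transport is its own OS process, a slow destination backs up the pipe rather than the application's event loop.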