Let's say each chunk is way more than 64 kilobytes, so the data really flows in faster than the consumer can handle. So we have a simple fast-producer class that sends a readable, I use the read method, and I made the consumer really slow. So here we have the slow consumer. Here I want to again show you the memory usage. First we run it in the bad way, when we don't handle backpressure. After I run it, you will see how much memory it takes and how inefficient it is, because if we're not handling backpressure, we're kind of canceling all the benefits that streams give us. But if we run it in manual mode, which does handle backpressure, you can see how the memory stays flat and low. And this is pretty amazing, because we're talking about the same file, the same amount of data. But look what backpressure can do for our app if we handle it correctly, instead of using all this memory.
And if you want to take your apps from "fine, maybe it works, maybe it doesn't" and unpredictable, which is okay maybe for small apps, to the next level, which is what happened to me, look at the main difference. And if we're talking about efficiency, this is just a game changer, at least for me. So let's go through one more example. As I said, say I have a CSV file with a lot of IDs and users, and here is the naive way. I want to find, let's say, ID number 10, because it's New York and we want to respect the Giants, or the Jets fans. So we want to find the user with ID 10. If we have millions of users, in the naive way we need to load the whole file, while with streams we can stop after we find what we needed.
So let's say we have a small CSV file of, I don't know, five megabytes, I think, and we run this example. You can see how the streams version stopped after 12 lines, running in nine milliseconds. The naive way loads the total of 2,000 lines into memory, but takes 15 milliseconds, not so bad. But when the file path points to a much larger file, I think here we have a 15-megabyte file, and we save and run it, you can see this one now takes 13 milliseconds, and here, 77. And you can see how the memory grows and grows in the naive way, and how it stays stable with streams, because after we find what we needed, we can stop and just use it from the code. And with a one-gigabyte file, the amazing part is that our app crashes, because we used more memory than we actually had. So this is also a very useful pattern. And the last thing I want to show you is the difference between pipe and pipeline.