And this shifted responsibility: instead of having to think about the computers, you could just leave it all to Amazon. A few years later, we started seeing the rise of platform as a service, which shifted even more responsibility, where you had to worry less about the computer your application runs on and could leave that to your cloud provider. And then a few years later again, we saw the rise of functions as a service with AWS Lambda; I can't think of any other examples at the moment, but it was great because we worried less about our servers and focused only on building the code that runs our applications. And this was great because you could scale your app almost infinitely: in the event there was a surge of traffic in your application, you could scale up to, say, 1,000 instances of your function running at the same time.
And then, in the event there was no traffic, it would scale all the way down to zero, and you would only pay for exactly what you used instead of paying for a long-running server. But this came with a few challenges, especially when it came to working with relational databases. In this talk, I'll cover three concrete challenges that developers and teams experienced. The biggest one of them all was connection management: how you handle the connections to your database. So let's take the example of a function, denoted by this lovely UFO here. If your function had to interact with the database, and you're using a query builder or a database connector with the default configuration, it would likely open multiple connections. And this is okay if you have a single Lambda that runs once in a long while, because your database would be pretty much relaxed. But the main challenge comes in when you have a surge of traffic and an entire swarm of lambdas.
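To make that concrete, here is a minimal back-of-the-envelope sketch in Python. The numbers are illustrative assumptions for this example (100 is the common Postgres default for `max_connections`; the pool size of 5 per function instance is hypothetical), not guarantees from any specific provider or library:

```python
# Rough model of connection exhaustion under a swarm of lambdas.
# Each Lambda instance opens its own connection pool; pools are not
# shared across instances, so connections multiply with concurrency.

MAX_CONNECTIONS = 100      # common Postgres default for max_connections
POOL_SIZE_PER_LAMBDA = 5   # hypothetical default pool size of a query builder


def connections_needed(concurrent_lambdas: int) -> int:
    """Total connections requested when N instances run at once."""
    return concurrent_lambdas * POOL_SIZE_PER_LAMBDA


def max_lambdas_before_exhaustion() -> int:
    """How many concurrent instances fit before the database runs out."""
    return MAX_CONNECTIONS // POOL_SIZE_PER_LAMBDA


print(connections_needed(3))            # a quiet day: 15 connections
print(connections_needed(100))          # a surge: 500 connections requested
print(max_lambdas_before_exhaustion())  # only 20 instances fit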
Not one or two or three or four, but an entire swarm. In this case, your database is in a state of panic, because it usually supports a limited number of connections at any given point in time. Before you know it, you'll be out of connections, and your functions will start running into errors and failing. This is not ideal, because sadly, your database is unable to scale up with your functions in serverless.

A few other problems that we still experience with serverless include function timeouts, which make it less than ideal for long-running processes. If you have, say, a batch operation that takes an hour to perform, most cloud providers set a limit on how long your function can run, which doesn't work well for a process that runs really long. And another challenge we still run into is cold starts, which affect the latency of your function and don't give an optimal experience for your users.

But it's not all bad, because we found solutions, and that's great, because it pushes innovation forward. One of them was setting the connection pool size to one when connecting to your database. So instead of opening multiple connections, each function is limited to only one. However, this is not ideal either, because if you have, for example, that batch operation inserting a thousand records, the inserts would run sequentially instead of in parallel, which makes it a little slow.

That's okay, because we have another possibility: defining a concurrency limit. As I mentioned, if you have a surge of traffic, your cloud provider usually controls how many lambdas can run at any given point in time. So you can go to your AWS console, for example, and instead of having 100 lambdas running concurrently, limit it to, say, 10 or 20 at any point in time.
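The pool-size-of-one tradeoff can be sketched with the same kind of toy arithmetic. This is a simplified model, assuming a hypothetical 5 ms latency per insert and that inserts run in waves of `pool_size` concurrent queries:

```python
import math

# Illustrative model of why a pool size of 1 slows down batch work.
# The 5 ms per-insert latency is an assumption made up for this example.

INSERT_LATENCY_MS = 5
RECORDS = 1000


def batch_duration_ms(pool_size: int) -> int:
    """Inserts run in waves of `pool_size` concurrent queries."""
    waves = math.ceil(RECORDS / pool_size)
    return waves * INSERT_LATENCY_MS


print(batch_duration_ms(1))   # pool of 1: fully sequential, 5000 ms
print(batch_duration_ms(10))  # pool of 10: ten inserts in flight, 500 ms
```

The concurrency limit itself is set outside your code; on AWS, for instance, it can be done from the console or with the CLI's `aws lambda put-function-concurrency --function-name myFn --reserved-concurrent-executions 10` (function name hypothetical).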
But the most robust solution of them all is using an external connection pooler like PgBouncer, which is responsible for managing all the connections that go to your database.
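As a rough sketch, a minimal PgBouncer configuration might look like the fragment below. The host, database name, and sizes are hypothetical placeholders; the important settings are `pool_mode` (transaction pooling lets many short-lived clients share few server connections) and `default_pool_size` (how many actual connections PgBouncer holds open to Postgres):

```ini
[databases]
; route clients asking for "mydb" to the real Postgres server
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
; reuse a server connection per transaction, not per client session
pool_mode = transaction
; many lambdas may connect, but only a small pool reaches Postgres
max_client_conn = 1000
default_pool_size = 20
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
```

Your functions then connect to PgBouncer on port 6432 instead of to Postgres directly, so a swarm of lambdas no longer maps one-to-one onto database connections.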