When you're using Node.js with version 2 of the AWS SDK, you need to set this environment variable on all of your Lambda functions to enable HTTP keep-alive, which will save you about 10 to 20 milliseconds on every single request your Lambda function makes to other AWS services.
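For reference, the environment variable in question for the Node.js AWS SDK v2 is `AWS_NODEJS_CONNECTION_REUSE_ENABLED`. A minimal sketch of setting it, assuming you deploy with the Serverless Framework (which comes up later in the talk):

```yaml
# serverless.yml (sketch; function names and runtime version are illustrative)
provider:
  name: aws
  runtime: nodejs16.x
  environment:
    # tells the Node.js AWS SDK v2 to reuse TCP connections (HTTP keep-alive)
    AWS_NODEJS_CONNECTION_REUSE_ENABLED: "1"
```

Setting it at the `provider` level applies it to every function in the service, which matches the advice to enable it on all of your Lambda functions.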
And if you're using Lambda with RDS, then you want to use RDS Proxy to manage the connection pooling. Otherwise, you're likely to run into problems with having too many connections to the RDS cluster.
And as far as cold starts are concerned, having a smaller deployment artifact also means a faster cold start time. But unfortunately, adding more memory doesn't actually help reduce the cold start time, because during a cold start your Lambda function already runs the initialization code at full CPU. This is true for, as far as I know, all language runtimes except Java, because for Java, part of the initialization happens during the first invocation after the init phase.
So I think that's why, when it comes to Java, the extra CPU resources you get from a higher memory setting can also help reduce cold start time. But that's not the case for any of the other language runtimes I've tested.
The thing that's going to help you more in reducing cold start time is actually just trimming your dependencies. Having fewer dependencies means a smaller deployment artifact, but also less time for the runtime to initialize your function module as well.
And one thing that I've found really useful is to bundle my dependencies into a Lambda layer, so that I don't have to upload them every single time when they haven't changed between deployments. It's a great way to optimize your deployments, helping you reduce both the cold start time and the deployment time as well.
However, Lambda layers are not a good substitute for general-purpose package managers like npm or Maven, and you shouldn't use them as the primary way to share code between projects, because for starters, there are a lot more steps to make them work. With something like npm, publishing and consuming a new package is really straightforward, and there's support for scanning your dependencies against known vulnerabilities. Publishing a new version of a Lambda layer and then bringing that new version into a project takes a lot more work, and you don't get semantic versioning or the other tooling that comes with npm.
My preferred way to use Lambda layers is with a plugin for the Serverless Framework, which packages your dependencies into a Lambda layer during deployment, uploads it to S3, and then updates all of your functions to reference this layer. The great thing about this plugin is that it detects when your dependencies have changed. So if they haven't changed on a subsequent deployment, it doesn't have to publish a new layer version, and your deployment is much faster as a result.
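The underlying mechanism the plugin automates can be sketched by hand in the Serverless Framework's own `layers` syntax; the layer and function names below are illustrative, not from the talk:

```yaml
# serverless.yml (sketch): define a layer from a local folder and reference it.
# The folder layer/nodejs/node_modules holds the bundled dependencies.
layers:
  dependencies:
    path: layer
    description: third-party npm dependencies

functions:
  hello:
    handler: handler.hello
    layers:
      # the Serverless Framework exposes the layer as <TitleCasedName>LambdaLayer
      - { Ref: DependenciesLambdaLayer }
```

The plugin's advantage over this manual setup is the change detection: it only publishes a new layer version when the dependencies actually change.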
And another trick for reducing cold start time is to not reference the full AWS SDK. If you just need one client, then requiring that client directly helps reduce the time it takes for the Node runtime to initialize your function module, and therefore makes your cold starts faster.
And if you're using Lambda to process events asynchronously, such as with SNS, S3, or EventBridge, then you also need to configure some sort of dead-letter queue (DLQ) to capture any failed invocation events so that they're not lost. Nowadays, you should use Lambda destinations instead of Lambda DLQs, because a DLQ only captures the invocation payload and not the error, so you have to go back to your logs or wherever to figure out why the invocation failed in the first place before you can decide whether the event can be reprocessed. With Lambda destinations, you capture both the invocation payload as well as the context around the invocation and the response, which, in the case of a failure, includes the error from the last attempt. So you don't have to go fishing for it yourself.
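To make the difference concrete, here's a sketch of the record an `OnFailure` destination receives, following the shape AWS documents for asynchronous invocation destinations; all field values below are illustrative, not from the talk:

```javascript
// Shape of a Lambda OnFailure destination record (values are made up for illustration).
const failureRecord = {
  version: '1.0',
  timestamp: '2021-03-01T00:00:00.000Z',
  requestContext: {
    requestId: 'hypothetical-request-id',
    functionArn: 'arn:aws:lambda:us-east-1:123456789012:function:my-fn:$LATEST',
    condition: 'RetriesExhausted',       // why delivery to the destination happened
    approximateInvokeCount: 3,           // how many attempts were made
  },
  requestPayload: { orderId: 42 },       // the original event, so you can replay it
  responseContext: {
    statusCode: 200,
    executedVersion: '$LATEST',
    functionError: 'Unhandled',
  },
  responsePayload: {                     // the error from the last attempt
    errorType: 'Error',
    errorMessage: 'boom',
  },
};

// Unlike a DLQ message, the error travels with the event:
console.log(failureRecord.responsePayload.errorMessage); // "boom"
```

A DLQ message would contain only something like `requestPayload`, which is why you'd otherwise need to correlate it with your logs to find the error.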
And that brings me to the end of my presentation. As I mentioned before, I spend most of my time as an independent consultant, and if you want to have a chat and see how Serverless can help you, and how we might be able to work together, then go to theburningmonk.com to see how we might work together to help you succeed with Serverless. And if you want to learn some of the tips and tricks I've picked up over the years, I'm also running a workshop in March, and you can get 15% off with the code yenprs15. And with that, thank you guys very much for your attention. If you've got any questions, feel free to let me know, and we can discuss them now.

Oh, wow, 27% of our audience has been using Terraform as their deployment framework, and in second place we have 23% for Serverless, 18% for CloudFormation, 14% for CDK, another 4% for something else, and 5% for SAM. How do you feel about this, Yan? Is this what you were expecting?

Not at all, actually.