Effective Performance Testing of Your Server with Autocannon
From the author:
Performance testing expertise takes a long time to develop. To measure your server's performance, you need a tool that can efficiently simulate heavy load and give you good measurements according to your analysis criteria.
The Autocannon NPM library gave me exactly that: it is super easy to install and has a very simple API to work with. Within a very short amount of time you can start performance testing your application, get good measurements in your development environment and in your performance labs, and generate complicated testing scenarios.
In this talk I will introduce Autocannon, explain how to efficiently analyse your server's performance with it, and show how it helped me understand complicated performance issues in my Node.js servers. By the end of this lecture, developers will be able to integrate a fast and easy tool to measure their server's performance.
This talk was presented at TestJS Summit 2021. Check out the latest edition of this JavaScript conference.
FAQ
Tamar has been writing code for many years and has a passion for it. She has managed large development groups and worked as an architect in several places. Currently, she leads back-end development at XM Cyber.
XM Cyber is a startup that simulates hacker activities within computer networks to help organizations understand vulnerabilities and improve security.
Tamar became an expert in Node.js when she founded her own startup and wrote the entire back-end using this technology. She fell in love with Node.js and started to investigate and understand its deeper parts.
Performance testing is essential to understand how much load a server can handle. It involves measuring various metrics like throughput, concurrent users, and response times to ensure that a server can handle expected traffic.
Throughput refers to the number of requests a server can handle per second. It is a critical metric in performance testing to understand the server's capacity.
Concurrent users refer to the number of users accessing the server at the same time. This metric helps understand how the server scales under load.
The 99th percentile is crucial because it indicates that 99% of requests were faster than a specific time. This metric is more reliable than the average response time when making response-time commitments to third parties.
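To make that concrete, here is a small, self-contained sketch (the latency numbers are made up) showing how a single slow outlier skews the average while a high percentile still reflects the typical user experience:

```javascript
// Illustrative sketch with made-up latencies (ms): the p-th percentile is
// the value that p% of requests were at least as fast as.
function percentile (latencies, p) {
  const sorted = [...latencies].sort((a, b) => a - b)
  const idx = Math.ceil((p / 100) * sorted.length) - 1
  return sorted[idx]
}

const latencies = [10, 11, 12, 11, 10, 12, 11, 10, 11, 500] // one slow outlier
const avg = latencies.reduce((a, b) => a + b, 0) / latencies.length

console.log('average:', avg)                    // 59.8 — skewed by the outlier
console.log('p90:', percentile(latencies, 90))  // 12 — the typical experience
```

This is why benchmarks report the 99th percentile separately: the average alone can hide (or exaggerate) how the slowest requests behave.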
AutoCannon is a tool for performance testing and benchmarking written in Node.js. It simulates sending multiple requests simultaneously, controlling concurrent users, and managing runtime to test server performance.
You can install AutoCannon via the command line using npm with the command 'npm install autocannon -g' for global installation. For use within JavaScript code, you can install it using 'npm install autocannon --save'.
HTTP pipelining allows sending multiple requests without waiting for the first one to return, whereas concurrent connections simulate multiple users accessing the server simultaneously. Both are essential for accurate performance testing.
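The distinction can be made concrete with a little arithmetic: with c connections each pipelining p requests, up to c × p requests can be in flight at once. In autocannon these map to the `connections` and `pipelining` options (`-c` and `-p` on the command line); the helper below is just an illustration of the multiplication, not part of autocannon's API:

```javascript
// Upper bound on in-flight requests for a given configuration:
// `connections` simulated users, each allowed `pipelining` unacknowledged
// requests on its open connection.
function maxInFlight (connections, pipelining) {
  return connections * pipelining
}

console.log(maxInFlight(100, 1)) // 100 users, no pipelining → 100
console.log(maxInFlight(10, 10)) // 10 users pipelining 10 each → 100
```

Both configurations put the same number of requests in flight, but they stress the server differently: the first exercises connection handling, the second exercises per-connection request processing.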
Video Transcription
1. Introduction to Tamar and her expertise
Hi, everyone. I'm Tamar, a passionate code writer with extensive experience in managing development groups and working as an architect. Currently leading back-end development at XM Cyber, a startup that simulates hacker activity. I'm an expert in Node.js and have a deep understanding of its inner workings. Follow me on Twitter for more insights and check out my previous lectures on YouTube. I'm also a professional violin player and a community leader in the JavaScript Israel community. Join our meetups if you're in Israel.
Hi, everyone. I'm very happy that you have come to my session about performance testing with Autocannon. But first, before we get to the technical stuff, I would like to introduce myself. I am Tamar. I have been writing code for many years, and it's my passion. In addition to that, I have managed large development groups and worked as an architect in several places. Currently I lead the back-end development in a startup called XM Cyber. It's a really cool startup: what we're doing is imitating the activity of a hacker in a computer network. I'm also an expert in Node.js. My interest in Node.js started when I founded my own startup and wrote my entire back-end with Node.js. At that point I really fell in love with the technology, and I started to investigate it and to understand its deeper parts. From then on I was really focused on that technology, and it's definitely my favorite. You can follow me on Twitter, and you can find my previous lectures on YouTube. In addition, I have three kids, I'm a professional violin player, and I'm a community leader in the JavaScript Israel community. We organize really cool meetups, so if you happen to be in Israel and encounter a JavaScript Israel meetup, it's really nice to be there. It's recommended.
2. The Mystery of Performance Testing
Let's talk about the mystery of performance testing. It can be confusing due to unfamiliar terminology and measurements. Key concepts include throughput, concurrent users, the 99th percentile, and average response time. Understanding these terms is crucial for simulating servers and improving performance. The main goal of performance testing is to determine the server's load capacity. Working with one docker container helps gauge throughput, and duplicating containers increases the number of concurrent requests. The 99th percentile of response time and average throughput are essential metrics to consider.
Okay, so now let's go to the technical part of the lecture. I would like to talk a little bit about the mystery of performance testing. Why do I call it a mystery? Because the first time I did performance testing, I felt like I was climbing a mountain. It was very, very hard and confusing. Why? Because I had so many questions: everybody was talking about a lot of terminology that I did not understand.
So which terminology do I mean? Well, when you're doing performance testing, you're dealing with a lot of terms and a lot of measurements that you may not be familiar with, and at least for me, at the beginning, they left me a little confused. First of all, the throughput of the server. How do you measure the throughput of the server, and what does it mean? I can simulate a lot of scenarios in a lot of ways, so what is the best way to actually measure it? In addition to that, concurrent users. How does the number of concurrent users affect my scale? What is that measurement, how do you simulate it, and what is the difference between it and HTTP pipelining? Another thing that is very common when you're talking about performance testing and benchmarking is the 99th percentile. What is the 99th percentile, and why is it so important? Because sometimes, when I measure, and when other people measure, we look at the 99th percentile much more than at the average. And the last thing is the response time. How do you measure it, and should you look at the average or at the 99th percentile? There is also the standard deviation of the benchmark that needs to be taken into account. All of these, when I first encountered them, left me very confused. I had to understand exactly what I was doing in order for my tests to mean something and really improve my performance.
So let's explain all those terms, just at a high level, to get you oriented. First of all, the main goal of performance testing is to understand how much load our server can handle. Usually, in my opinion, you work with one docker container in performance testing, and you simulate HTTP requests to that one container in order to understand what throughput it can handle. If this one container can handle 100 concurrent requests, then when you duplicate it and create another replica, you're able to handle 200 requests; with three replicas, 300 requests; and so on. So it's really important to understand how much load one docker container can actually handle. The important questions to ask are: what is the 99th percentile of our response time, and what is the throughput, meaning how many concurrent requests can we handle on average? Those are very important questions. Let's start with the 99th percentile of the response time.
QnA
Check out more articles and videos
We constantly publish articles and videos that might spark your interest, skill you up, or help you build a stellar career.
Workshops on related topics
Back then, Ivan didn’t know how to use performance devtools well. He would do a recording in Chrome DevTools or React Profiler, poke around it, try clicking random things, and then close it in frustration a few minutes later. Now, Ivan knows exactly where and what to look for. And in this workshop, Ivan will teach you that too.
Here’s how this is going to work. We’ll take a slow app → debug it (using tools like Chrome DevTools, React Profiler, and why-did-you-render) → pinpoint the bottleneck → and then repeat, several times more. We won’t talk about the solutions (in 90% of the cases, it’s just the ol’ regular useMemo() or memo()). But we’ll talk about everything that comes before – and learn how to analyze any React performance problem, step by step.
(Note: This workshop is best suited for engineers who are already familiar with how useMemo() and memo() work – but want to get better at using the performance tools around React. Also, we’ll be covering interaction performance, not load speed, so you won’t hear a word about Lighthouse 🤐)
In this three-hour workshop we’ll introduce React Testing Library along with a mental model for how to think about designing your component tests. This mental model will help you see how to test each bit of logic, whether or not to mock dependencies, and will help improve the design of your components. You’ll walk away with the tools, techniques, and principles you need to implement low-cost, high-value component tests.
Table of contents:
- The different kinds of React application tests, and where component tests fit in
- A mental model for thinking about the inputs and outputs of the components you test
- Options for selecting DOM elements to verify and interact with them
- The value of mocks and why they shouldn’t be avoided
- The challenges with asynchrony in RTL tests and how to handle them
Prerequisites:
- Familiarity with building applications with React
- Basic experience writing automated tests with Jest or another unit testing framework
- You do not need any experience with React Testing Library
- Machine setup: Node LTS, Yarn
QwikCity is a new meta-framework that allows you to build large-scale applications with constant startup performance. We will look at how to build a QwikCity application and what makes it unique. The workshop will show you how to set up a QwikCity project, how routing works with layouts, how the demo application fetches data and presents it to the user in an editable form, and finally how one can use authentication. All of the basic parts for any large-scale application.
Along the way, we will also look at what makes Qwik unique, and how resumability enables constant startup performance no matter the application complexity.
End-to-end tests rely on many conditions and are often considered slow and flaky. On the other hand, end-to-end tests can give the greatest confidence that your app is working. And if done right, they can become an amazing tool for boosting developer velocity.
Detox is a gray-box end-to-end testing framework for mobile apps. Developed by Wix to solve the problem of slowness and flakiness and used by React Native itself as its E2E testing tool.
Join me on this workshop to learn how to make your mobile end-to-end tests with Detox rock.
Prerequisites:
- iOS/Android: macOS Catalina or newer
- Android only: Linux
- Install before the workshop