Effective Performance Testing for Your Server with Autocannon

Performance testing is expertise that takes a long time to develop. In order to measure your server's performance, you need a tool that can efficiently simulate heavy load and give you good measurements according to your analysis criteria.

The Autocannon NPM library gave me exactly that - it is super easy to install and has a very simple API to work with. Within a really short amount of time you can start performance testing your application, get good measurements in your development environment and in your performance labs, and generate complicated testing scenarios.

In this talk I will introduce Autocannon, explain how to efficiently analyse your server performance with it, and show how it helped me understand complicated performance issues in my Node.js servers. By the end of this lecture, developers will be able to integrate a fast and easy tool to measure their server's performance.

Video Summary and Transcription
The video explores performance testing using AutoCannon, a tool written in Node.js that simulates HTTP requests to evaluate server load capacities. It emphasizes the importance of understanding key metrics like throughput, concurrent users, and the 99th percentile, which helps ensure a server can handle expected traffic. The video demonstrates how to install AutoCannon via npm and provides insights into using it for load testing to measure server performance. It also discusses integrating performance testing with Cypress and highlights the benefits of using HTTP pipelining and concurrent connections. By simulating real-life scenarios, developers can better understand server behavior under typical usage conditions and make necessary improvements.

This talk has been presented at TestJS Summit 2021, check out the latest edition of this JavaScript Conference.

FAQ

Tamar has been writing code for many years and has a passion for it. She has managed large development groups and worked as an architect in several places. Currently, she leads back-end development at XM Cyber.

XM Cyber is a startup that simulates hacker activities within computer networks to help organizations understand vulnerabilities and improve security.

Tamar became an expert in Node.js when she founded her own startup and wrote the entire back-end using this technology. She fell in love with Node.js and started to investigate and understand its deeper parts.

Performance testing is essential to understand how much load a server can handle. It involves measuring various metrics like throughput, concurrent users, and response times to ensure that a server can handle expected traffic.

Throughput refers to the number of requests a server can handle per second. It is a critical metric in performance testing to understand the server's capacity.

Concurrent users refer to the number of users accessing the server at the same time. This metric helps understand how the server scales under load.

The 99th percentile is crucial as it indicates that 99% of the requests were faster than a specific time. This metric is more reliable than the average response time for commitments to third parties.

AutoCannon is a tool for performance testing and benchmarking written in Node.js. It simulates sending multiple requests simultaneously, controlling concurrent users, and managing runtime to test server performance.

You can install AutoCannon via the command line using npm with the command 'npm install autocannon -g' for global installation. For use within JavaScript code, you can install it using 'npm install autocannon --save'.

HTTP pipelining allows sending multiple requests without waiting for the first one to return, whereas concurrent connections simulate multiple users accessing the server simultaneously. Both are essential for accurate performance testing.

1. Introduction to Tamar and her expertise#

Short description:

Hi, everyone. I'm Tamar, a passionate code writer with extensive experience in managing development groups and working as an architect. Currently leading back-end development at XM Cyber, a startup that simulates hacker activity. I'm an expert in Node.js and have a deep understanding of its inner workings. Follow me on Twitter for more insights and check out my previous lectures on YouTube. I'm also a professional violin player and a community leader in the JavaScript Israel community. Join our meetups if you're in Israel.

Hi, everyone. I'm very happy that you have come to my session about performance testing with Autocannon. But first, before we really go and do some technical stuff, I would like to introduce myself. So, I am Tamar. I have been writing code for many years, and writing code is my passion. In addition to that, I have managed large development groups and I have worked as an architect in several places. Currently I lead the back-end development in a startup called XM Cyber. It's a really cool startup - what we're doing is imitating the activity of a hacker in a computer network. In addition to that, I'm an expert in Node.js. My interest in Node.js started when I founded my own startup and wrote my entire back-end with Node.js. At that point I really fell in love with that technology, and I started to investigate it and understand the deeper parts of it. From that point on I was really focused on that technology, and it's definitely my favorite. You can follow me on Twitter and you can find previous lectures of mine on YouTube. In addition to that, I have three kids, I'm a professional violin player, and I'm a community leader in the JavaScript Israel community. We organize a really cool line of meetups, so if you happen to be in Israel and you encounter a meetup of JavaScript Israel, it's really nice to be there. It's recommended.

2. The Mystery of Performance Testing#

Short description:

Let's talk about the mystery of performance testing. It can be confusing due to unfamiliar terminology and measurements. Key concepts include throughput, concurrent users, the 99th percentile, and average response time. Understanding these terms is crucial for simulating servers and improving performance. The main goal of performance testing is to determine the server's load capacity. Working with one docker container helps gauge throughput, and duplicating containers increases the number of concurrent requests. The 99th percentile of response time and average throughput are essential metrics to consider.

Okay, so now let's go to the technical part of the lecture. I would like to talk a little bit about the mystery of performance testing. Why do I call it a mystery? Because the first time I did performance testing, I felt like I was climbing a mountain. It was very, very hard and confusing. Why was it so hard and confusing? Because I had so many questions - everybody was talking about a lot of terminology that I did not understand.

So which terminology do I mean? Well, when you're doing performance testing, you're talking about a lot of terms and a lot of measurements that you're not familiar with, and at least for me, at the beginning, they left me a little bit confused. First of all, the throughput of the server. How do you measure the throughput of the server? What does that mean? I can simulate a lot of scenarios in a lot of ways, so what is the best way to actually measure the throughput of the server? In addition to that, concurrent users. How would concurrent users affect my scale? What does that measurement mean? How do you simulate it? What is the difference between that and HTTP pipelining? Another thing that is very common when you're talking about performance testing and benchmarking is the 99th percentile. What is the 99th percentile, and why is it so important? Because sometimes when people measure, we look at the 99th percentile much more than we look at the average. So why is the 99th percentile so important? And the last thing is the average response time, or the response time in general. How do you measure it? Do you look at the average or at the 99th percentile? There is also the standard deviation of the benchmark that needs to be taken into account. All of those terms, when I first encountered them, left me very confused. And I had to understand exactly what I was doing in order to understand how to simulate my server, so that the test would mean something and really improve my performance.

So let's explain all those terms a little bit, at a high level, just to orient you. First of all, of course, the main goal of performance testing is to understand how much load our server can handle. Usually, in my opinion, you work with one docker container in performance testing, and then you simulate HTTP requests to that one container in order to understand what throughput it can handle. If this one container can handle 100 concurrent requests, then when you duplicate it and create another replica, you're able to handle 200 requests, and so on - with three replicas, 300 requests. So it's really important to understand how much load one docker container can actually handle. The important questions to ask are: what is the 99th percentile of our response time, and what is the throughput - how many concurrent requests can we handle on average? Those are very important questions. So why are those questions important? First of all, the 99th percentile of the response time.

3. Importance of the 99th Percentile#

Short description:

In performance testing, it's crucial to consider the 99th percentile. This metric is important because it provides a reliable measure of response time. By looking at the 99th percentile, we can confidently make commitments to third parties, knowing that the vast majority of requests are faster than a specific threshold. This ensures a high level of performance and customer satisfaction.

In performance testing, it's really important to look at the 99th percentile. The question is why. Imagine that you have a commitment to a third party, meaning that somebody is using your system and you're telling them: listen, my requests are always faster than, let's say, three seconds. If you went by the average, that's not data you can rely on. Why not? Because you have standard deviation - most of your requests are not exactly at the average, and you can have requests that are far from the average. In that case, it is better to look at the 99th percentile, because if it's three seconds, it means that 99% of my requests were faster than three seconds and only 1% were slower. Then you can feel confident giving that commitment to a third party: hey, this is something I can rely on - my requests are faster than three seconds because my 99th percentile is three seconds. So this is why this is important.
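To make this concrete, here is a small sketch (with hypothetical latency numbers) showing why the 99th percentile is more trustworthy than the average when a few requests are very slow:

```javascript
// Hypothetical response times in milliseconds - one slow outlier.
const latencies = [120, 130, 125, 140, 135, 128, 132, 138, 127, 3000];

const avg = latencies.reduce((a, b) => a + b, 0) / latencies.length;

// p-th percentile: the value below which p% of observations fall.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[idx];
}

console.log(avg);                       // 417.5 - badly skewed by one outlier
console.log(percentile(latencies, 50)); // 130   - the typical request
console.log(percentile(latencies, 99)); // 3000  - exposes the slow outlier
```

A commitment based on the average (417 ms) hides the fact that some users wait three seconds; a commitment based on the 99th percentile does not.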

4. Understanding Concurrent Requests and Autocannon#

Short description:

The average concurrent requests measurement is crucial for understanding server throughput. Autocannon is a tool that simulates requests, allowing simultaneous sending and control of concurrent users and runtime. It is written in Node.js and supports HTTP pipelining, HTTPS, and concurrent connections. HTTP pipelining sends multiple requests back to back without waiting for responses, while concurrent connections simulate users approaching a website. Autocannon can be installed and used from the command line.

Another thing - and I think this is the most valuable measurement for understanding the throughput of your server - is the average concurrent requests: the number of concurrent requests that can be served simultaneously. Here we look at the average and not at the 99th percentile, because in most cases the 99th percentile represents a peak - during a peak your server can handle more, and the maximum concurrent requests is your throughput during a peak. That is also a really important measurement, but for steady throughput the average is the one to watch.

All right, so after speaking about all of that, let's talk about Autocannon and how it gets into the picture. You need a tool that can simulate requests - I'm talking about sending a lot of requests simultaneously. So you need something that will help you send those requests simultaneously, you need to control the amount of concurrent users, and you'd like to control the runtime - you want the tool to run for 15 minutes or 30 minutes, for a period of time. That is how Autocannon gets into the picture as a really good tool for simulating load, doing performance testing, and doing benchmarking.

So, what is Autocannon? Autocannon is a tool for performance testing and benchmarking. It is written in Node.js, which is really cool - it's written in our language. It supports HTTP pipelining, it supports HTTPS, and it supports concurrent connections. I keep mentioning HTTP pipelining and concurrent connections, so let's talk about what they are and what the difference between them is. HTTP pipelining - the diagram is on the left side of this screen - means that I'm sending three requests and I don't have to wait for the first one to return before sending the second one. I'm sending them one after another, without waiting for the responses. So here on the left side of the picture, I'm sending three requests without waiting for a response. Then on the right side, we have concurrent connections. What does that mean? It means we have sockets open from the client to the server, with requests on each socket. That is a good simulation of users approaching your website, because if you have a thousand users approaching your website, you have a thousand concurrent connections. So how do you install Autocannon itself? First of all, you can install it for the command line and use it from the command line - I personally really like to use it from the command line.

5. Installing and Using Autocannon#

Short description:

To install Autocannon, use npm to install it globally with -g. This will give you the command line tool. If you want to use the API in your JavaScript code, install it like any other library. For Node.js, use npm install autocannon --save. Consider installing it with --save-dev instead, since it is a development tool.

You can do npm install autocannon with -g to install it globally, and then you will have the Autocannon command line tool. If you want to use the API from your JavaScript code, then you install it like any other library for a JavaScript project you write. For Node.js, you usually add --save so that the library becomes part of your package.json, so you can definitely do npm install autocannon --save. By the way, in my opinion, we should install it with --save-dev and not --save, because it's a development tool.
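The two installation modes described above look like this on the command line (the package name on npm is `autocannon`):

```shell
# global install - gives you the `autocannon` command line tool
npm install autocannon -g

# local install for use from JavaScript code; as a development tool,
# --save-dev keeps it out of your production dependencies
npm install autocannon --save-dev
```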

6. Demo of Testing the Server with Autocannon#

Short description:

Let's look at this MyServer code. It's a server code written in Express, a popular library for hosting HTTP APIs. The code exposes a route that hashes a password using a CPU-intensive and synchronous algorithm. Testing the server with Autocannon will provide results for evaluation.

But now, after we spoke about it, I would like to show you a cool demo of testing the server with Autocannon. Then we will look at the improved version of the server and compare some results.

All right then. Cool, so let's look at this MyServer code. This is the server code that we have, and I hope that you're all familiar with Express. Express is an extremely popular library for hosting and publishing HTTP APIs for your servers, and the syntax is really clear. Here you're requiring Express and listening on a specific port - in my case 3000. Here you're exposing a route - this is called an Express route. I'm exposing one simple route: an HTTP GET on slash. What this route does is call a hash function that hashes a password. I gave it a password which is a constant here - a random password, as you can see. When you enter here, this function runs a cryptographic algorithm: it generates a hash of the string I passed to it, using a random salt. This algorithm is CPU intensive, but worse than that, it's synchronous. If you look here, I'm using the synchronous API of Node.js. There are no callbacks, promises, or anything like that, which means it will be executed inside the event loop itself - and it will cause a freeze in my code.

So let's try to test the server a little bit with Autocannon and test the results. All right then. So this is the command line. And I think the instance of the server is here. Let's take it down and let's run it.

7. Testing Server Performance and Asynchronous API#

Short description:

The server is up and running on port 3000. Autocannon is used to test the server's performance with parameters C, D, and W. The latency of the run is measured, with the 99th percentile being 1.5 seconds. The average response time is around 800 milliseconds, with a standard deviation of 118 milliseconds. The server can handle an average of 12 requests per second, with a maximum of 13. The second server, implemented in Express, uses an asynchronous API and does not block the event loop.

So here it is, up and running - the server is up on port 3000. Now let's run Autocannon. Which parameters do I give it? -c is the number of concurrent connections, -d, as you can see, is the duration, and -w is the workers. Right now I don't want to give it workers; I'd like it to work with just -c and -d. Let's hit enter.
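The command typed in the demo follows this shape (the connection count is illustrative; the duration matches the 10-second run described below):

```shell
# 100 concurrent connections (-c) for 10 seconds (-d), no worker threads
autocannon -c 100 -d 10 http://localhost:3000
```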

Now let's count to 10. One, two, three, four, five, six, seven, eight - cool, we have the results. Okay, let's see what we have here. First of all, let's look at the latency of that run, meaning the response time. The 99th percentile is 1.5 seconds, meaning 99% of my requests were faster than 1.5 seconds and 1% were slower. As you can see, the average was close to 800 milliseconds, and the standard deviation is 118 milliseconds. The maximum response time is around 1.5 seconds. So this is the data that we have here. Let's remember it: 1.5 seconds is the 99th percentile. Okay.

Then let's look at how many requests per second we handled. We can see that, on average, we were handling 12 requests per second, and at most we were handling 13 requests per second. We never handled fewer than 10 requests per second, so as we can see, our server can always handle at least 10 requests per second.

Now, after doing that, let's stop the server and look at the other server I wanted to show you. Alright - let's go to VS Code; I didn't want to go back to my presentation. You can see here, this is the second server. This is also implemented in Express - a simple Express server that exposes the same API - but here we are working with the asynchronous API, as you can see right here. How do you know that it's asynchronous? A callback is being passed in and resolved, meaning this is not a synchronous operation anymore. It's asynchronous: the work is transferred to a worker thread and does not block my event loop. So this is what we can say about that operation.

8. Analyzing Server Performance with Autocannon#

Short description:

After running the asynchronous version of the server, the 99th percentile improved to 1.4 seconds and the average response time dropped to around 700 milliseconds. The server can now handle an average of 14 requests per second and up to 20 requests per second at peak load. Autocannon is a powerful tool for analyzing server performance and making improvements.

So now, after we've seen that, let's go to the command line and well, first of all, let's find the server that we had. Sorry about that. And as we said, this is the server. Now, I'm running the asynchronous version of the server that should be more efficient. And let's look at the results now.

Okay, we're running it for 10 seconds, remember? One, two, three, four, five, six, seven, eight. All right, let's see what's going on here. First of all, the 99th percentile is 1.4 seconds, which is better than what we had by almost 100 milliseconds. The next thing is the average, which is close to 700 milliseconds - and if you remember, the previous average was close to 800 milliseconds, so that's good. Then let's look at how many requests we can handle. The average, if you remember, was around 12 requests per second, and now it went up to 14 requests per second. And the 99th percentile is 20 requests per second, meaning that at peak we can handle 20 requests simultaneously, in contrast to the last run, where the peak was around 13 requests per second. So we can see that all of the measurements have improved, which is good. This is a standard run of Autocannon, and this is how I see the results and analyze them. Here I had a server where I knew what the problem was in advance, but you can follow this process: change your server, rerun Autocannon, look at those basic measurements, and see whether there are improvements. All right, cool.

So now let's go back to my presentation and continue with Autocannon. One thing that I would like to say about Autocannon, which is pretty cool, is that it actually uses worker threads. I hope that you're familiar with worker threads - it's a cool model that was introduced as experimental in Node 10 and became non-experimental in Node 12, and it enables us to run several event loops in parallel. This model is used in Autocannon, which is really cool. So if you want to work with several workers, you're able to do that with the -w flag. All right, so let's start integrating Autocannon with our JavaScript code. First of all, the basic example. My main goal is to write cool testing tools that can help me test my application. Here is a basic example: we're getting an instance of Autocannon, and look what we're doing - we're just starting the run, and at the end of the run, we're printing the results. Here I have 10 concurrent connections, I don't want to do HTTP pipelining - meaning it sends a request and waits for the response before it sends another one - and the duration here is 10 seconds.

9. Simulating Requests and Handling Responses#

Short description:

You can use async await to create an instance and wait for it to finish running. Autocannon has an API of client events, allowing you to handle specific responses in any way you want. Autocannon also provides the ability to send a variety of requests, such as posting a product and then posting a catalog using the generated ID. This simulates real-life scenarios.

And this is how you can simulate load. Here's another example with async await - most of us work with async await in modern Node.js code. With async await, you create an instance, wait for it to finish the run, and then print the results.

Another nice thing that you can do in your code with Autocannon is its API of client events, meaning that you can receive an event and do something with it. For example, every time you receive a specific response you can handle it in whatever way you want, which can be useful. So this is another API that's really nice to explore.

And the last thing: usually when you do performance testing you don't want to send the same request all the time - you would like to have a variety of requests. This is where Autocannon's requests feature comes into action. In the example you can see we have two requests. The first one is a POST request: we're posting a product, and from the response we're getting the ID that the server generated for us. Then, for that product, we're posting a catalog in the second request. So this is the flow: in the first request we post a product and the server generates an ID; in the second request we take that ID and post more data for it. This is a flow that can give you variety and multiple chained requests, and it's much closer to simulating real-life scenarios.

10. Tips for Testing Production Scenarios#

Short description:

Make sure the data you're testing with is similar to what you have in production. Mock your production flows as much as possible. Explore Autocannon to write your own performance tests.

All right, so we're really close to the end - just some really quick tips for testing production scenarios. First of all, something that I wasn't aware of at the beginning, and that improved my performance testing very much once I became aware of it: you have to make sure that the data you're testing on is similar to what you have in production. Meaning, if a collection in production has a specific size, make sure that the mock data you're testing with is the same size, and that the fields in your test database are identical to the fields that appear in your production database. Look at your production flows and try to mock them as much as you can. That was it. I hope that my lecture gave you some knowledge about performance testing, and I hope you'll explore Autocannon a little bit and try to write your own performance tests. So that was me - that's my Twitter handle - and thank you very much for listening.

QnA

Performance Testing and Autocannon#

Short description:

Many people have done performance testing, and it's becoming more important as we move online. Stress and load testing are crucial to understand server capacity and ensure it won't collapse under peak traffic. Autocannon provides flexibility in writing testing scenarios and is preferred over Gatling. If your computer lacks the power to load the server, consider deploying the software under test in the cloud and running Autocannon against it.

All right, so I see that a lot of people have done performance testing - even most of you. Yeah, I think it's a majority, a slight one, but still we are on a good path. Did you expect these results? Actually, no. I thought that most people don't do it, but it's good to know that most people are getting professional in this area. It's a hard one.

Oh, definitely. And it's very important, I think, especially that we moved online with so many devices and we had so many issues with the performance all the time, right?

Yeah, there are also a lot of types of performance testing: stress testing, load testing, peak testing. Yeah, I think that's a good point that you mentioned, because one of the questions I had was this: how many of these types do you think we can cover with a reasonable amount of work, without sacrificing releases because something isn't properly tested? Or which of them would be the most important to test for sure, from the types that you mentioned?

All of them should be tested. And by the way, in Autocannon you can write scripts that can help you do it very efficiently. For example, it's very important to do stress testing to understand how much load your server can handle. It's also really important to understand that if you have a specific amount of traffic and then suddenly you have a peak, your server won't collapse. In peaks there are a lot of other issues, like autoscaling - you have to make sure that you bring up other instances of your server quickly and that your users are getting responses fast. So if we're talking about prioritization, stress and load testing are, in my opinion at least, the tests you should start with, and then you should go to peak testing.

Yeah, I agree, agree. Now, yeah, we need a bit of prioritization because sometimes we just don't have the resources, that is why.

Okay, so, Bamfa was asking: how does Autocannon compare to Gatling regarding injection profiles - for example rampUsers - scenarios, or feeders? Sorry, it's a bit hard for me to read. I'm sorry, can you please repeat - how does Autocannon compare to what? Gatling, G-A-T-L-I-N-G. Regarding injection profiles, scenarios, or feeders.

Well, I have to say that usually I don't work with Gatling, because actually the good thing with Autocannon is that as a developer, I can write whatever I want regarding scenarios - the person who asked this question asked about how to build testing scenarios, and I'm just really comfortable doing it in code. So I actually prefer the flexibility of Autocannon: I can write any scenario that I want. I prefer it to Gatling.

I agree. To me it's simpler and gives better results, at least in my personal experience.

Yeah, Mark has another question. If my computer does not have enough power to challenge the server, what should I do? May I orchestrate a swarm of computers to run Autocannon then? He was saying that his computer doesn't have enough power to run Autocannon, if I understand correctly.

Deploying in the Cloud and Performance Testing#

Short description:

You can deploy the software being tested with Autocannon in the cloud. Integrating Autocannon with end-to-end tests in Cypress is possible. Running performance testing once a day is preferable to running it on every CI run, balancing quality and time invested.

To challenge the server. Yeah, definitely - he can build a cluster in the cloud and deploy the software he's testing with Autocannon over there. Oh, good to know. Good to know.

We do have another one from Mandalore QAM, and the question is: can you integrate Autocannon with end-to-end tests in Cypress? So let me understand it correctly - with what? Cypress. Oh yeah, sorry. Actually, I haven't tried to integrate it with Cypress, to be honest, but you can do code integration everywhere - it shouldn't be hard. Yeah, I agree. And if they do try it, maybe they can let us know afterwards.

And maybe one last question, I think we have time. Are you integrating it into a CI, or running performance testing continuously, such as once a day or on every deployment? All right, so I personally support the approach of running performance testing once a day, let's say, and not on every CI run, because I think it slows down the development a little. Of course, it has the downside of not knowing on which specific commit things were broken, but I think development is faster when you're running once a day. Oh, that's an interesting opinion, because these days everything is pushed to be DevOps, everything integrated into the continuous pipeline. And maybe we should just- The thing is that this task is long. At least at XM Cyber, we have a huge CI/CD that runs a lot of things, and it's actually very efficient and finds a lot of bugs. But this specific task is long. Yeah, so it would mean that a developer would have to wait for a few hours - it's adding a few hours to everything. It's long. Yeah, definitely. And you have to balance quality and the time invested. Thanks so much, Tamar, for joining us. Okay, great. Thanks so much for being here, and thanks for the great talk. It was nice having you around.

Tamar Twena-Stern
36 min
19 Nov, 2021

