Yeah, that leads into this next question from the audience. You may have touched on this a bit already: there's overhead involved in diagnosing CI failures, even for a single use case in end-to-end testing. With the k6 backend you have access to the CI report and you can see failures in checks and thresholds, but you may not have a CI diagnosis tool like Replay or the Cypress dashboard. When xk6-browser tests fail in CI, how can you make diagnosing those failures easier?
That's a really great question, and I don't have a complete answer yet because xk6-browser is still at a very early beta stage. But one of the things I know the team wants to achieve in the future builds on a feature we have in k6 Cloud called Performance Insights. It can tell you which areas have the biggest bottlenecks. For example, if your CPU utilization is very high, Performance Insights can suggest adding some think time, some sleeps, to your test, because again, we want to simulate what's happening in production as closely as possible. We're still doing a lot of beta testing of this in k6 Cloud, so it isn't open to the wider public yet, but it can give you a diagnosis of which areas have real performance bottlenecks. We already have that for the existing k6 features: Performance Insights tells you, for example, which servers degraded because of the amount of load you ran. We want to have the same sort of feature for xk6-browser, but for now it isn't fully available to the public.
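As a quick illustration of the "think time" idea mentioned above, here is a minimal k6 script sketch; the endpoint URL, virtual-user count, and sleep duration are placeholders chosen for the example, not values from the discussion:

```javascript
import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  vus: 10,          // 10 virtual users (placeholder)
  duration: '30s',  // run for 30 seconds (placeholder)
};

export default function () {
  // Placeholder endpoint used only for illustration
  http.get('https://test.k6.io/');

  // "Think time": pause each virtual user for 1 second between iterations,
  // which more closely simulates real users and eases pressure on the system
  sleep(1);
}
```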
Well, it sounds really useful to be able to get those kinds of insights from a CI environment; understanding what's happening in those spaces is something we all struggle with. Another question here from the audience is about the resulting output: is there any way to configure alerts when certain metrics increase?
Yes, you can use thresholds for that. Let's say one of your SLAs is that a specific response time should be less than, say, 500 milliseconds at the 95th percentile. In k6 we have the concept of a threshold, which is a pass/fail criterion: if the threshold fails, your test run is reported as a failure. In terms of notifications, I think there's actually an extension that one of our k6 contributors has written, I'd have to come back with the actual name, because I've seen there's a list of extensions in our xk6 ecosystem you can use to send notifications to whichever platform you want. But the way to do it with k6 is to configure a threshold, and after the test has finished running, that threshold will tell you whether the criteria were met or not.
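For concreteness, here's a minimal sketch of the kind of threshold described above, using k6's standard options object; the p(95) < 500 ms value mirrors the SLA example, and the URL and error-rate threshold are placeholders:

```javascript
import http from 'k6/http';

export const options = {
  thresholds: {
    // Fail the test run if the 95th-percentile request duration
    // reaches 500 ms (matches the SLA example above)
    http_req_duration: ['p(95)<500'],
    // Fail if more than 1% of requests return an error (placeholder value)
    http_req_failed: ['rate<0.01'],
  },
};

export default function () {
  http.get('https://test.k6.io/'); // placeholder endpoint
}
```

When a threshold fails, k6 exits with a non-zero exit code, which is what lets a CI job mark the build as failed.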
Okay. And that probably helps with establishing your metrics too, because you assign those thresholds in advance and everyone's on the same page about what acceptable performance is. That's always a good discussion to have. Another question here: can you test Kafka producers and consumers with k6? I believe so; we do have a k6 extension if you want to run load tests against Kafka producers and consumers.
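As a rough sketch only: the community xk6-kafka extension exposes a producer API along these lines, but the import path, class names, and option names below are assumptions that can differ between extension versions, so check the extension's README before relying on them:

```javascript
// Assumes a k6 binary built with the xk6-kafka extension (e.g. via xk6 build).
// The module path, Writer class, and options below are assumptions based on
// the extension's documented usage and may vary by version.
import { Writer } from 'k6/x/kafka';

const writer = new Writer({
  brokers: ['localhost:9092'], // placeholder broker address
  topic: 'my-topic',           // placeholder topic
});

export default function () {
  // Produce a single JSON-encoded message per iteration (illustrative payload)
  writer.produce({
    messages: [{ value: JSON.stringify({ hello: 'world' }) }],
  });
}

export function teardown() {
  writer.close(); // release the connection when the test finishes
}
```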