Performance Testing Is Hard. Can AI Help?


The top-selling Android phone in 2023 is 8x less powerful than high-end devices like the Pixel 8. This gap highlights a common blind spot among Android devs: we often use powerful devices and don't necessarily spot performance issues in our apps.

However, traditional performance testing is often seen as a cumbersome and time-consuming task, leading many to skip it in favor of pushing out new features.
But what if there was a way to simplify this process, making performance testing as straightforward as running a single command?

This talk presents how we can make this dream a reality: by using AI to automatically explore our app and combine it with the proper performance measuring tools, we can essentially get automated quality audits of our apps!

This talk was presented at React Advanced 2024. Check out the latest edition of this React conference.

FAQ

What is Lighthouse?
Lighthouse is a tool within Google Chrome that provides performance reports for web pages. It is used to measure the performance of web apps by generating a score based on various criteria.

What is Flashlight?
Flashlight is a tool developed to measure the performance of mobile apps, similar to how Lighthouse measures web apps. It allows users to upload an Android app to get a performance score, aiming to be a "mobile Lighthouse."

How can AI help with performance testing?
AI can automate the exploration of mobile apps for performance testing, reducing the manual effort required. It can perform tasks like logging in and navigating the app, while tools like Flashlight measure performance during the exploration.

Why is end-to-end testing challenging?
End-to-end tests can be disrupted by elements like ads or cookie banners, and they take time and effort to write and maintain.

How can AI improve end-to-end testing?
AI can automate the exploration of apps, potentially reducing the effort needed to write and maintain end-to-end tests. It can adapt to elements like ads or pop-ups and continue testing without human intervention.

What are the limitations of AI-driven testing?
The AI may loop or miss tasks, requiring an "AI manager" to oversee and correct its actions. The technology is still experimental and not fully reliable for all scenarios.

Is Flashlight reliable enough for a CI pipeline?
The non-AI aspects of Flashlight are reliable and can be integrated into a CI pipeline for performance testing. The AI features are still experimental and being developed for future inclusion.

What's next?
Future developments include improving AI reliability for automated app exploration and expanding capabilities to test other aspects like accessibility, making performance testing more comprehensive and automated.

Alexandre Moureaux
30 min
25 Oct, 2024

Video Summary and Transcription
The speaker shares a story from his time as a junior developer working with a client who had strict technical standards: they required high performance for web and mobile apps but lacked a measurable way to determine performance. To address this, he created a tool called Flashlight, similar to Lighthouse, that measures mobile app performance. Flashlight provides performance scores and can run end-to-end tests to explore the app. His dream is an easy way to determine app performance without writing extensive tests, so he proposes using AI to automate the tedious task of exploring an app. The repository for the exploration tool is open source and is shared at the end of the talk. The tool takes a purpose as input, such as logging in and performing a search. It helps identify performance issues, suggests using React DevTools for debugging, and recommends using FlashList instead of FlatList in React Native. The AI can continue after errors occur, and its completion of goals can be checked by adding a human or another AI agent manager into the loop. AI in end-to-end testing is gaining popularity, with companies launching AI-based testing tools. The goal is to automate app exploration using AI and run quality checks for performance, accessibility, and security. AI can help QA engineers by automating repetitive tasks and allowing them to focus on more important aspects. The cost of performing one test case using the GPT-4o mini model is usually less than one cent. AI testing with Flashlight is still experimental, but the non-AI part is reliable. Future improvements include running on CI and easy iteration on multiple projects.

1. Introduction to Flashlight Tool

Short description:

Hi everyone, I want to share a story about my experience as a junior developer working with a client who had strict technical standards. They required high performance for web and mobile apps, but lacked a measurable way to determine performance. To address this, I created a tool called Flashlight, similar to Lighthouse, that measures mobile app performance. Flashlight provides performance scores and can run end-to-end tests to explore the app. Although we have made significant progress in mobile performance testing, there is still room for improvement, especially in the niche area of end-to-end testing.

Hi everyone, just want to quickly point out that if you're not a React Native developer, actually a lot of ideas in this talk will still apply to you, so please don't leave the room yet.

But, all right, I thought I would start with a short story. Several years ago, when I was a junior developer myself with, well, probably more hair on my head, I worked with a client who had really strong technical expertise. I thought, you know, I'm going to learn so much from that client. They had technical standards, technical requirements, quality standards for everything on the project. And on the subject of performance, for example: we were building a web app and a mobile app for them.

On web, they asked us to have a 90-plus Lighthouse score on desktop for every web page. I'm sure you're probably familiar with Lighthouse, but just in case: Lighthouse is a tool inside Google Chrome. You can go on any web page, and in just a few clicks you get a performance report for it. And I think, you know, it's pretty cool. So being a young mobile app developer myself, already interested in performance, I was really curious what their standard would be for mobile performance. And they gave us the requirement, and they just said, oh, the app should not lag, which, well, was a bit disappointing. And I thought, OK, we don't actually have a good, measurable, scientific way to know whether our apps have good performance or not.

And so I'm Alex, and this is how I became the app performance expert at Theodo Apps, previously known as BAM. I built a tool called Flashlight, which intends to be a mobile Lighthouse; the joke being that a flashlight is kind of a mobile lighthouse. You can measure performance locally, or you can also go to the website, flashlight.dev, upload an Android app, and you get a performance score out of it. All right. So I thought that was already pretty cool and getting really close to the Lighthouse-for-mobile-apps dream. But there was one major caveat. By default, you would only get the performance score of your app start, which means that if the first page in your app is a login page, you get the performance score for the login page, and, well, that's not fantastic.

So I thought, OK, we can enhance that with end-to-end tests. There is a way on the website, or even with a CLI, to pass an end-to-end test to Flashlight in the cloud. It actually runs on real low-end Android devices. It would run those tests several times and explore the app thanks to the end-to-end test that you wrote. And we went as far as actually building performance regression dashboards for some of our projects, being able to detect performance regressions and everything. And so that's, well, that's a major breakthrough. There is still lots of room for improvement, but it feels like we've made end-to-end mobile performance testing as easy as possible. So what's the problem then? Why am I speaking to you now? What's the catch? And I think, well, the problem is probably with end-to-end, because end-to-end is still kind of niche. You know the pyramid of tests.

2. Challenges of End-to-End Testing

Short description:

End-to-end mobile performance testing is a niche area. Although writing the tests can be simple, there are several issues. Tests can easily break, especially when dealing with elements like ads or cookie banners. Maintaining the tests and investing time in writing them can be a challenge. The speaker's dream is to have an easy way to determine app performance without extensive testing.

It's usually, you know, the stuff that you don't do a lot. And so end-to-end mobile performance testing, this is really niche. And writing an end-to-end test for mobile can actually be as simple as this nowadays. This is an example of a Maestro test. In just a few minutes you can really easily wrap up an end-to-end test. It's really cool. We've made a lot of progress. And yet there are several issues with that. For example, the end-to-end test can break easily. I've worked with some projects that had ads popping up on the home feed, just randomly. And if you have an end-to-end test for that, you need to account for the ad potentially popping up and need to close the ad. Or if you have a cookie banner or whatever. And that ends up being kind of annoying, and kind of tricky to maintain. And also, to be honest, my whole dream in the first place was to upload my app and have something telling me whether my app has good performance or not. Even if it's really simple to write the end-to-end tests, you still need time to write them. Even if it's just a few minutes, it kind of adds up, and you need to maintain them and everything. And so I'm still not really happy with where we are.

3. Automating App Exploration with AI

Short description:

Using AI to automate the tedious task of exploring an app is the speaker's proposal. Despite the risks involved, the speaker plans to give a live demo of using AI to enhance app exploration. The demo involves a self-coded app that displays cute animal pictures from Unsplash.

So if we sum up, what we've got is: we have an app, and we're able to easily collect performance data with whatever tool, in our case Flashlight. We're able to detect issues. For React Native apps, usually that's going to be, oh, you have low FPS or high JS thread usage. Those are some issues. But to make the most of it, we need somehow to explore the app. We've always done it either manually, which of course requires human effort and, well, we're too lazy for that, or with an end-to-end test, which, as I've just been saying, is kind of annoying. And so the question is, can we do better?

So of course it's 2024 and AI is just basically everywhere, right? By the way, show of hands, who's excited about Gen AI just happening everywhere? Wow, not a lot of people. Who's kind of skeptical about Gen AI happening everywhere? Oh, yeah. Lots more people. Who just hates Gen AI? Well, I really thank them for being in the audience, but look, one person. Thank you for being here. Hopefully you will still like the talk. By the way, just for fun, I asked ChatGPT to generate an "AI, AI everywhere" meme. And of course it gave me a very DALL-E-like picture with "AI, AI everywhere", well, being everywhere. But what I thought was really funny is that there is this guy in the middle who really missed the prompt and just yelled out, coffee, which I found quite relatable. But anyway, ideally we use AI or Gen AI to automate the annoying stuff, right? The stuff we don't really want to do, the stuff that takes a lot of our time, and we just want to accelerate what we've been doing. So here, of course, the annoying stuff is exploring the app. Can we do it with AI? And this is the point where I'm showing my tremendous keynote presentation skills, because you cannot spell Flashlight without AI. And so I thought I would give a live demo of what this can look like. And by the way, this is a very risky idea, because live demos in general are kind of risky, but a live demo involving AI, which is something that is not totally deterministic... it went really well in my rehearsal, so hopefully it still goes well, but we'll see. Just managing your expectations here. So I have a really stupid app that I coded myself, not really well designed, but essentially showing cute animal pictures from Unsplash. Can anyone see what I'm typing? Maybe I can actually just zoom in a bit more. All right. And so what I can do is...

4. Using AI to Explore and Measure App Performance

Short description:

The speaker explains that the repository for the app exploration tool is open source and will be shared later. The tool requires commands to be run, such as logging in and performing a search. The speaker demonstrates the tool by searching for corgis and highlighting its ability to automate exploration while collecting performance data.

So the repository, by the way, is open source, and I will share the link at the end. But at the moment, the way to run it is to just run a command, and it's going to ask for a purpose. So I just need to tell it what needs to be done to explore the app.

So first, it needs to log in, right? So I will tell it to log in with my credentials, which are ilikecorgies@gmail.com, and my password is this. There's a search feature that I just implemented, so I want to test the performance of the search feature. So I will just tell it: go to the search tab, search for... Okay, let's yell out your favorite animal. What do you want to search for? What was that? Oh, okay. Yeah, well, everyone said corgis. So let's go with corgis. Search for corgis. Scroll down the results, for example. And like an image. And let's see what it does. So hopefully the internet connection is good.

Okay, it seems to be starting. So I did something kind of crazy: I gave the AI full control over my phone. At some point it even started to tweet stuff, like totally random stuff. It was kind of crazy. But anyway, the really cool thing about this is that while it's actually exploring, you don't really see it there, but Flashlight, the performance measuring tool, is actually taking performance measures. We will see it at the end, but we get a performance audit of the app while we get automated exploration. So, cookie banner. This would have broken my end-to-end test. And here the AI is just like, okay, I'm just gonna tap the okay button.

5. Exploration Process and Report

Short description:

The process continues without issues, albeit a bit slowly due to the requests to OpenAI. The images are not loading properly, but the exploration is still ongoing. Despite a few hiccups, the tool provides a comprehensive report at the end, including a video of the exploration and any issues encountered, such as scrolling and responsive design.

No issues with that. So it's continuing. So far it's going well. It's a bit slow, but I mean, it's making requests to OpenAI, so that's probably why.

There you go. It's searching. But this is my cue to just drink some water. Ah, the images are not loading. That's kind of sad. It scrolled down like a madman, actually. Oh, there you go. Oh, really cute. Oh, wow. Oh, it shared instead of liking. That never happens, but of course, live demos being live demos, I guess. So I'm wondering if it's going to be able to... oh no, it's saying that it liked the image. Oh, that's kind of sad.

All right. I don't know why it did that. But the cool thing is that at the end of it, we get a nice report. This has been designed with v0, by Vercel, actually, because, well, we're on the subject of AI, we might as well do everything with AI. And the cool thing is, you have the video of everything that's been happening, and you see everything that went okay, like tapping and typing the password and everything. But you also see the stuff that had some issues. So for example, scrolling down on the results. I should probably zoom in. Whoops. Scrolling down on the results. Oh, my responsive design, we talked about that a little bit in the last talk, but it was not that great.

6. Exploring with AI and Dealing with Tricky Issues

Short description:

The AI tool helps identify performance issues, suggests using React DevTools for debugging, and recommends using FlashList instead of FlatList in React Native. OpenAI lets you enhance the AI's capabilities by declaring JS functions in a JSON schema that the model can ask to call. By providing goals and interacting with the app's UI elements, the AI can accomplish tasks like tapping buttons. While powerful, some issues may arise, such as the AI suggesting non-existent functions, which can be managed by adding appropriate messages.

We see, for example, that scrolling down on the results, we had some massive performance issues, the JS thread averaging, you know, lots of CPU usage. So here the tool is just telling us, oh, you should consider using React DevTools to debug. Sure. Also, since it's a scrollable view, it's just telling us, well, you know, React Native, you should probably use FlashList instead of FlatList. So all in all, we got the AI to automatically explore, kind of messed up on liking the image, but we get, at the end, automated performance audit of our app.
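As a hedged illustration of that last recommendation, here is a minimal sketch of swapping FlatList for FlashList in a React Native list. The Animal type, the AnimalCard component, and the estimatedItemSize value are hypothetical placeholders, not taken from the talk.

```tsx
import React from "react";
// FlashList is Shopify's drop-in FlatList replacement with cell recycling
import { FlashList } from "@shopify/flash-list";

type Animal = { id: string; imageUrl: string };

// Hypothetical item component standing in for the talk's animal picture cell
declare function AnimalCard(props: { animal: Animal }): React.ReactElement;

export function AnimalFeed({ animals }: { animals: Animal[] }) {
  return (
    <FlashList
      data={animals}
      keyExtractor={(item) => item.id}
      // Rough item height in px; FlashList uses it to lay out cells before measuring them
      estimatedItemSize={220}
      renderItem={({ item }) => <AnimalCard animal={item} />}
    />
  );
}
```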

So going back to the slides, I had some videos prepared in case everything went wrong. There you go. I just want to talk about how it works. You've probably used ChatGPT or Copilot; well, you're probably using them, come on. But you can also use the OpenAI APIs, or any other model, but I've used OpenAI for this. And the gist of the OpenAI API is that you send an array of messages to the API and it replies with something. What I actually didn't know about is that you can get an answer telling you that the AI wants to call a JS function that you wrote, which is pretty cool. Basically, you pass a JSON schema describing the functions you want the AI to be able to call, and you can basically enhance the AI with capabilities.
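As a rough sketch of what that function-calling setup can look like with the OpenAI Node SDK: the tool names, prompt wording, and gpt-4o-mini model choice below are assumptions for illustration, not the talk's actual code.

```ts
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Declare the capabilities we implement ourselves; the model can only ask us to call these.
const tools: OpenAI.Chat.Completions.ChatCompletionTool[] = [
  {
    type: "function",
    function: {
      name: "tap",
      description: "Tap the screen at the given coordinates",
      parameters: {
        type: "object",
        properties: { x: { type: "number" }, y: { type: "number" } },
        required: ["x", "y"],
      },
    },
  },
  // scroll and type would be declared the same way
];

async function nextAction(viewHierarchy: string) {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: "You are exploring an Android app. Use the provided tools." },
      { role: "user", content: `Goal: log in, then search for corgis.\nView hierarchy:\n${viewHierarchy}` },
    ],
    tools,
  });

  // The reply names one of *our* functions plus JSON arguments; we run it ourselves.
  const call = completion.choices[0].message.tool_calls?.[0];
  if (call) {
    console.log(call.function.name, JSON.parse(call.function.arguments));
  }
}
```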

So for example, I have a function called tap, a function called scroll, a function called type that I implemented myself to interact with the phone, and I tell the AI: hey, explore the app, call one of those. In the beginning, essentially, if we have our login screen, for example, with the two inputs filled, I print out a hierarchy of the view, which you can do quite easily on Android. I could also take screenshots, but I chose to do that because it was a bit simpler. And I have the bounds for each view. For the login button, for example, I know that it's clickable, and I know the accessibility label, which is login, because, well, the text is login. And so I send that to the AI. I say, well, these are your goals. What do you want to do? And the AI just says, OK, I need to click the login button, so I'll just tap on those coordinates. And then we repeat until the AI has accomplished its goal. So fairly simple in itself. And it shows that the OpenAI APIs are actually really powerful and really easy to use; you can do so much cool stuff with them.
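Continuing the sketch above, a minimal version of that explore-and-repeat loop might look like this. The dumpViewHierarchy, tap, scroll, and typeText helpers, the step limit, and the prompt wording are all hypothetical assumptions, not the project's actual code.

```ts
// Hypothetical device helpers (e.g. thin wrappers around adb); not real library calls.
declare function dumpViewHierarchy(): Promise<string>;
declare function tap(x: number, y: number): Promise<void>;
declare function scroll(direction: "up" | "down"): Promise<void>;
declare function typeText(text: string): Promise<void>;

async function explore(goal: string) {
  const messages: OpenAI.Chat.Completions.ChatCompletionMessageParam[] = [
    { role: "system", content: "Explore the app by calling tap, scroll or type. Stop when the goal is reached." },
    { role: "user", content: `Goal: ${goal}` },
  ];

  for (let step = 0; step < 30; step++) {
    // On Android, something like `adb shell uiautomator dump` gives bounds and labels.
    messages.push({ role: "user", content: `Current view hierarchy:\n${await dumpViewHierarchy()}` });

    const reply = await openai.chat.completions.create({ model: "gpt-4o-mini", messages, tools });
    const message = reply.choices[0].message;
    messages.push(message);

    const call = message.tool_calls?.[0];
    if (!call) break; // no tool call: the model considers the goal accomplished

    const args = JSON.parse(call.function.arguments);
    if (call.function.name === "tap") await tap(args.x, args.y);
    else if (call.function.name === "scroll") await scroll(args.direction);
    else if (call.function.name === "type") await typeText(args.text);

    // Every tool call needs a reply so the conversation stays valid.
    messages.push({ role: "tool", tool_call_id: call.id, content: "done" });
  }
}
```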

Well, it's hard to talk about AI without talking about hallucinations, though. So I'm just going to talk about some tricky issues that I had to deal with. For example, it happened to me that I was telling the AI, you know, call either tap, scroll or type, and the AI was like, I'll call the function called inputText, which, of course, does not exist. But since you're talking with an AI, just like ChatGPT, and you're managing the array of messages yourself, you can just add a message saying, you know, that function doesn't exist.

7. AI Continuation and End-to-End Testing

Short description:

The AI can continue after errors occur, and controlling the AI's completion of goals can be achieved by adding a human or another AI agent manager into the loop. AI in end-to-end testing is gaining popularity, with companies launching AI-based testing tools. The goal is to automate app exploration using AI and ensure quality checks for performance, accessibility, and security. The ultimate aim is to have a website that can provide comprehensive feedback on app performance and issues by simply dragging and dropping the app.

So try again. And the AI will just go, oh, yeah, sure, OK, sorry, and it can continue, which is pretty cool. So in your script, the end user doesn't even see that, or you could add a message like, oh, I've messed up, sorry. Then there's a tricky one that happened to me quite a few times.
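Continuing the loop sketch above, handling a hallucinated function name can be as small as answering the tool call with an error message instead of throwing; this is an assumed implementation, not the talk's code.

```ts
const KNOWN_TOOLS = new Set(["tap", "scroll", "type"]);

function rejectUnknownToolCall(
  messages: OpenAI.Chat.Completions.ChatCompletionMessageParam[],
  call: OpenAI.Chat.Completions.ChatCompletionMessageToolCall,
): boolean {
  if (KNOWN_TOOLS.has(call.function.name)) return false;
  // Reply to the bogus call so the model can apologize and pick a real tool on the next turn.
  messages.push({
    role: "tool",
    tool_call_id: call.id,
    content: `There is no function called "${call.function.name}". Use tap, scroll or type and try again.`,
  });
  return true;
}
```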

So I would say to the AI, OK, log in, search for cats and like an image, and the AI agent would log in and then say, OK, I'm done, without searching for cats or anything. This is apparently an issue that happens a lot in the AI world: it's hard to control when the AI stops and to really assess that it has accomplished its goals. So one easy way to deal with this is to add a human into the loop, and the human would be the one saying, oh, you're not done, continue. But to fully automate things, you can replace the human with another AI and basically have an AI agent manager. You make another call to the OpenAI APIs to say, OK, these are all the actions that have been done, these were the goals, do you think the other guy is done or not? And the agent manager, the other AI, can just add a message into the loop and say, no, you still need to do that. And: oh, yeah, sure, sorry. And the other guy continues. Right.
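A hedged sketch of what such an agent manager call could look like, reusing the openai client from above; the JSON answer format, the gpt-4o model choice, and the prompt wording are assumptions made for illustration.

```ts
async function managerSaysDone(goals: string, actionsSoFar: string[]) {
  const reply = await openai.chat.completions.create({
    model: "gpt-4o", // a bigger model, called once per check
    messages: [
      {
        role: "system",
        content: "You supervise an agent exploring a mobile app. Decide whether it has accomplished all of its goals.",
      },
      {
        role: "user",
        content:
          `Goals:\n${goals}\n\nActions performed so far:\n${actionsSoFar.join("\n")}\n\n` +
          `Answer with JSON: {"done": boolean, "feedback": string}`,
      },
    ],
    response_format: { type: "json_object" },
  });

  // If not done, the feedback can be appended to the explorer's messages so it keeps going.
  return JSON.parse(reply.choices[0].message.content ?? '{"done": false, "feedback": ""}') as {
    done: boolean;
    feedback: string;
  };
}
```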

But between me submitting my talk and today, even more stuff happened, and many people have realized that AI in end-to-end tests is big. Several companies, just to name a few, and I've heard of more by talking to people, have launched products doing AI in end-to-end tests, for the reasons I previously mentioned. Some of those are really, really expensive, because I guess AI is where the money is, but some, like AppAgent, are actually open source. So I want to try that out. But yeah, it's something that is happening a lot right now.

So the whole idea behind my talk is: if we can automate the app exploration with AI, and if we have quality checks for performance, accessibility, security, all the stuff that we really need to check but don't really want to do ourselves, with tools like Flashlight or even with AI, for example accessibility with AI, that sounds like a really good use case: check if this label is correct, or understandable, stuff like that. That's basically the whole idea. And the end goal, and we're not there yet, but it's really close, is being able to drag and drop your app and automatically get full exploration, with the website just telling you, yeah, performance is good, or no, you had an issue when I clicked there, just like I was showing on the report.

So that's it for me. Thank you a lot for your attention. You have a QR code with a link to the repository. It's a bit, you know, very experimental work in progress.

QnA

The Role of AI in Jobs and Automation

Short description:

AI is not about making people unemployed but rather helping them in their jobs. Using AI can help QA engineers by automating repetitive tasks and allowing them to focus on more important aspects. However, a human's understanding of the application being tested is still necessary. The goal is to use AI as a tool, not as a replacement for humans.

So, I mean, you can tell me that my code is messed up if you want to. I know that it's a bit... You can also leave some feedback, and there are slides and resources everywhere. And that's it. Thank you. Thank you. While people are rating your talk on Slido, I want to ask the first question. How many people do you want to make unemployed? Is this your goal in life?

That's a really interesting question actually, because the way I look at it, it's not about making people unemployed. I mean, it's the whole thing about Gen AI. I myself sometimes find myself kind of scared or anxious about Gen AI, like it even taking over my job. But I think the great way to look at AI is to really see it as something that can help people in their jobs. So if we talk about QA engineers, for example, I have a really good friend who's a QA engineer, and he's the one who talked to me about AppAgent. And I think it's a good way to get the AI to do some annoying stuff that he doesn't need to do and to have him focus on something else, or even to use AI to accelerate his own job and be more efficient. So hopefully no one. Yeah, I can imagine it. You can do more in less time, but you still need a human. You don't need to be able to write code, but you need to be able to understand the application that you're testing and to be able to give step-by-step instructions: you need to log in, click there, go to that page, like this thing, share this thing. You still need to be able to explain that, right? Yeah, I mean, well, yes. Then again, I know that AppAgent, for example, they have... That was my idea in the first place, but it's a bit harder to do: have something doing automatic exploration where you just let the AI do whatever it wants. That's way harder, because with the AI you need to give clear intentions, and sometimes it's going to be, like, clicking on every button maybe three times and not realizing, oh, you know, I've clicked this already lots of times. But I mean, yeah, I agree with the whole point of having a human in the loop to use that tool, and of using AI as a tool. Not AI replacing the human, but AI being a tool is the goal here. Yeah, we're safe for the next, like, three years. Not yet.

AI App Testing and Cost

Short description:

I have tried to get the AI to test all the features of the app, but it tends to loop and repeat actions. To address this, another manager AI can be added to stop the looping. The cost of performing one test case using the GPT-4o mini model is usually less than one cent, making it cheaper than a QA engineer writing the test. However, pricing such tools by comparing them to a QA engineer's salary may not be the best approach from a company's perspective. Finally, if the AI manager fails, there is the possibility of implementing an AI manager manager.

All right, audience question time. Anonymous is asking: have you attempted to get the AI to crawl the app and test all its features? So that's basically already answered. Yeah, I did, and that was the demo I wanted to share originally. What's really hard with this is that the AI tends to loop a lot. At some point, and apparently this is something that happens a lot with Gen AI, it will start to just do the same thing over and over. For example, in my app here, it would click on a profile picture, go to the screen, like, then go back, then click on the next profile picture, and so on. And apparently the way to deal with that is to add another manager into the mix and have another AI saying, you're looping, stop it. So, kind of those interactions, but yeah.

Cool, thanks. Next question is, how many tokens did it cost to perform one test case? Oh, yeah, good question. I'm using what is called the GPT-4o mini model, actually, which is really cheap and really efficient. So usually I think it's less than one cent per run, kind of cheap. Yeah, so that's cheaper than a QA engineer writing the test, right? Yeah. I mean, people love that; that's what I was talking about. A lot of people developing a product like that are charging sometimes $1,500 a month at least, and they're saying, oh, it's cheaper than a QA engineer. So that's how they justify charging more. But I don't know if this is the right way to look at it. As a company, I would say that's too high, because you need a lower barrier to entry. But from a team perspective, it would make more sense, I would say. Right. We touched a little bit on this, but the next question is: what if the AI manager fails? Doesn't it have an AI manager manager? Laughing emoji, laughing emoji. That's actually a really good question, because the first time I implemented it, the AI manager would just go, yep, yeah, okay, that's right, it's logged in, we're done. I had asked for the same scenario.
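For intuition, here is a rough back-of-the-envelope check of that under-one-cent claim. The per-step token counts and the GPT-4o mini prices used here are assumptions for illustration, not figures from the talk.

```ts
// All numbers below are assumptions, not measurements from the talk.
const INPUT_PRICE_PER_TOKEN = 0.15 / 1_000_000;  // assumed ~$0.15 per 1M input tokens
const OUTPUT_PRICE_PER_TOKEN = 0.60 / 1_000_000; // assumed ~$0.60 per 1M output tokens

const steps = 10;                 // assumed number of model calls per test case
const inputTokensPerStep = 2_000; // view hierarchy + goals + conversation history
const outputTokensPerStep = 100;  // roughly one tool call per step

const costUsd =
  steps *
  (inputTokensPerStep * INPUT_PRICE_PER_TOKEN + outputTokensPerStep * OUTPUT_PRICE_PER_TOKEN);

console.log(costUsd.toFixed(4)); // ≈ 0.0036 USD, i.e. well under one cent
```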

AI in Prompts and Reliability

Short description:

When prompted, the AI can sometimes forget certain actions. To address this, the speaker used a bigger model and revised the prompt to break down objectives and check them step by step. AI testing with Flashlight is still experimental, but the non-AI part is reliable. Competing tools for testing web apps include Lighthouse.

And what was really funny is that in my prompt, like I did now, I just told it to scroll down the results and like an image. And at some point it did not scroll down. So it asked the manager, and the manager was like, yeah, okay, which goals did you have? The goals were to log in, then go to the search tab, and then like an image. And it completely forgot about the scroll down, I'm guessing because it was at the end. So I did two things. I used GPT-4o, the bigger model, for the manager, thinking, oh, you know, at least it's going to be a bit smarter, and I'm just making one call to it. And I also revised the prompt: if you ask the AI to go step by step in its thinking (if you've used o1, for example, this is basically what o1 does without you asking for it), so just asking it, okay, this was the prompt, break down all the objectives and check step by step whether each one is done or not. But yeah, what's hard with AI is that it sometimes takes a lot of trials to get things right. I'm not personally used to this lack of determinism, and I'm guessing a lot of developers aren't either, so it's kind of a new science. Yeah, in my experience with prompting, you need to talk to it like it's a really smart person, but it understands input like a five-year-old, basically. Yeah, maybe seven or eight now. We'll get to 20-plus later.
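For concreteness, that prompt revision could look like the difference between these two manager instructions; the wording is assumed, not the actual prompts from the project.

```ts
// First attempt (assumed wording): too loose, the manager waved goals like "scroll down" through.
const NAIVE_MANAGER_PROMPT = "Decide whether the agent has accomplished all of its goals.";

// Revised prompt: force a step-by-step breakdown before judging completion.
const STEP_BY_STEP_MANAGER_PROMPT =
  "Here are the original goals. Break them down into individual objectives, " +
  "then check, step by step, whether each objective has been completed by the actions listed. " +
  "Only answer done: true if every single objective is covered.";
```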

Next question is from Tiago. Is Flashlight reliable enough to be added into a pipeline? So at the moment, well, it depends on the question. If we're really talking about what I showed here, the AI stuff, it's still kind of experimental. And that's really what we want to do next: add it to a CI pipeline and basically be able to just drag and drop and get automated exploration. At the moment, it's not reliable enough. The non-AI part is totally reliable, and we have been using it for, well, one or two years now. So yeah, I'm guessing the question was about AI, but...

Oh, they're asking about competitors. Do you know any tools similar to Flashlight for testing web apps? That's a good question. I should know more, actually. Lighthouse.

Lighthouse and Controlling AI Actions

Short description:

Lighthouse allows end-to-end tests, but with its own syntax. Controlling AI actions, such as scroll distance, can help standardize testing. Future improvements include running on CI and easy iteration on multiple projects.

Lighthouse. Yeah, Lighthouse actually allows you, I think, to run end-to-end tests, but they have their own syntax. I think there was something called PerfTest or something like that, that also allowed you to do that. Yeah, to be honest, I don't really know. But we can probably just ask ChatGPT. That's where we are, yeah.

Nathaniel. Hey. With AI comes a lot of noise. Have you found any way to standardize AI actions? For example, the AI scroll distance in your example.

Yeah, that's a good question. At the moment, I give the AI a lot of freedom. It can basically tap on any coordinates it wants, and it can basically scroll any way it wants. I think that might not be the best idea, actually, and it might be easier for me to just control that. So, for example, if I want to test scrolling performance and standardize it a bit more, I could say: your view is there, so I offer you the action to scroll that view, but it's me controlling the speed and the parameters, like I'm going to scroll from 80% to 20% of the screen in 50 milliseconds or something like that. Basically, this is a swipe gesture. So, at the moment, the AI has full control over that, but I'm thinking it would probably be best to change it, actually. All right, I don't know if that answered the question. Kinda. It depends. All right, next and last question. The demo looks very interesting. Are there any improvements or new features you're considering that might make it even more time efficient? So they want a sneak peek into the future, basically. Yeah. Well, as I was saying, I want to run that on CI and be able to easily iterate on a lot of projects and, yeah, test it more.
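A minimal sketch of such a standardized scroll, assuming an Android device driven through adb; the screen dimensions and exact swipe parameters are placeholder assumptions.

```ts
import { execSync } from "node:child_process";

// Instead of letting the model pick arbitrary gestures, expose one fixed swipe
// (e.g. from 80% to 20% of the screen height over a fixed duration) so every
// performance run scrolls the same way.
// Assumed screen size; in practice you could read it from `adb shell wm size`.
const SCREEN_WIDTH = 1080;
const SCREEN_HEIGHT = 2400;

function standardizedScrollDown(durationMs = 50) {
  const x = Math.round(SCREEN_WIDTH / 2);
  const fromY = Math.round(SCREEN_HEIGHT * 0.8);
  const toY = Math.round(SCREEN_HEIGHT * 0.2);
  // `adb shell input swipe x1 y1 x2 y2 duration` performs the gesture on-device
  execSync(`adb shell input swipe ${x} ${fromY} ${x} ${toY} ${durationMs}`);
}

standardizedScrollDown();
```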

Performance and Accessibility with AI

Short description:

Performance and accessibility are essential but often overlooked. Experimenting with automatically checking labels and other quality aspects using AI could be beneficial.

But what I was also sharing is that I think performance is one of those things that is really essential but that you usually don't check so much, especially since, as Methat was saying earlier, React Native developers usually have iPhones, really top-notch iPhones, while our users sometimes have low-end Android devices, so we don't really check that. But I think there is other stuff like that. Accessibility, for example, is really annoying to check, but it's really essential. And I think that could be one of the use cases, and I've been experimenting with that: automatically checking labels and asking the AI, is this correct or not? It feels like lots of stuff like that could be doable, any quality aspect that is essential but that we don't really want to do ourselves because we want to focus on features. So it sounds like it could be a good thing to add.

Cool. Well, that's all the time we have, but there are a few more questions in Slido, so if you want to ask them to Alexandre directly, he's going to go to the Q&A area, and you can continue the conversation with him there. Everyone, please give Alexandre a warm round of applause. Thank you.
