Hello, everyone. My name is Daniel Ostrovsky, and today we are going to explore AI-powered end-to-end testing. But before we dive in, go ahead and scan this QR code to follow me on Twitter, Medium, and YouTube. You can find me as nanduh81. Thank you.
And let's continue. So first of all, a little bit about myself. I'm a full-cycle developer expert with more than 25 years of experience. I'm a public speaker, I write publicly in multiple languages, and I'm an open-source contributor and technical mentor. And again, scan the QR code and follow me.
So let's jump right into the challenges we face when trying to test a UI with AI. Now, unlike unit tests, where we can send the source code directly into an LLM, which makes it easy for the model to predict what the test should look like, UI testing is different. For unit tests, tools like Copilot and others can generate a test in seconds because they have the full context: the source code and the logic that needs to be tested. And in most cases, it works perfectly because the code and context are clear. But when it comes to UI testing, there is a huge layer of abstraction between the source code and what we're actually testing, the rendered UI.
So here's the path. Our source code includes HTML, CSS, and TypeScript, which gets transpiled into JavaScript. Then it's bundled with tools like Webpack, Vite, and others, which handle the transpilation, bundling, minification, and everything else. Then it goes to rendering frameworks like React and Angular that take over in the browser. And finally, everything gets rendered by the browser. Only at that point do we get the actual visual elements we need to test. So where is the source code, and what do we actually need to test? There's a huge difference between the two.

Now, if we're talking about a standard e-commerce website or a blog or something similar, AI has a lot of existing knowledge to draw from, because it's trained on similar publicly available data. So when we send data to an LLM, 80% of the time it can generate tests effectively, because the AI is already familiar with the existing patterns. But what if you are working on something niche, like an internal application behind strict authorization, or a tool that only works on-premise? The LLM doesn't have access to these systems, and therefore it doesn't understand the specific flows, user interactions, and functionality.
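To make the gap concrete, here is a minimal sketch in TypeScript. The component and names are hypothetical, not from the talk: a "source" layer is just a template function, while a UI test can only assert against the rendered markup it produces, never against the types or function names in the source.

```typescript
// Hypothetical product card, standing in for a React/Angular component.
type Product = { id: number; name: string };

// "Source" layer: the logic an LLM would see if we pasted the code.
function productCard(p: Product): string {
  return `<div class="card" data-testid="product-${p.id}"><span>${p.name}</span></div>`;
}

// "Rendered" layer: what the browser, and therefore a UI test, actually sees.
const html = productCard({ id: 7, name: "Laptop" });

// A UI test targets the rendered output, e.g. a data-testid selector;
// the Product type and the function name are invisible at this layer.
console.log(html.includes('data-testid="product-7"')); // true
```

After transpilation, bundling, and minification, even the function name above may be gone from the shipped bundle, which is exactly why generating UI tests from source code alone is so much harder than generating unit tests.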