Video Summary and Transcription
AI testing with generative AI is revolutionizing JS testing by automating test creation and improving software test processes. Key technologies like natural language processing and neural networks, as well as quality data, play a crucial role in AI testing. The benefits of AI testing include speed, efficiency, adaptability, bug detection, and limitless potential. Generated JavaScript tests can be tailored to different tools like Selenium, and there are popular ready-made tools available for test automation. AI tools like Datadog, RecheckWeb, and Applitools Eyes offer powerful capabilities for anomaly detection, visual regression testing, and codeless testing. The horizon for AI in testing continues to expand with evolving capabilities, and understanding AI's role in the testing revolution and machine learning is crucial for practical application and continuous learning.
1. Revolutionizing JS Testing with AI
AI, particularly generative AI, has been making big strides in changing the landscape of programming and testing, and it has opened up opportunities for improvement within quality assurance. We will explore the growing trend of generative AI within testing, its potential to automate test creation, and how it revolutionizes the landscape. The role of machine learning in AI testing is to improve software test processes, prevent human error, and automate error detection based on history.
Hello everyone! My name is Rinaldi, and today I will be delivering a talk on revolutionizing JS testing with AI: unmasking the future of quality assurance. So as you probably already know by now, AI, particularly generative AI, has been making big strides in changing the landscape of different kinds of programming. JavaScript is just one of them. And it's not just the programming aspect itself, but also the testing element as well. That has opened up a lot of opportunities for improvement within quality assurance, and hence that's the topic we're going to delve into today.
So without further ado, let's get straight into it. I'd just like to briefly introduce myself. I'm a software engineer at Seek. I'm also a holder of all 13 AWS certifications, and a subject matter expert for the AWS Solutions Architect Professional and AWS Data Analytics Specialty certifications. I'm an international speaker at over 30 events and conferences, and I enjoy all things AWS, open source, testing, and virtual reality.
So diving into this topic directly, what is the main meat that we want to get into today? Really, it's all about understanding the growing trend of generative AI within testing, because we've seen it being adopted more and more within the realm of testing. Nowadays, you can not only automate the creation of new text or generate new stories with generative AI, but you can actually create code with it, and create tests for your code with it. It has so much potential. And as mentioned before, this leads to a lot of new areas such as codeless creation of test cases, which in turn opens up test creation to anyone. So it's not only those who are very well versed in test creation who can do this; regular devs or even non-technical people can start looking into this and help out with the development of tests too. In general, it's revolutionizing the landscape in a really big way.
What is the role of machine learning within AI testing? Firstly, we're using AI to improve software test processes. It's becoming an assistant for us to work with, creating a template for us to build on. Aside from that, it helps us ensure that what we are doing is right. One of the things that is very common in test case creation is the occurrence of human error. Introducing AI to the mix can help prevent that from happening and redirect our focus to making better, more error-proof tests. That is the power of generative AI. We also want to automate error detection based on history. That's another thing it can do for us: we can create an automated process where error handling and error checking are routine, so that the AI can check against the history, flag potential errors, and provide better suggestions based on that.
2. Key Technologies and Data in AI Testing
Aside from redefining quality assurance, AI testing involves key technologies like natural language processing, predictive analytics, and neural networks. The role of data is crucial as feeding quality data determines AI's performance. Fine-tuning solutions requires sufficient data.
Aside from that, it is also redefining how we perform quality assurance. As mentioned before, we can integrate it as part of our pipeline and build on that, so that the quality at each stage is assured by the checks the AI does.
So what are the key technologies involved in this? To name a few: natural language processing, predictive analytics, and neural networks. Natural language processing, for example, is very important in this scenario because it really determines how the text we put through is processed. That's why prompt engineering is a very big thing within AI: we want to make sure that we are actually giving it the right instructions instead of being vague. We're going to cover that a bit later as well.
Aside from that, data plays a very big role in this, because feeding the AI quality data really determines how well it's going to perform. We have seen different providers, such as ChatGPT or Amazon Bedrock's models, perform well or badly depending on the number of parameters they have and the data that has been used to train them. So data really affects performance, and it's important to understand that. If you, for example, decide to look into fine-tuning your solutions, that's definitely a big consideration, because you want to make sure that you are fine-tuning based on enough data and not just partial data.
3. Benefits of AI Testing and Using Generative AI
AI testing brings speed, efficiency, adaptability, bug detection, and limitless potential. It can generate tests faster and refine them based on previous tests. AI helps developers account for edge cases and uncover subtle bugs. When using generative AI, specify the language and functionality, and include what you want to be tested. Be cautious when inserting code into generative AI; use APIs rather than public interfaces for safety.
So what are the benefits of AI testing? Firstly, there's speed, because tests can be generated and validated rapidly. There is really limitless potential in the number of tests you can run this way, and you can generate them much faster than you could manually. Although they may not be correct or match your vision on the spot, the AI can then help refine them: you can ask it to refine the tests it created before, or you can develop them further manually yourself. That really helps tests develop much faster compared to coding them one by one.
Aside from that, it also brings efficiency. Being able to do more tests in less time reduces human effort. You're also able to provide more coverage; as we know, coverage is one of the most important aspects of JavaScript testing. As devs, one of the most important questions is: can we account for the edge cases, and how can we keep the code as clean of errors as possible? With AI, we're able to plan for that better and cover the potential test cases that we might not have thought of, as the sketch below shows. That's why it's good to have it as an assistant.
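To make this concrete, here is a minimal sketch of the kind of edge-case suite an AI assistant might generate; the divide function and the specific cases are hypothetical illustrations in Jest syntax, not code from the talk:

```js
// Hypothetical function under test.
function divide(a, b) {
  if (b === 0) throw new Error('Division by zero');
  return a / b;
}

// Edge cases an AI assistant might suggest beyond the happy path.
describe('divide', () => {
  test('divides two positive numbers', () => {
    expect(divide(10, 2)).toBe(5);
  });

  test('throws on division by zero', () => {
    expect(() => divide(1, 0)).toThrow('Division by zero');
  });

  test('handles negative operands', () => {
    expect(divide(-10, 2)).toBe(-5);
  });

  test('handles floating-point results', () => {
    expect(divide(1, 3)).toBeCloseTo(0.3333, 4);
  });
});
```

The value here is the last three cases: they are exactly the kind of non-happy-path checks a human might skip but an assistant can enumerate systematically.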
Fourth, we also have adaptability: we can respond quickly to changes in code and functionality. We're going to see some examples of that later on too. Finally, we also have bug detection, because these tools are really great at uncovering subtle or non-obvious bugs. One of the best cases of this is visual regression testing, where AI can compare subtle differences between images, for example, or within code. Really, the potential is limitless.
This is an example of how to speak specifically to a generative AI, and it really depends on what LLM you're using. It could be ChatGPT; it could be Amazon Bedrock. This particular one, for example, I put into Amazon Bedrock's Claude foundation model because I wanted to see how it performs, and it turned out that it was able to do this well. So I'm providing this as a template to better understand how to write this kind of prompt to generate new test cases. Essentially, we're giving it instructions: we're specifying what language we're using, we're specifying what functionality we want, and we're putting through test cases. Now, you can include existing test cases or not; it really depends. But the key is that instead of generically saying 'generate me a suite of test cases based on this code', you want to include what you want to be tested as well. For example, if you want to test whether a certain button on a webpage works, you'd say 'webpage for button X', and in the description you'd say 'I want to test button X to see that it actually works properly'. One caution I would like to make as well: be mindful of just inserting code randomly into generative AI, and make sure you're using the API instead of a public-facing interface. With something like ChatGPT's public interface, it's very risky and very ill advised to insert your code or any PII directly. Meanwhile, you might have an in-house solution, or you might use the model through an API, in which case your data will more likely be safe.
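As an illustration, a prompt following the structure described above might look like the sketch below; the functionality, code placeholder, and button name are hypothetical stand-ins, not the exact slide content:

```
Language: JavaScript (Jest)
Functionality: registration webpage, button X
Description: I want to test button X to see that it actually works
properly when the form is submitted.
Code: <paste the relevant component or function here>
Input: a filled-in registration form
Expected output: button X triggers form submission and a confirmation
message is shown
Special considerations: include edge cases for empty and invalid input;
do not paste real user data or PII into the prompt
```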
4. Generating JavaScript Tests and Popular Tools
When generating JavaScript tests, you can specify extra tokens like input, expected output, and special considerations. The tests can be tailored to different tools, such as Selenium. Popular tools like Amazon Bedrock and ChatGPT allow you to generate cases by providing code and instructions. We will also discuss other ready-made tools for test automation.
So that's just a quick caveat to consider. Aside from that, you're just adding extra tokens like input, expected output, and special considerations, stating whatever constraints or conditions you want. From this, you're able to generate JavaScript tests in an easy way, and they can also be tailored to different tools.
For example, if you want to test with Selenium, it can do that as well; you just have to specify it. This is just a template to show you how to give the LLM the proper considerations that it needs, and a sketch of what the result might look like follows.
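As an illustration of what a Selenium-tailored result could look like, here is a minimal sketch using selenium-webdriver for Node.js; the page URL, button ID, and confirmation selector are hypothetical:

```js
// Hypothetical AI-generated Selenium test (selenium-webdriver on Node.js).
// The URL, button ID, and confirmation selector are illustrative.
const { Builder, By, until } = require('selenium-webdriver');
const assert = require('assert');

(async () => {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com/register'); // hypothetical page
    // Locate button X and click it once it appears.
    const button = await driver.wait(
      until.elementLocated(By.id('button-x')),
      5000
    );
    await button.click();
    // Assert the click produced the expected confirmation element.
    const confirmation = await driver.wait(
      until.elementLocated(By.css('.confirmation')),
      5000
    );
    assert.ok(await confirmation.isDisplayed());
    console.log('Button X works as expected');
  } finally {
    await driver.quit();
  }
})();
```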
So, a couple of popular tools include Amazon Bedrock and ChatGPT, which I was mentioning before. These are the ones I've been using a lot; I've been experimenting with their APIs and with how they perform. This in general is one way of generating cases: you put in your code, you put in the instructions, and you generate the cases based on that, as the sketch below illustrates.
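As a concrete sketch of this API-based approach, here is roughly what invoking a Claude model on Amazon Bedrock from Node.js might look like with the AWS SDK for JavaScript v3; the model ID, region, and prompt contents are illustrative placeholders, not the exact setup from the talk:

```js
// Minimal sketch: generating test cases through Amazon Bedrock's API.
const {
  BedrockRuntimeClient,
  InvokeModelCommand,
} = require('@aws-sdk/client-bedrock-runtime');

(async () => {
  const client = new BedrockRuntimeClient({ region: 'us-east-1' });

  // Hypothetical code under test, embedded in the prompt.
  const codeUnderTest = 'function add(a, b) { return a + b; }';
  const prompt = [
    'Language: JavaScript (Jest)',
    'Functionality: arithmetic helper',
    `Code: ${codeUnderTest}`,
    'Generate a suite of test cases, including edge cases.',
  ].join('\n');

  const response = await client.send(
    new InvokeModelCommand({
      modelId: 'anthropic.claude-3-sonnet-20240229-v1:0', // example model ID
      contentType: 'application/json',
      body: JSON.stringify({
        anthropic_version: 'bedrock-2023-05-31',
        max_tokens: 1024,
        messages: [{ role: 'user', content: prompt }],
      }),
    })
  );

  // The response body is a byte array wrapping the model's JSON output.
  const result = JSON.parse(new TextDecoder().decode(response.body));
  console.log(result.content[0].text); // the generated test suite
})();
```

But of course, we'll also be talking about the other ready-made tools that already help you automate your tests in an easier fashion. And here's one of them.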
5. AI Tools for Testing and Monitoring
Datadog is a powerful tool with AI capabilities for anomaly detection and continuous monitoring. Their Bits AI feature enables querying anomalies with plain text. RecheckWeb checks for small errors in code and performs visual regression testing. Applitools Eyes is a popular tool for visual regression testing. AI-powered codeless testing allows recording and comparing user interactions for testing purposes.
So, Datadog is one of the tools that has been around for a while and has also developed AI capabilities. One of these is the ability to perform anomaly detection based on history. With that, you're able to directly obtain recommendations based on the graph you provide, and it will use previous anomalies to recommend indicators and patterns to watch out for based on that history. It's a really powerful tool to look into, and a really great one to adopt as part of a continuous monitoring solution; a minimal sketch of setting this up follows.
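For reference, Datadog's query language exposes an anomalies() function for exactly this kind of history-based detection. Below is a minimal sketch of creating such a monitor with the official Node.js client (@datadog/datadog-api-client); the metric, service tag, and thresholds are hypothetical:

```js
// Minimal sketch: creating a history-based anomaly monitor with Datadog's
// official Node.js client. The metric, tag, and thresholds are hypothetical.
const { client, v1 } = require('@datadog/datadog-api-client');

(async () => {
  // Reads DD_API_KEY and DD_APP_KEY from the environment.
  const configuration = client.createConfiguration();
  const monitorsApi = new v1.MonitorsApi(configuration);

  const monitor = await monitorsApi.createMonitor({
    body: {
      name: 'Anomalous error rate on checkout service',
      type: 'query alert',
      // anomalies(metric, algorithm, deviations) flags values that deviate
      // from the metric's learned historical pattern.
      query:
        "avg(last_4h):anomalies(avg:trace.express.request.errors{service:checkout}, 'agile', 2) >= 1",
      message: 'Error rate is deviating from its historical pattern. @oncall',
    },
  });
  console.log(`Created monitor ${monitor.id}`);
})();
```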
One thing I feel is worth mentioning as well is their Bits AI feature, a new component that enables you to query these kinds of anomalies or data with plain text. So you don't even need to manually navigate to a specific time period or a specific part of the graph; you can just ask, 'has an anomaly happened in this particular segment?'
Aside from that, RecheckWeb is another big tool that's currently being used a lot. One of the things it can do is check for small errors you might have made in your code which affect the rendering. Normally, broken code would be a problem to visualize, but in this scenario it points at the broken code while still rendering the page on the assumption that the code is intact. For example, in an element reference ending in 'cert', I accidentally typed 'cet' instead, but it's able to detect that particular fault and then visualize it appropriately.

There's also visual regression testing, and one of the biggest use cases right now is Applitools. They have Applitools Eyes, which performs visual regression to help ensure that what you're currently testing is actually comparable with what it should be. This really helps a lot. You can integrate the Selenium WebDriver with it, and you can perform both functional and visual testing through the tool. Here is an example test case that I wrote up while working with Applitools Eyes: I'm pretty much just calling eyes open, eyes check, and finally eyes close. A sketch along those lines follows, and it's a great way to integrate with Applitools Eyes and use it appropriately for this scenario.
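The slide code isn't reproduced in the transcript, but the open / check / close flow described above looks roughly like this minimal sketch with @applitools/eyes-selenium; the app name, test name, and URL are placeholders:

```js
// Minimal sketch: functional + visual checks with Applitools Eyes and
// Selenium WebDriver. App name, test name, and URL are placeholders.
const { Builder } = require('selenium-webdriver');
const { Eyes, Target } = require('@applitools/eyes-selenium');

(async () => {
  const driver = await new Builder().forBrowser('chrome').build();
  const eyes = new Eyes();
  eyes.setApiKey(process.env.APPLITOOLS_API_KEY);

  try {
    // Start a visual test session: app and test names are arbitrary labels.
    await eyes.open(driver, 'My App', 'Home page visual test');
    await driver.get('https://example.com'); // hypothetical page
    // Capture a checkpoint of the full window and compare it to the baseline.
    await eyes.check('Home page', Target.window());
    // Close the session; differences from the baseline fail the test.
    await eyes.close();
  } finally {
    await eyes.abortIfNotClosed(); // clean up if the test aborted mid-run
    await driver.quit();
  }
})();
```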
Aside from that, we also have AI-powered codeless testing. There's a really great tool that I encountered recently, and this is just an example of how I used it. With this, I'm testing my website: I'm basically just trying to click a button, navigating from the home page by clicking 'track my CPEs'. The best thing is you can just record this on screen, and it follows you around and records every step you take. So you can see it tracks: I click 'track my CPEs', click the activities, click add activity. I wanted to illustrate that add activity functionality. What it can do later is use this recording as a test and compare it against the behavior you currently have in your code, to ensure it's actually working properly. You can run this test on, for example, your staging environment, and it will spin up a Chrome driver, test whether everything works, and close it appropriately; if it passes, it'll say it passes. And this is an example of how it works: I can play this recording of how it sets up the page, and after that it runs by itself. This is not me; the tool is replaying the recorded steps on its own.
6. Automating Processes and Expanding Horizon for AI
This tool is Testim. In a brief case study, the goal was to automate processes including anomaly detection and test case generation. Implementing AI tools like Datadog and Testim resulted in enhanced quality and reduced test maintenance. The horizon for AI continues to expand with evolving capabilities, addressing future needs in automation, test case generation, and security. Best practices include prioritizing quality data, balancing AI and human insight, implementing version control, fostering AI understanding, establishing instrumental integration, and staying updated with AI evolution. Understanding AI's role in the testing revolution and machine learning is crucial, along with practical application and continuous learning.
And this is by Testim. They're basically another tool that is really great to look into, which I have been using as well for this scenario. So, a brief case study, more or less: I had a scenario where I was looking to automate processes, and I had the problem of automating anomaly detection and generating test cases better, because one of the things I was looking into was how I could do what I'm currently doing more efficiently, and automate these kinds of processes to ensure that we don't have errors popping up here and there.
Because one thing I'm sure a lot of testers here are quite familiar with is that if you leave your backlog of errors or security risks alone for a while, they will quickly pile up if you're not keeping on top of testing. So being able to automate is a very important thing. As part of the implementation, I integrated AI tools such as Datadog for anomaly detection, and also Testim, as part of the workflow. What I then got was enhanced quality and reduced time needed for test maintenance.
So there's definitely going to be a continued expanding horizon for AI. Tomorrow there will definitely be more potential growth areas, with evolving capabilities as well; it's going to keep developing as we see it right now. We're going to keep anticipating the needs of future JavaScript applications, because there are going to be a lot of them. And as AI develops, it's going to keep improving its potential to address future needs, including automation, test case generation, and security. There's just a lot that it's able to do. And of course: automate, automate, automate, because if you can, why not? It makes your life easier as a tester and as a developer.
So, some best practices and patterns: prioritize quality data, balance AI and human insight, implement version control, foster AI understanding, establish instrumental integration, keep abreast of AI evolution, manage test data, and maintain continuous monitoring and analysis. And some key takeaways: it's important to understand that AI is pretty much a testing revolution, and that machine learning plays a pivotal role too. Understand how to integrate tools appropriately, as I mentioned before with the four examples I provided, not to mention ChatGPT and Amazon Bedrock, which you can definitely use as tools as well. Practical application is very important, as is understanding the future outlook. And it's very important to look at this as empowerment through knowledge, because it's a continuous learning process. That is all for me. Thank you again so much, everyone, for listening to this session. I'm happy to take any questions after the session as well.