Now let's take a closer look at what traditional testing actually looks like. If we talk about regression testing, regression testing will generally use whatever information can be extracted from the build we're creating. So whenever a build is created, we'll have certain metadata with it, and we'll be able to extract information from that metadata to understand which particular tests to run for the code change that has taken place.
By analyzing the information extracted from the build metadata, one can determine which tests have to be run against the changed code. One drawback of this approach is that if you make a change to one of the low-level libraries, it becomes inefficient, because you essentially have to rerun all the tests together. So even for smaller teams, you might be required to rerun all your test cases again and again.
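As a rough illustration of that traditional approach, here is a minimal Python sketch; the dependency map, file names, and test names are all hypothetical. Build metadata tells us which tests depend on which files, and a change to a low-level library pulls in essentially the whole suite:

```python
# Minimal sketch of traditional metadata-based test selection.
# The dependency map, file names, and test names are hypothetical.

# Build metadata: which tests depend on which source files.
DEPENDENCY_MAP = {
    "src/app.js": {"test_app"},
    "src/utils/date.js": {"test_app", "test_reports"},
    # A low-level library that almost everything depends on: touching it
    # forces the entire suite to rerun.
    "src/core/lib.js": {"test_app", "test_reports", "test_auth", "test_billing"},
}

def select_tests(changed_files):
    """Union of all tests that depend on any of the changed files."""
    selected = set()
    for path in changed_files:
        selected |= DEPENDENCY_MAP.get(path, set())
    return selected

print(select_tests(["src/utils/date.js"]))  # a small, targeted set
print(select_tests(["src/core/lib.js"]))    # effectively the whole suite
```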
So that is where, and of course this diagram shows the same thing, you have your source files and your test sets, and small changes are happening. Based on those changes in a particular library, network call, or function call you're making, you'd be required to rerun those test cases again and again. And that's where we can use machine learning to do predictive testing: essentially, being able to estimate the probability that a particular test will find a regression within a given code change.
So how can we make informed decisions to rule out the tests that are not going to be helpful? Essentially, we want to select the tests that really matter for a particular code change, because of course not all test cases will be able to uncover issues within a given change. Being able to intelligently select those tests makes your testing process better and faster, because we're no longer running every test case against every code change.
So how can we achieve this? We can use a large dataset containing tests run against many historical code changes, and then, by applying machine learning algorithms, determine which tests are better suited for particular types of code changes. That way, in the future, once we've received a new code change and before the testing process starts, we'll be able to select only those test cases that are most relevant for that type of change.
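To make that concrete, here is a minimal sketch of assembling such a historical dataset; the feature choices, the dict layout, and the tiny sample history are illustrative assumptions rather than a fixed schema:

```python
# Sketch of assembling the historical training data.
# Feature choices, dict layout, and the tiny sample history are all
# illustrative assumptions, not a fixed schema.

def extract_features(change, test):
    """Turn a (code change, test) pair into a numeric feature vector."""
    return [
        len(change["files"]),                # size of the change
        change["extensions"].count(".js"),   # rough language mix
        test["historical_failure_rate"],     # how often this test fails
        test["distance_to_change"],          # e.g. dependency-graph distance
    ]

# Tiny made-up CI history: (change, test, did_the_test_fail).
historical_runs = [
    ({"files": ["src/core/lib.js"], "extensions": [".js"]},
     {"historical_failure_rate": 0.20, "distance_to_change": 1}, True),
    ({"files": ["docs/readme.md"], "extensions": [".md"]},
     {"historical_failure_rate": 0.01, "distance_to_change": 5}, False),
]

# Label each row with whether the test actually caught a regression.
X = [extract_features(c, t) for c, t, _ in historical_runs]
y = [1 if failed else 0 for _, _, failed in historical_runs]
```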
And this diagram showcases how we aim to get there. During training, we have all of our historic code changes, which you can consider the database, and we run our tests against them. Then we record, for every test, whether it was a failure or a pass, and we use that to train our test selection model. Once we get started with prediction, whenever we have a new code change, we use the test selection model to see which tests are likely to be useful for that specific change, and that enables us to save time.
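Continuing the sketch above, the training step itself can be as simple as fitting a binary classifier on those (features, pass/fail) pairs; scikit-learn's GradientBoostingClassifier is just one plausible choice here, not the only one:

```python
# Continuing the sketch: fit a binary classifier on the historical
# (features, pass/fail) pairs built above. Any classifier that outputs
# probabilities would do; gradient boosting is just one plausible pick.
from sklearn.ensemble import GradientBoostingClassifier

model = GradientBoostingClassifier()
model.fit(X, y)  # X, y come from the dataset sketch above
```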
To talk more specifically about the training process: we take a model that learns from the derived features. When we do the feature engineering, we extract those features from the previous code changes and the tests that have run historically. Then, whenever we apply this system to a new change, the learned model is applied to that code change, and the model predicts the likelihood of each test detecting a regression. Based on this estimate, we select the tests that are most likely to cover that particular change. That's how you can save the time it would otherwise take to run the full test suite.
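Continuing the same sketch on the prediction side, a hypothetical selection step could score each candidate test against the new change and keep only those above some probability threshold; the threshold, test names, and test metadata below are assumed for illustration:

```python
# Continuing the sketch: score every candidate test against the new
# change and keep the ones above a probability threshold. The threshold,
# test names, and metadata here are assumed for illustration.
new_change = {"files": ["src/utils/date.js"], "extensions": [".js"]}

candidate_tests = [
    {"name": "test_app", "historical_failure_rate": 0.15, "distance_to_change": 1},
    {"name": "test_billing", "historical_failure_rate": 0.02, "distance_to_change": 4},
]

THRESHOLD = 0.5  # tune against your time budget vs. regression-miss risk

selected = []
for test in candidate_tests:
    features = extract_features(new_change, test)
    p_regression = model.predict_proba([features])[0][1]  # P(test fails here)
    if p_regression >= THRESHOLD:
        selected.append(test["name"])

print("Tests worth running:", selected)
```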
This can be used directly with any JavaScript-based testing framework as well, including Jest, Jasmine, Mocha, and so on. All of these frameworks can take advantage of it during the DevOps cycle, because this isn't limited to front-end or back-end code; it comes under the umbrella of TestOps. So when you're creating a build of your application and testing it before it's deployed to production, that's where you could use predictive testing as well. And even for JavaScript, we can use this kind of machine learning-based system to select the most appropriate tests.
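As one hypothetical way of wiring this into a JavaScript framework, the selector could hand the chosen test files to Jest's command line; the file paths here are made up, and `--runTestsByPath` makes Jest treat the arguments as exact paths rather than regex patterns:

```python
# Hypothetical glue: hand the model's selection to Jest on the command line.
# The test file paths are made up; --runTestsByPath tells Jest to treat the
# arguments as exact file paths rather than regex patterns.
import subprocess

selected_files = ["tests/app.test.js", "tests/date.test.js"]  # from the model
subprocess.run(["npx", "jest", "--runTestsByPath", *selected_files], check=True)
```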
But even within JavaScript, we can have much better integration with the different testing frameworks I just shared. And of course, we also have a lot of different actions, for example by PolyCode, that help with the automated generation of unit test cases, and that also give suggestions for unit test cases based on the code they are able to evaluate.