Video Summary and Transcription
Today's talk discusses how low code enables continuous testing and DevOps, emphasizing the importance of test automation and the drawbacks of siloed approaches. The next era of quality engineering aims to overcome automation challenges by incorporating machine learning and intelligent automation. The development process involves local testing, pull requests, and comprehensive testing to ensure quality before merging. Low-code tools like Mable help democratize testing and achieve higher test coverage. Mable's coverage report includes performance metrics and test results, making testing easy and accessible for any team member.
1. Introduction to Low Code and DevOps
Today we'll be talking about how low code enables continuous testing and DevOps. Test automation is central to succeeding in DevOps, and siloed approaches don't work. Low-code automation gets rid of silos and produces better results in quality engineering efforts.
Hi, everyone. Welcome. Today we'll be talking about how low code enables continuous testing and DevOps. It is so great to be here with you all today. My name is Juliette McVail, and I'm a Product Manager at Mable, which is an intelligent low-code test automation solution. I've been with Mable for about two and a half years now, and I'm currently the Product Manager for our browser and API testing team, which focuses on test creation and execution across both browser and API tests.
So in this talk, I'm going to be focusing on three key points. The first is that test automation is central to any effort to succeed in DevOps. The second is an assertion that siloed approaches to test automation don't work. And finally, that low-code automation enables us to get rid of these silos and produce better results in our quality engineering efforts. So let's dive in.
2. Quality Engineering and Test Automation
Quality engineering is an enabler for key trends in software development. Test automation is crucial for deploying changes with confidence. However, very few teams have achieved the necessary level of automation. Without it, there is a risk of bottleneck and limited capacity to verify changes. The next era aims to overcome these challenges by building intelligence into the automation process.
So first and foremost, this is really such an exciting time to be someone who's focused on quality engineering, because there are so many critical trends in the industry, and we're realizing that quality engineering plays a critical role in enabling innovation. Whether you're looking to broaden your adoption of agile or move to DevOps, or perhaps your team wants to migrate to the cloud or shift left, quality engineering is ultimately an enabler for all of these key trends.
And effectively, what we're trying to do in software today is accelerate the pace of innovation with quality. We really want high velocity and throughput in these pipelines, and we want to be able to create and deploy changes constantly. Whether that change comes through code, configuration, or upgrades, or even from your integrated partners, because you likely consume a lot of services via API from third parties, we want to be able to embrace it with velocity and throughput. That also needs to happen under the watchful eye of a system that can ensure quality, and that's really where test automation comes into play. We know that we're successful in implementing test automation when we have a high level of automation, but also high confidence in our ability to deploy changes with good quality.
This is really an interesting point, and it's also the problem that we saw in last year's DevOps report. We know that when we have a low level of test automation, we also have relatively little confidence in being able to deploy changes. And as that level of testing and deployment automation increases, we can see that confidence increases as well. This is really key as you move towards a continuous testing model. Because despite the fact that we know we want to get to this high level of automation, so we're confident in dealing with all of that change, very few teams have realized the level of automation necessary to deploy with confidence, and we have work to do to get there. The risk is that if we don't, we won't realize this vision of high-velocity, high-quality pipelines, because we'll have to slow down the throughput in order to manage quality. That means we have limited capacity to verify those changes from a QA perspective, and we end up with a bottleneck: despite all the innovation on the development side, we're not actually able to have that throughput. So this next era really tries to overcome that. And it all starts with the assertion that if you want to automate a process, you have to build intelligence into it. We know this intuitively in places outside of test automation. For example, if I wanted to build a self-driving car, I wouldn't just build the engine, give it a set of instructions, and then go have a drive, which is effectively what we've done historically with test automation. You would recognize intuitively that the car has to have a lot of sensors and a lot of data, that we need to use a GPS, and that you have to be able to read what's happening in real time when you're out on the road.
3. Intelligent Automation and Organizational Rollout
Building machine learning models and plugging them into the control plane unlocks the potential of automation. Low code is a key tenet of quality engineering, allowing more people to participate. Separating intent from implementation allows for intelligent automation. Auto healing updates tests automatically based on learned information. Mable enables importing existing Selenium tests and incorporating intelligence into their execution. Organizational rollout, fitting the solution into the right places and roles, is crucial for success.
And then we have to put machine learning and other models in place in order to make intelligent decisions. And once you can do all of that, you can plug that brain into the control plane that actually automates the driving, and then we have some real potential there.
And effectively, with test automation, that's where we're going now. We're saying, look, it's not just about the drivers that can move a browser, move a mobile app, or interact with an API. Once you understand the intent of the automation, you have to collect the data, analyze the data, and make good decisions in order for that automation to be effective. Low code here is really a key tenet of quality engineering, and if we don't focus on low code, then we're likely to limit the number of people and roles that can actually participate in quality on our teams.
So when I talk about intent, we're really referring to testing the things that you would look for if you were manually testing, and we can separate that out from the automation. When the intent is manifested in the test itself, automation can then drive the execution of it, and we let Mable handle that part. So when we allow teams to focus on the intent and the functionality that they want to test, we present them with a low-code interface, and then we let the system handle the implementation. What that means is that not only developers but manual testers, product owners, support people, and others can all participate in quality, and we don't end up in silos.
Another aspect of this is that once you separate the intent from the implementation, we can build a system that can be very intelligent. As an example, let's say that the intent for a test is to submit a form, and there is a submit button on that form. Perhaps my team is making some changes, and they end up changing the ID of that submit button. With more traditional test automation solutions, the test is going to fail because it relied on the ID in order to locate the button. But in this new era, since we're collecting so much information as we're running the tests, the system knows that even though the ID changed, the button is still there, and we can actually locate it using numerous different techniques and attributes. The system will attempt to locate that button and proceed with the test. And when we can correctly identify that an element has changed, we'll actually update the test automatically based on the information we learned. That's what we call auto healing. We're able to accomplish this by separating intent from implementation, letting the system handle that implementation, and enabling people to express intent with as little code as possible.
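To make the auto-healing idea concrete, here is a minimal sketch in TypeScript of multi-attribute element matching. It is purely illustrative: the interfaces, scoring weights, and threshold are assumptions, not Mable's actual algorithm. Each candidate element is scored against a fingerprint captured during earlier runs, so a changed ID alone doesn't break the test.

```typescript
// Illustrative multi-attribute matching: score candidates against a
// fingerprint captured on earlier runs instead of trusting the ID alone.
interface ElementFingerprint {
  id?: string;
  tag: string;
  text?: string;
  classes: string[];
  nearbyLabel?: string; // e.g. the label text next to the element
}

function matchScore(captured: ElementFingerprint, candidate: ElementFingerprint): number {
  let score = 0;
  if (captured.id && captured.id === candidate.id) score += 3;
  if (captured.tag === candidate.tag) score += 1;
  if (captured.text && captured.text === candidate.text) score += 2;
  score += captured.classes.filter(c => candidate.classes.includes(c)).length;
  if (captured.nearbyLabel && captured.nearbyLabel === candidate.nearbyLabel) score += 2;
  return score;
}

// Pick the best-scoring candidate; if it wins despite a changed ID, the
// stored fingerprint can be updated -- the essence of "auto healing".
function locate(
  captured: ElementFingerprint,
  candidates: ElementFingerprint[],
): ElementFingerprint | undefined {
  const ranked = [...candidates].sort(
    (a, b) => matchScore(captured, b) - matchScore(captured, a),
  );
  const best = ranked[0];
  // Arbitrary confidence threshold for this sketch.
  return best && matchScore(captured, best) >= 3 ? best : undefined;
}
```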
The other important thing to note here is that so many teams have spent engineering years building out sophisticated script-based test automation frameworks, and we don't want to lose that work. With Mable, you can import any existing Selenium tests that your team may have and export tests to Selenium IDE. This allows you to avoid vendor lock-in while also leveraging the hard work your team has done, and to incorporate intelligence and machine learning into the execution of these tests. So that's really the technology side of low code plus intelligence, and it gives us the capability to solve a lot of the problems that we've seen with test automation in the past. However, the technology half by itself is insufficient, because we're only going to see the benefits when we can roll out these solutions organizationally and fit them into the right places with the right roles. If we don't, we end up seeing a lot of failures with these initiatives, despite the fact that the technology is there. So, as you've seen in DevOps, the vision here is really to build intelligent automation in from the very beginning to the very end of the pipeline.
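For reference, Selenium IDE stores projects as JSON (".side" files), so an import/export path like the one described works against roughly the shape sketched below. This is simplified from the real schema, which also carries IDs, version fields, and suites; treat it as an orientation aid rather than the exact format.

```typescript
// A simplified view of a Selenium IDE ".side" project: a named set of
// tests, each a list of command/target/value steps. Real files include
// additional fields (id, version, suites); see the Selenium IDE docs.
const sideProject = {
  name: "checkout-suite",
  url: "http://localhost:3000",
  tests: [
    {
      name: "submit order form",
      commands: [
        { command: "open", target: "/checkout", value: "" },
        { command: "type", target: "id=quantity", value: "2" },
        { command: "click", target: "id=submit", value: "" },
      ],
    },
  ],
};

console.log(JSON.stringify(sideProject, null, 2));
```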
4. Development Process and Pull Requests
Today we're going to talk through each of the different stages of the development process. Let's start with the code base on a developer's machine. We want to have working changes and core end-to-end test coverage. Developers should execute initial tests locally using tools like the Mable command-line interface. Quality engineers can also test the changes locally. Creating a branch for related tests in Mable ensures readiness. It's important to assess the remaining testing work and consider adding additional tests. The pivotal point is the pull request, where changes are proposed for merging into the main branch. Sufficient testing is crucial before deploying or merging.
And that starts when you're working on a local branch for a feature or a change, and it goes all the way through until that change actually reaches production. So today we're going to talk through each of these different stages, and I'll provide a couple of examples of what I mean when I say it's really about figuring out who will do what at each stage.
So let's start with the code base that's local to a developer's machine. The goal of this stage is to have working changes. Perhaps I've created a new feature that I want to validate before actually merging it to main. In this stage, we also want to make sure we have core end-to-end test coverage in place; for this feature, that may not mean testing all of the scenarios, but we're exercising the happy path. And from a quality perspective, in this stage we also want to have a plan: we want to know what testing we need to do in the future, what coverage we already have, what the risk is, and so forth, so we can actually enable our team to make those changes.
Just to give a couple of examples: in this stage, we believe that the developers who are creating those changes should be executing some set of initial end-to-end tests locally. If you're using Mable, you can do this with the Mable command-line interface, and if your team is already using Git, you can automate it to happen on every commit. That means that if you have a set of related tests, you can run them in an unobtrusive way, with a couple of tabs in the background letting you know whether your changes are breaking any existing tests and whether those tests are finding defects. While you're still working locally, the goal is to know whether you're breaking any of the core tests in your application, and to have those tests help you find out whether you're introducing regressions. This process can happen seamlessly by running those tests from the command line automatically. What's really important to note is that this isn't just limited to developers on your team. If you have a quality engineer who's paired with a developer, the developer can push their branch up to GitHub, and the quality engineer can pull that branch down, run it locally, and begin to get more comfortable with those changes. The other thing we can do during this phase is that while you have your code branch, you can also create a branch for the related set of tests in Mable, covering the changes tied to the code branch. You can see in this specific example that I have my branch and I have some tests that I've created or modified, so we're all ready to go when we reach the next step. Before we do that, we'll also want to know how much work is remaining from a testing perspective around this feature. For example, if I was working on a Workspace feature, I'd want to know what other tests are already related to this feature. Am I likely to need to change any of those tests in order to have adequate test coverage? And in the next stage, will I want to add any additional tests here? With a tool like Mable, we already have a coverage feature where you can search by page and see what tests you have related to that page. Are those good tests? Do they have enough assertions? Are they effectively validating the functionality of that page? You can use that feature in the coding stage to get a sense of what changes you might want to make as we move forward.
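As a rough illustration of running tests on every commit, here is a sketch of a Node/TypeScript pre-commit script, wired up via a Git hook manager such as Husky. The CLI name, subcommand, and flags below are placeholders for illustration, not Mable's documented syntax; substitute whatever your test runner actually accepts.

```typescript
// Hypothetical pre-commit script: run a labeled subset of tests against
// the locally running app, and abort the commit if any of them fail.
// "mabl-cli run --labels smoke --url ..." is illustrative, not a real CLI.
import { execSync } from "node:child_process";

try {
  execSync("mabl-cli run --labels smoke --url http://localhost:3000", {
    stdio: "inherit", // stream test output straight to the terminal
  });
} catch {
  console.error("Smoke tests failed -- commit aborted.");
  process.exit(1); // a nonzero exit code makes the Git hook block the commit
}
```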
So let's move on to what I believe is the most pivotal point in the process for a DevOps-oriented team: the moment of a pull request, meaning I have a set of changes, from both a testing perspective and a code perspective, that I'm proposing we merge into our main branch. We have some goals here as well. The first is that I don't want to deploy or merge something right away without sufficient testing.
5. Merging and Deployment Stage
The first goal is to avoid merging anything that would stop the pipeline. Effective end-to-end test coverage and long-term team success are also important. Using a low-code framework like Mable allows anyone to participate in test logic review. Specialized knowledge is crucial for reusability, set-up, tear-down, and test coverage best practices. Collaboration includes executing regression tests in the pipeline and reviewing all testing before merging into the main branch. The deployment stage ensures that defects are not allowed into production.
Because once we reach that main branch, by default on most teams, the change is on its way out through the rest of an automated pipeline. So the first goal here really is: let's not merge something that we know is going to stop our pipeline. Coming out of this stage, we also want to ensure that there's effective end-to-end test coverage related to my change. And finally, and perhaps most subtly, we want to know that we're setting the team up for long-term success; I'll talk about that more in a moment. But before any of that, let's make sure we're not breaking anything before we merge into main.
So in this particular example, let's say we have a set of Mable smoke tests that run automatically and continuously as part of our build process. As soon as you put up your PR, we're running a set of headless smoke tests, and if your build fails, you won't be able to merge that PR or get it approved. So you'll know whether you're failing those core headless tests, and all of that is automated.
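For a sense of what a headless smoke test gating a PR can look like, here is a minimal sketch using Playwright as a stand-in for whatever framework your smoke suite uses; the `PREVIEW_URL` variable and the selectors are assumptions. In CI, a nonzero exit from the test runner fails the build, and branch protection then blocks the merge.

```typescript
// A minimal headless smoke test: can a user reach the login form?
// Run with `npx playwright test`; headless is Playwright's default mode.
import { test, expect } from "@playwright/test";

test("user can reach the login page", async ({ page }) => {
  // PREVIEW_URL would be injected by the CI job for ephemeral environments.
  await page.goto(process.env.PREVIEW_URL ?? "http://localhost:3000");
  await page.getByText("Log in").click();
  await expect(page.locator("form#login")).toBeVisible();
});
```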
As another example, when I talked about effective test coverage: using a low-code framework like Mable, anyone can participate in reviewing and providing feedback on the logic of tests. Is the test actually structured correctly? Are we fully validating the feature with the right assertions? And you can see here that it's intuitive; anyone can review this test and understand it without necessarily knowing the nuances of the framework or having a development background. So we really can avoid silos here as well. But there's also an area in this type of automation where having specialized knowledge is definitely important, especially around approaches to reusability, set-up, tear-down, environments, and overall test coverage best practices. That's important for making sure we're setting ourselves up for long-term success, and it's a moment where some automation expertise on the team really helps. Perhaps you have a central automation lead participating here as well, who can work with your various product teams to ensure that you're not incurring tech debt.
Another thing we can do at this stage, from a collaboration perspective, is execute the regression tests that have been created before this release, and we can do all of that right in our pipeline. So if your team uses preview or ephemeral environments, where you put your changes up for a PR, we can run those full regression suites and cross-browser tests at that stage, and we get all of that information within the context of the PR itself. So before I approve or merge those changes into the main branch, I can review all of the testing that's happened on the code, including whether or not we validated our core scenarios, and I can click right into the details from the PR as well.
So next, we're going to hit our deployment stage. And let's say that we know everything looks good in terms of the core functionality in the code. We've gone ahead and merged those changes into our main branch. Now this is all really in our automated pipeline. And so the goal here is let's make sure that we're not going to allow defects out into production.
6. Comprehensive Testing and Quality Understanding
We want to ensure comprehensive testing and a broader understanding of quality. We can quickly identify and triage issues with failed tests. Creating issues in JIRA from test results provides all necessary information for developers. We can detect quality issues even when tests pass, such as JavaScript errors and broken links. Monitoring page load time helps identify performance issues. Data-driven testing allows for easy expansion of coverage without writing code. Libraries like Faker and Math.js enable data randomization for realistic test scenarios. Testing across devices, including mobile, is crucial for responsive applications.
We want to make sure that we have comprehensive testing related to our changes before any of this gets deployed. And we also want to have a broader understanding of quality beyond pass or fail. And that includes a comprehensive understanding of the change and what its impact is on quality overall.
There are a couple of examples here as well. The first is that in this stage, we want to be able to identify issues and triage them as quickly as possible. So let's say we have a failed test in Mable. We actually have a button where you can create an issue in JIRA directly from your test results. And when you create that issue in JIRA for a test failure or a bug report, it automatically populates all the information that a developer needs to understand the issue: the screenshot, the HAR file, and the DOM snapshot, so there's a clear picture of the state of the product at the moment of the test failure. This also allows you to avoid unnecessary back and forth between teammates during the triage and investigation process.
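As an illustration of the triage artifacts just described, here is a sketch of the kind of failure report such an integration might assemble into a JIRA description. The interface and field names are assumptions for illustration, not Mable's or JIRA's actual schema.

```typescript
// Hypothetical shape of a failure report attached to an auto-created issue.
interface TestFailureReport {
  testName: string;
  failedStep: string;
  environment: string;
  screenshotUrl: string;  // screenshot at the moment of failure
  harFileUrl: string;     // captured network traffic (HAR)
  domSnapshotUrl: string; // DOM state when the step failed
}

// Render the report as a plain-text issue description.
function toIssueDescription(r: TestFailureReport): string {
  return [
    `Test: ${r.testName}`,
    `Failed step: ${r.failedStep}`,
    `Environment: ${r.environment}`,
    `Screenshot: ${r.screenshotUrl}`,
    `HAR file: ${r.harFileUrl}`,
    `DOM snapshot: ${r.domSnapshotUrl}`,
  ].join("\n");
}
```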
Next, let's move beyond passing and failing tests to developing a shared view of quality across the team. Mable will automatically detect other quality issues even when your tests pass: do we see any new JavaScript errors in the console after our latest deploy? Do we have any new broken links within the pages those tests covered? And for each test, we're also able to see, across all of your steps, what the total page load time was and how it's trending. In this particular test, we can see that it took about 20% more time than it usually takes, which allows us to identify trends and performance issues early on.
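To show how these passive signals can be collected, here is a small sketch using Playwright; Mable gathers them automatically, so this is just an assumed stand-alone equivalent. It records uncaught JavaScript errors and console error output while timing the page load.

```typescript
// Collect console errors and page load time during a scripted visit.
import { chromium } from "playwright";

async function main(): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();

  const jsErrors: string[] = [];
  page.on("pageerror", err => jsErrors.push(err.message)); // uncaught errors
  page.on("console", msg => {
    if (msg.type() === "error") jsErrors.push(msg.text()); // console.error output
  });

  const start = Date.now();
  await page.goto("http://localhost:3000", { waitUntil: "load" });
  const loadTimeMs = Date.now() - start;

  console.log(`Page load: ${loadTimeMs} ms, new JS errors: ${jsErrors.length}`);
  await browser.close();
}

main();
```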
Now here's an example of what we can do in this stage to focus on expanding coverage. You're all likely familiar with the concept of data-driven testing. Once a test exists, with just a few clicks I can add in a variety of different scenarios that I want to test using data tables in Mable, and that doesn't require writing any code. It really allows me to multiply the coverage that I have: adding new scenarios is as simple as adding a new row to the table and typing in additional values. Again, this is about making testing more accessible; even as a product person, I can easily contribute to this. Another exciting aspect is that, without writing any code, you can also take advantage of libraries like Faker or Math.js. This allows you to randomize your data to increase test coverage by creating realistic data tailored to your specific use cases or scenarios, which is especially helpful if you're testing various form inputs or looking to generate test data. We can also expand coverage across devices. Perhaps I'm testing a responsive application, so I'm not just testing across the major browsers; I'm also testing across different devices, and reviewing those changes in an intelligent testing service to confirm my application is appropriately responsive. And testing across mobile is becoming more critical than ever.
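As a rough code-level analogue of the data-table idea, here is a sketch that generates randomized rows with @faker-js/faker, the maintained successor to the Faker library mentioned above; the scenario shape is an assumption for illustration.

```typescript
// Data-driven testing in miniature: each generated row would drive one run
// of the same form-filling test, multiplying coverage without new test code.
import { faker } from "@faker-js/faker";

interface SignupScenario {
  fullName: string;
  email: string;
  company: string;
}

// Generate N realistic scenarios -- the code equivalent of adding rows to a data table.
function buildScenarios(count: number): SignupScenario[] {
  return Array.from({ length: count }, () => ({
    fullName: faker.person.fullName(),
    email: faker.internet.email(),
    company: faker.company.name(),
  }));
}

for (const scenario of buildScenarios(5)) {
  // In a real suite, feed each scenario into the signup form test here.
  console.log(`Testing signup for ${scenario.fullName} <${scenario.email}>`);
}
```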
7. Mobile Testing and Low-Code Tools
In 2020, over 60% of U.S. website visits originate on mobile devices. Mable enables validation of user experience across responsive applications. Low-code tools like Mable help the entire team participate in quality and achieve higher test coverage. By incorporating machine learning and intelligence, tests can develop alongside the application. Democratizing testing allows everyone to build and maintain tests across the development lifecycle.
In 2020, over 60% of U.S. website visits actually originated on mobile devices, and historically, mobile testing has not been an easy task. Mable allows your team to validate the user experience across responsive applications and deliver a seamless experience for your users regardless of the device they may be using. With the benefits of low code, this is another good example of where you don't need highly specialized automation experience. So I hope those are some good examples of where low-code intelligent tools like Mable can help the entire team participate in quality, which will help you get the test automation coverage and confidence you need to innovate quickly. Many teams are currently on this journey, and we're really excited to see so many of them realizing an order-of-magnitude benefit in achieving higher test coverage, reducing the maintenance burden associated with testing, and, in general, reducing the effort they spend on regression testing. This is what we're working towards: enabling the move to DevOps by integrating testing deeply within your workflows, making sure that it's fast and flexible for the entire team, and that it works with modern stacks, whether you're running in CI/CD, using single-page application frameworks, or otherwise, so that everyone on the team can actually participate in quality. We also want to ensure that the tests we build are robust and reliable. By incorporating machine learning and intelligence into automation, tests can continue to evolve alongside your application. And once we have all of these key pieces, we're democratizing testing, allowing everyone to build and maintain tests across the development lifecycle.
Q&A on Poll Results and Mable's Coverage
Thank you all for your time today. Let's discuss the poll results. Most people have some DevOps with some automation. It's a journey to achieve more automation and efficiency in pipelines. We have a question from Jumuru about Mable's coverage report for integration tests. Mable's coverage includes page and release coverage, providing metrics on performance and test results. The next question is from Elias.
So thank you all so much for your time today. I'm really looking forward to hearing all of your questions. Hi, Juliette. Thanks for the lecture. It was awesome. It's such a pleasure to be here. Let's discuss a little bit about the poll results. We have the poll results on screen, and it seems that most people, or the majority, have some DevOps with some automation, with smaller percentages for the other answers. So what's your takeaway on that?
Yeah, you know, it's really interesting. You see a lot of folks here falling into that middle group, and I feel like journey is truly the right term for this experience, because it does take a lot of time and energy to reconcile these various tool sets and implement tooling and automation. So I think this is really indicative of a lot of folks now moving through that journey, through that process of achieving more automation and more efficiency in their pipelines.
Yeah, exactly. As long as we are progressing in that direction, I think that's a good sign, right?
Yeah, absolutely. I think it's one of those things where you're never fully done; you're always working towards it. So I do hope my talk provided some insight into how you can start moving in that direction as well, especially if you're in that aspiring stage. Exactly, yeah. So we have some questions from the audience. The first one is from Jumuru; I apologize for not knowing how to pronounce your name. Can Mable give me any coverage report of integration tests? And if so, how is that calculated?
Yeah, that's a really interesting question. Our current implementation of coverage in Mable is specific to all of your tests within Mable, and we do offer both browser and API testing if you're looking to integrate testing in that way. The way our coverage currently works is that we offer both page coverage and release coverage. Page coverage asks: how are you testing pages across your application? Are you validating aspects of each page using assertions and other types of validation? Release coverage, our most recent addition, uses your existing Mable tests to determine how many tests ran for the current release, whether that's defined by a timeframe or something along those lines, and how many of those passed and failed. We also provide additional metrics around performance and information along those lines to give you a better understanding of how your releases are progressing over time.
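As a loose illustration of what a page-coverage style metric could compute, here is a sketch that tallies, per page, how many tests touch it and how many assertions they make. The record shape and the aggregation are assumptions for illustration, not Mable's published calculation.

```typescript
// Tally tests and assertions per page from simple per-test records.
interface TestRecord {
  name: string;
  pagesVisited: string[]; // pages the test exercised
  assertionCount: number; // validations the test performed
}

function pageCoverage(
  tests: TestRecord[],
): Map<string, { tests: number; assertions: number }> {
  const byPage = new Map<string, { tests: number; assertions: number }>();
  for (const t of tests) {
    for (const page of t.pagesVisited) {
      const entry = byPage.get(page) ?? { tests: 0, assertions: 0 };
      entry.tests += 1;
      entry.assertions += t.assertionCount;
      byPage.set(page, entry);
    }
  }
  return byPage;
}

// Pages missing from the resulting map are candidates for new tests.
```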
Great. And the next question is from Elias.
Implementing the Process in Small Companies
The process can work for small companies, especially if they work closely with their development team. Mable aims to make testing easy and accessible for any team member.
Do you think this process will work in a small company? The process that you presented to us today? Yeah, I think that's a great question. We certainly do have smaller companies here at Mable; we have quite a few startups using our product, even with QA teams of two people. I certainly think it is possible, especially if you're working closely with your development team, to incorporate this throughout your entire pipeline. A big part of our goal at Mable is to make testing as easy as possible and available to anyone across your team, so I certainly think it's possible to bring it to your company regardless of the size. I agree with that.
Handling Small UI Changes and Test Subsets
At Mable, we have the concept of labeling tests, allowing you to run a subset of tests based on specific features or environments. This is a great solution for handling small UI changes and saves time by only running relevant tests. Starting with a smoke test suite covering the main scenarios ensures efficient testing.
The next question that we have here is from Mikus. Do you run all end-to-end tests even if there is just a small UI change? How do you handle such situations? Is it possible to run a subset of the test suite so it doesn't take too long to run? Yeah, absolutely. At Mable we actually have this concept of labeling. You can label your tests, whether it's for a specific feature, a specific environment, or anything along those lines. If I'm making a small change to a form, I can easily say, OK, I only care about the tests related to this page, and then run that subset. You're able to tailor it based on your needs. I think that's a great solution and very useful for many use cases. Usually, you want to run, for instance, a smoke test suite first, which covers all the main scenarios, and then after that passes, you run the rest. Otherwise, if it fails, it doesn't make sense to run a bigger suite. Right? Right.
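Here is a minimal sketch of the label-selection idea in TypeScript, purely as an illustration of the model; the types and the intersection rule are assumptions, not Mable's internals. Tag each test, then run only the tests whose labels intersect the requested set.

```typescript
// Run only the tests whose labels intersect the requested set.
interface LabeledTest {
  name: string;
  labels: string[];
  run: () => Promise<void>;
}

async function runSubset(tests: LabeledTest[], wanted: string[]): Promise<void> {
  const selected = tests.filter(t => t.labels.some(l => wanted.includes(l)));
  console.log(`Running ${selected.length} of ${tests.length} tests`);
  for (const t of selected) {
    await t.run();
  }
}

// e.g. after a small change to the checkout form:
//   await runSubset(allTests, ["checkout", "smoke"]);
```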
Running Tests Locally with Mable
You can run tests locally using the Mable command-line interface, without needing to talk to the cloud or to Mable. It gives you the option to execute tests against your local environment or a publicly available site, without running anything in the cloud. This is a great way to test changes locally.
Exactly. Another question here: will I be able to run the tests locally without any requirement to talk to the cloud and/or Mable? Yeah, actually, as I mentioned during my talk, we have the Mable command-line interface as well as our CI runner. The command-line interface gives you the option to execute tests locally, either against your local environment or against a publicly available site, and it does not require running anything in the cloud. It's a really great way to work. I've actually used it personally: we were making some accessibility changes to our site, and I tested against my local development branch without needing to go into the app or into the cloud.
Reducing Maintenance Cost and Automating Testing
Reducing maintenance cost is a core challenge in the testing space. Building intelligence into the pipeline, capturing test intent, and ensuring robustness and resilience are key. Mable uses its own product to test production against development and development against production. Low-code solutions like Mable's command-line interface and CI runner help automate testing in earlier stages of development, shortening feedback cycles.
That's great to hear. So the next one is from Kacper. You've mentioned that using Mable, you can reduce the maintenance cost, but actually, public opinion is a bit different on that topic, even judging by the slide from the previous presentation, the one from QLAB, I think. What's your take on that? How do you make sure the maintenance cost stays low while using Mable? Sorry, can you repeat that last sentence? What's your take on that? How do you make sure the maintenance cost stays low while using Mable?
I think that's a great question. It's one of those core challenges within the testing space: as you continue to try to move faster and optimize these processes, how does QA keep pace with those changes as well? Here at Mable, this is really where building intelligence into the pipeline comes in, when we talk about machine learning, artificial intelligence, and these concepts of auto healing. It's certainly a journey as well; I don't think anyone's doing it perfectly. But we really focus on capturing your intent as you're creating those tests, and on making sure that they're robust and resilient as your application changes. The idea, as I talked about in my talk, is that if you make a small change to your UI, your test shouldn't break. That's where gaining a better understanding of the attributes that are specific to your application comes in, along with a better understanding of what it really means to be in the correct state and interacting with the correct element. So as you continue making those changes, we're able to keep up with that evolution as well.
One curiosity that I have myself, from talking to people who develop products that are used for software development: do people inside Mable use Mable to test the products that you build? We do. That is a great question. We call it Mable on Mable. We have our own workspace within Mable that we use to test production against development and development against production, and we're running those tests every day. We're actually one of Mable's biggest customers in that way, because we use it all across our pipeline to test our own product, which is really cool to be able to do. Yeah, I think it's nice, because then you are dogfooding your own product and you feel the pains of your own users. I find it super interesting; I've worked in companies where I was able to do that, and I find it super, super cool.
Another question that we have here is: how can a low-code solution help automate testing in earlier stages of development? Yeah, sure, I'll try to keep it quick. Here at Mable, we talk often about the importance of shifting testing left and shortening the feedback cycles early in the development process. As I mentioned earlier, we have a number of different tools to help you do that. The first is our command-line interface, which gives you the ability to run any Mable tests locally for rapid feedback during development. We also give you the option, with our CI runner, to run those tests against preview or ephemeral environments during the build process. That's where you can also use labels to make sure you're targeting the subset of your application that's relevant to your PR, so you can get that feedback early in the development lifecycle before you even reach your main branch. Awesome. Juliette, it was wonderful having you here with us. Thank you very much. Thank you so much.