So basically the question is: what happens if my website is still loading? What happens if my element hasn't finished rendering yet, or if my user interface is still changing? Am I really taking the right snapshot at the right time? We really need to make sure we do exactly that. So making sure our application is ready to be screenshotted, because otherwise it might cause flakiness.
And the solution is rather simple, because it's the same solution you would use for functional testing too. Use your assertions consciously and use them dynamically, not with fixed waiting times. Wait for consistent snapshots. Wait until all loading has completed and all the appropriate rendering or UI changes have been made before you create the snapshot. And I know I'm really annoying in this regard, but it should be a general best practice not to use fixed waiting times, but to really wait until everything is properly done.
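To make this a bit more concrete, here is a minimal Cypress sketch of what I mean by waiting dynamically. The route and the selectors are assumptions for illustration, and cy.screenshot stands in for whatever snapshot command your visual testing agent, for example the Visual Regression Tracker agent, provides.

```typescript
// Minimal sketch: let intercepts and assertions decide when the UI is
// ready, instead of a fixed waiting time like cy.wait(3000).
// Route and selectors are illustrative assumptions.
describe('dashboard visual test', () => {
  it('takes the snapshot only once everything is rendered', () => {
    // Watch the data request the page depends on.
    cy.intercept('GET', '/api/dashboard').as('dashboardData');

    cy.visit('/dashboard');

    // Wait dynamically: the request has finished ...
    cy.wait('@dashboardData');
    // ... the UI reflects it, and no loading indicator is left.
    cy.get('[data-testid="dashboard-chart"]').should('be.visible');
    cy.get('.loading-spinner').should('not.exist');

    // Only now create the visual snapshot (replace with the snapshot
    // command of your visual testing tool).
    cy.screenshot('dashboard-ready');
  });
});
```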
Another point, last but not least, would be false negatives. And I think they are dangerous, because they make it look like your test is failing because something's broken, but your test fails without any error being present. This can especially be the case for natural changes, which are not erroneous and cannot be prevented. Be it, again, a time display which you cannot influence from the client side. Or, my favorite example, which caused me some nightmares before. It's this one: an image in a login screen, taken from the Shopware 6 administration UI, which is basically the UI for an online shop. I guess it's still the case at the moment. It looks fairly harmless, but this image here depends on the time. So there will be a different image depending on the time of day, and it's randomly chosen from an image pool. So even at the same time, it could be a different image, thus causing all those notifications that something changed in the application. We know that this change is natural, but we don't want to be notified about it. Again, a false negative.
The solution for this would be making the test ignore those changes, maybe by using a pixel threshold if it's a rendering difference, by blurring, or even by ignoring areas or elements. You can configure this in the Visual Regression Tracker service, or in the codebase if Visual Regression Tracker alone is not enough to help you there. In this regard, I use my own custom command, where I take the image and set it to a fixed background image which is always the same. But we need to be really careful when it comes to such interferences, because that's exactly what we're doing: we interfere with the app through the test. So if you do this, write a separate test to make sure that, for example, the image or the image selection process is really working, so you don't hide an error just because you interfere with the application in the test. And document it, so that other developers looking at the test know that you are doing such things here.
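As a rough illustration of such a custom command: this is only a sketch, and the command name, the selector and the fixture path are my own assumptions, not the exact code from the talk.

```typescript
// Sketch of a custom command that replaces the time-/random-dependent
// login image with a fixed fixture image before the snapshot is taken.
// Command name, selector and fixture path are illustrative assumptions.

declare global {
  namespace Cypress {
    interface Chainable {
      /** Swap the randomized login image for a fixed, always identical one. */
      useStaticLoginImage(): Chainable<void>;
    }
  }
}

Cypress.Commands.add('useStaticLoginImage', () => {
  // Hypothetical selector for the login image in the administration UI.
  cy.get('.sw-login__image').invoke(
    'css',
    'background-image',
    'url("/fixtures/static-login-image.png")' // always the same image
  );
});

export {};
```

In the spec you would call cy.useStaticLoginImage() right before taking the snapshot, and, as said above, keep a separate test that checks the real image selection so the interference does not hide an actual bug.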
Okay, so this is about visual testing best practices basically, or pitfalls I encountered. But if you want to learn more about best practices not only limited to visual testing, please take a look at Marco's talk about writing good tests for UI applications, because it covers the general area and not only visual testing. And if you didn't have the chance to see this talk yet, please check out the recording later on, it's really worth it. Together with his talk and mine, we will be able to have wonderful tests. So our tests are now detectives in this regard, maybe Cypress, or it could be anything else: Sherlock Cypress, Sherlock Playwright, Sherlock Selenium or WebDriver, whatever you use. Because we make them a bit more like the way we humans do testing: not only taking a look at the things we describe, but also looking a little bit outside of that. And this really can be a lifesaver, because it prevents errors caused by side effects you might not be aware of.