Go Find What We May Have Missed!

Coming into software with an exploratory testing mindset is like coming to a multi-layer canvas with lots of information and an open-ended task: find what we may have missed! This is the assignment for all of us on software teams in our quest for quality.

Framing the search for where our system falls short of expectations is easier when we are able to see the software from its user's perspective. However, useful tests aren't just a collection of end-to-end tests we automate; great tests to leave behind decompose the testing problem differently. In this talk, we learn about using architecture as a filter in decomposing tests and look at an example of taking control over the API responses to test a React frontend.

Users don't know or care whether the problem is in the frontend or in the services your team provided when the software fails to meet their expectations, but you do. Granularity of feedback matters. Recognizing the same problems in an incomplete scope, whether in half-done features or in only the frontend or the APIs, is a skill set the software industry needs to build.

This talk was presented at TestJS Summit 2021; check out the latest edition of this JavaScript conference.

FAQ

Maaret Pyhäjärvi is a principal test engineer at Vaisala with a 25-year career in software testing.

At Vaisala, Maaret Pyhäjärvi works as a principal test engineer, acting as quality control for the testing that happens within various teams.

Maaret Pyhäjärvi usually spends six to twelve months with each team at Vaisala, aiming to improve their testing processes.

Maaret mentions two kinds of testing: artifact creation (automation or checklists for repeatable tests) and testing as performance (exploratory, improvisational testing).

Maaret believes both are important because artifact creation provides specification, feedback, regression, and granularity, while testing as performance offers guidance and serendipity.

Maaret approaches API testing by using code-oriented tools like Requests in Python or SuperTest in JavaScript, depending on the team's technology stack (see the sketch after this FAQ).

Maaret uses exploratory testing continuously, integrating automation as part of documenting her exploratory process.

Ensemble testing is a method where a group shares a single computer to collaboratively test and learn, facilitated by Maaret to ensure effective exploratory testing.

Maaret identifies relevant dependencies by reviewing architecture diagrams, understanding the operating environment, and ensuring everything runs in an end-to-end test environment.

Maaret handles test data by mocking server responses, setting up stubs, and editing data specifics to create various test scenarios.
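
To make that code-oriented style concrete, below is a minimal sketch of an API test using SuperTest with Jest. The `app` module, the `/api/measurements` endpoint, and the response fields are hypothetical stand-ins, not details taken from the talk.

```typescript
// api.test.ts -- a sketch of code-oriented API testing with SuperTest.
// Assumes an Express-style application is exported from ./app;
// the endpoint and response shape below are hypothetical.
import request from 'supertest';
import app from './app';

describe('measurements API', () => {
  it('returns a JSON list of measurements', async () => {
    const res = await request(app)
      .get('/api/measurements')
      .expect('Content-Type', /json/)
      .expect(200);

    // Granular assertions give granular feedback: a failure points
    // at the exact field that broke, not just at a failing page.
    expect(Array.isArray(res.body)).toBe(true);
    for (const item of res.body) {
      expect(item).toHaveProperty('id');
      expect(item).toHaveProperty('value');
    }
  });
});
```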

Maaret Pyhäjärvi
27 min
18 Nov, 2021

Video Summary and Transcription

Maaret Pyhäjärvi, a principal test engineer at Vaisala, emphasizes the importance of balancing different types of testing to build better teams. Testing the application with different locations reveals potential issues in its behavior. The speaker highlights the significance of testing integrations and dependencies, including libraries and operating systems, and prefers code-oriented tools like Requests in Python for API testing. All of the testing they perform is framed as exploratory testing, and they encourage others to participate in it as well.

1. Introduction to Testing at Vaisala

Short description:

Hi, my name is Maaret Pyhäjärvi, and for the last year and a half out of my 25-year career, I've worked at Vaisala as a principal test engineer. I go and assess the results that we're able to provide by staying within the team as one of the team members for a while. I repeat this from team to team, usually spending six to even twelve months within one team, with the idea of leaving things better after I'm gone and helping the teams grow in the way that they do testing.

Hi, my name is Maaret Pyhäjärvi, and for the last year and a half out of my 25-year career, I've worked at Vaisala as a principal test engineer. A nice way for me to frame what I do for my work is this idea that I'm quality control for the testing that is happening in our teams. I go and assess the results that we're able to provide by staying within the team as one of the team members for a while.

A lot of times I frame my assignment as: go find at least some of the things others have missed. I repeat this from team to team, usually spending six to even twelve months within one team, with the idea of leaving things better after I'm gone and helping the teams grow in the way that they do testing. I have done this many times over my career for various products and teams, one of them being this particular example here on the slide, where I asked a developer of an API-based testing tool for permission to test their application and use it as training material in some of the conference talks that I've done. They then gave a podcast interview saying I basically destroyed it in like an hour and a half. This is a common experience that developers tell of me, usually with a smile, hopefully at least, on their face.

And it is usually also related to the fact that by that time I have had the conversations on, you know, I didn't destroy the application. The only thing I might have destroyed is the illusion that was never the reality. So you might be very proud of your application. You might already be doing a good job. And there still might be things that you're missing. And your customers might not be telling you.

2. Testing: Artifact Creation and Performance

Short description:

In testing, there are two kinds: artifact creation and performance. Artifact creation provides specification, feedback, regression, and granularity. Performance testing offers guidance, serendipity, and helps discover unexpected problems. To build better teams, a balance of different test types is needed, including faking components and testing with real integrations. The system should be responsive, easy to use, and secure. An example application demonstrates the use of front-end and back-end testing with mock responses.

So in all of this work that I've done, I've kind of summed it up as a recipe for better teams. How do we go about finding the things we're missing? We start with two kinds of testing. There's the testing that is framed as artifact creation, whether it creates automation or checklists for repeatable tests for later. And then we have the other kind, testing as a performance, kind of like improvisational theater, where you look at the application. The application sort of speaks to you. It's like your external imagination. It makes you more creative. And whatever you learn, you can then turn into the artifact-creation part of the testing. You need both of these sides. They give you very different things. The artifact-creation style gives you specification, feedback, regression, and my absolute favorite, granularity: knowing, based on the results that you're getting, what was the change that broke things and, from the logs, what is now broken, without having to spend multiple hours or even days analyzing your results before getting to the actual fixes. These are things that you get from the artifact style of testing.

The performance style gives you somewhat vaguer things in many ways, but it also gives you guidance. You know the direction: are we going in a better direction? Is this good? Is there still more feedback, more conversations to be had? Is there something where we need to build our understanding and improve the models? And my, again, absolute favorite: serendipity, the lucky accident, meaning that some of the problems we need to find are interesting combinations of all kinds of things we didn't think of, and we just need to give it time. There's a saying, a quote by Arnold Palmer, the famous golfer, that it's not that he's lucky, it's that he has been practicing. That's the general idea with this style of testing. So, with the two sides framed, we need something in the middle for the better teams.

And the thing we need in the middle is, of course, different kinds of tests. Whether it comes from the point of view of creating artifacts, or from the point of view of performing testing and thinking about what kinds of things we might still be missing, we probably will test against the different levels of interfaces available in the system, and try making a balanced set of all the different shapes of tests, be they small, medium, large; unit, service, UI; or unit, integration, system, end-to-end, whichever words you end up wanting to use. You probably also won't have just these different kinds of tests where you're basically just growing the scope. In those better teams you would probably also like to have some ways of faking, whether you call them mocks, stubs, spies, or fakes: ways of faking either the service responses, the data, or any of the components that you want to leave out of the testing scenario, so that you can have focused feedback. But you also want to test with the real integrations, again because of serendipity: you are most likely going to see something different there, and that is what your customer will end up using anyway, not the mocks that you have created. You'll probably have a foundation of functionality, but also the three other key things. The system needs to respond to the customer's requests fast enough. It needs to be easy enough to figure out so that you know what to do with the application. And the disfavored users should have mechanisms keeping them away from your system, so that whatever business purpose the system serves, the information is also safe from other people causing you harm. So this is the frame that I think we need for the better teams.
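
As one concrete illustration of that faking, here is a minimal sketch that stubs the global `fetch` in a Jest test, so a piece of front-end logic can be exercised without the real service. The `loadTemperatures` function and its endpoint are hypothetical, not from the talk.

```typescript
// A sketch of faking a service response for focused feedback.
// `loadTemperatures` is a hypothetical unit under test that calls fetch.
async function loadTemperatures(): Promise<number[]> {
  const res = await fetch('/api/temperatures');
  const data: { value: number }[] = await res.json();
  return data.map((d) => d.value);
}

describe('loadTemperatures against a faked service', () => {
  afterEach(() => jest.restoreAllMocks());

  it('maps the stubbed response to plain values', async () => {
    // Replace the real network call with a canned response, so the
    // test gives focused feedback on the mapping logic alone.
    jest.spyOn(globalThis, 'fetch').mockResolvedValue(
      new Response(JSON.stringify([{ value: 21.5 }, { value: -3.0 }]))
    );

    await expect(loadTemperatures()).resolves.toEqual([21.5, -3.0]);
  });
});
```

The trade-off the talk points at applies here too: this test is fast and precise, but only a run against the real integration can catch the surprises the fake hides.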

And I wanted to give you a small example of what applying something like this typically looks like on an application. I took a small example application which was created basically for showing this idea: you can have a front-end, you can have a back-end, and you can mock the back-end responses. So there's a very simple React app, and the possibility of choosing whether you are working against the actual server or the mock server is already there in the user interface.
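
The talk doesn't spell out the code behind that toggle, but a minimal sketch of the idea, with every name below a hypothetical illustration rather than the example app's actual code, could be a base-URL switch that all requests go through:

```typescript
// A sketch of switching a front-end between real and mock back-ends.
// REAL_URL, MOCK_URL, and fetchForecast are hypothetical illustrations.
const REAL_URL = 'https://api.example.com';
const MOCK_URL = 'http://localhost:9090'; // e.g. a locally running mock server

// In the example app the choice lives in the UI; a plain flag stands in here.
let useMockServer = false;

export function setUseMockServer(on: boolean): void {
  useMockServer = on;
}

export async function fetchForecast(location: string): Promise<unknown> {
  // Every request goes through the same switch, so flipping the toggle
  // redirects the whole app to controlled, editable mock responses.
  const base = useMockServer ? MOCK_URL : REAL_URL;
  const res = await fetch(`${base}/forecast?location=${encodeURIComponent(location)}`);
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  return res.json();
}
```

Taking control of the responses this way is what makes it possible to edit the data specifics and set up the various test scenarios mentioned earlier.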

Q&A