The Life-Changing Magic of Tidying Up Your Test Warnings


Even though we write tests for our web applications, the reality is that bugs still happen. Fortunately, many of these are easily preventable by paying more attention to the warnings from our apps. However, it's often easy to sweep them under the rug and never come back to them until we find a bug in production, which leads to hundreds if not thousands of warnings appearing in our test output. This talk is about how to prevent this situation and how to get out of it.

This talk has been presented at TestJS Summit - January, 2021, check out the latest edition of this JavaScript Conference.

FAQ

What are test warnings?
Test warnings are messages created by developers of third-party libraries or other technologies that give clues about potential issues such as bugs, performance problems, and security concerns.

Why do test warnings accumulate?
Test warnings accumulate because they are easy to ignore, do not cause continuous integration (CI) to fail, and are often not prioritized due to competing product and technical tasks.

What are the risks of ignoring test warnings?
Ignoring test warnings can lead to bugs, performance issues, and security vulnerabilities. It can also hurt developer experience by making it harder to debug issues and maintain clean code.

How can automation help manage test warnings?
Automation can help manage test warnings by validating known warning patterns, allowing fail-safes for unavoidable warnings, failing builds when unknown warnings appear, and keeping the warning registry up-to-date.

What is jsreporter log validator?
Jsreporter log validator is a small library that helps you add rules for your test warnings, preventing new warnings from being introduced. It includes features like pattern validation, fail-safes for unavoidable warnings, and build failures for unknown warnings.

How should fixing test warnings be prioritized?
Fixing test warnings should be prioritized based on risk and effort. Warnings that could lead to potential bugs should be addressed first, while less critical ones like deprecation warnings can be deprioritized.

Why is establishing an anti-warning culture important?
Establishing an anti-warning culture is crucial because it encourages developers to address warnings early, preventing the accumulation of technical debt and improving overall code quality and developer experience.

How can teams organize and distribute the work of fixing warnings?
Organizing and distributing the work can be done through simple analysis and distribution. An 80-20 analysis can help identify the most problematic files, allowing teams to prioritize and tackle warnings piece by piece.
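An 80-20 analysis like this can be sketched in a few lines of JavaScript: tally warnings per source file from captured test output and surface the noisiest files first. The input format and function name here are hypothetical, for illustration only.

```javascript
// Illustrative sketch: count warnings per source file so the noisiest
// files can be fixed first. Assumes each captured warning line mentions
// the file that caused it (a hypothetical format, not a standard one).
function topOffenders(warningLines, limit = 5) {
  const counts = {};
  for (const line of warningLines) {
    // Look for a JS/TS source path such as "src/Books.jsx" in the text.
    const match = line.match(/\S+\.(?:jsx?|tsx?)/);
    if (!match) continue;
    counts[match[0]] = (counts[match[0]] || 0) + 1;
  }
  // Sort descending by count and keep the top offenders.
  return Object.entries(counts)
    .sort((a, b) => b[1] - a[1])
    .slice(0, limit);
}
```

Feeding this the warning lines from a test run gives a ranked list of files, which makes it easy to split the cleanup into small, assignable chunks.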

What is an example of a bug caused by an ignored warning?
An example is a small application with a book inventory where sorting fails for one attribute due to a React warning about unique keys. This demonstrates how ignored warnings can lead to bugs.

What should you do if you already have many test warnings?
If you already have a lot of test warnings, prevent new ones from being added by using automation tools, and systematically address the existing ones through prioritization and distribution of work.

Victor Cordova
8 min
15 Jun, 2021


Video Summary and Transcription

Today's Talk focuses on preventing test warnings in software development. Test warnings are often ignored and can lead to bugs, performance issues, and security concerns. The speaker introduces a library called jsreporter log validator that automates the process of adding rules to prevent new warnings and fixing existing ones. The library provides a summary of expected behavior, failures, and actions to take. Overall, the Talk emphasizes the importance of paying attention to test warnings and using automation to improve developer experience and prevent issues in large and legacy applications.

1. Introduction to Test Warnings

Short description:

Today we're going to be talking about preventing test warnings, with two goals in mind: preventing bugs and improving developer experience. Test warnings are messages created by developers to avoid bugs, performance issues, security concerns, and more. Warnings tend to accumulate because they're easy to ignore, don't make CI fail, and are often not a priority. Ignoring warnings can have consequences, as I'll demonstrate with a small application example.

Hello, everyone, and thank you for joining The Life-Changing Magic of Tidying Up Your Test Warnings. Today we're going to be talking about preventing test warnings, with two goals in mind. The first one is to prevent bugs. This is the most important one, and the second one is to improve developer experience.

A little bit about myself. My name is Victor Cordova. I work at TravelPerk, a Barcelona start-up. We're building the world's best business travel platform. If you're interested, please feel free to join us.

All right. So let's start by asking what are test warnings for? Test warnings are essentially messages created by developers of third-party libraries or other technologies that give us clues about what to avoid. For example, we want to definitely avoid bugs. We want to avoid performance issues. We want to avoid security concerns, amongst many others. This is just a very small sample. We also have accessibility issues, deprecations, and so on.

Now, the thing about warnings is that they tend to accumulate with time, and it's worth asking why this is the case. The first reason is that they're pretty easy to ignore. Essentially, test warnings are just text being generated either on your local machine or on another server. This text, by itself, doesn't do anything. The second reason is that they don't make your CI fail. As developers, we all know that we pay much more attention to the red color that pops up whenever something fails. And finally, they usually are not a priority. We live in a complex world. We have product tasks, technical tasks, so warnings can easily go to the end of this list.
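One common way to attack the "warnings don't make CI fail" problem is to patch the console in a test setup file so unexpected warnings throw. The sketch below shows the general pattern in plain JavaScript; the function name and the allow-list mechanism are my own, not tied to any specific library.

```javascript
// Minimal sketch: make unexpected warnings fail loudly instead of
// scrolling by. In a Jest project this would typically live in a setup
// file and be applied to the global console.
function failOnWarnings(consoleLike, allowedPatterns = []) {
  const original = consoleLike.warn.bind(consoleLike);
  consoleLike.warn = (...args) => {
    const message = args.join(' ');
    // Let explicitly allowed (known, unavoidable) warnings through...
    if (allowedPatterns.some((pattern) => pattern.test(message))) {
      original(...args);
      return;
    }
    // ...and turn everything else into a hard failure, which makes CI red.
    throw new Error(`Unexpected warning: ${message}`);
  };
}
```

With this in place, a new warning shows up as a failing test rather than as ignorable text, which is exactly the red signal developers actually react to.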

Now, it's important to ask why do we even care, honestly. I ask myself that. So what happens if we ignore warnings? I'm going to give you a very small sample of what can happen. This is a small application with a book inventory where we have the title, the registration date, and the condition of the book. So let's imagine I'm going to fill this right now.

2. Preventing Test Warnings

Short description:

I'm going to put fair, good, and terrible. Now, why is this concerning? Because this might very easily be your output in the test run. React will give you a warning that says, every element must have a unique key. That's why we need to pay attention to these warnings. This is the developer experience side of things. If you're trying to do TDD, if you're trying to debug an issue, nobody wants to see this. It's quite annoying. The developer experience is affected. It's a very common issue in large applications, legacy applications. But we're engineers. So let's use some automation. I created this very small library called jsreporter log validator. It allows you to add different rules to your warnings so that your team doesn't create new ones. You can add validations for certain patterns. Sometimes they have a dynamic part. You can put a maximum. You can also have a fail-safe for unavoidable warnings. We sometimes install third-party libraries that generate messages we don't want. But we cannot do anything about it at times. We also have an option to fail if an unknown warning is found.

I'm going to put fair, good, and terrible. So now I'm going to try to use this sorting functionality. And we'll see that everything but the condition is sorted.

Now, why is this concerning? Because this might very easily be your output in the test run. So everything is green, which is really not a reflection of what's happening. So React, which is just an example, will give you a warning that says, every element must have a unique key. That's why we need to pay attention to these warnings. This is the developer experience side of things. If you're trying to do TDD, if you're trying to debug an issue, nobody wants to see this. It's quite annoying. It's difficult to find important stuff. So the developer experience is affected.
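To make the unique-key warning concrete, here is a tiny approximation of the kind of check React's development mode performs on list children. The function and the exact message wording are illustrative, not React's actual implementation.

```javascript
// Illustrative only: an approximation of React's dev-mode list check
// that produces "unique key" warnings. Each child is assumed to carry
// an optional `key` property, as React elements in a list do.
function findKeyWarnings(children) {
  const seen = new Set();
  const warnings = [];
  for (const child of children) {
    if (child.key == null) {
      warnings.push('Each child in a list should have a unique "key" prop.');
    } else if (seen.has(child.key)) {
      warnings.push(`Encountered two children with the same key, "${child.key}".`);
    } else {
      seen.add(child.key);
    }
  }
  return warnings;
}
```

Missing or duplicated keys are exactly what breaks reconciliation when the list is re-sorted, which is why the condition column in the example fails to sort while the tests stay green.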

Now, we can ask ourselves what do we do about it? It's a very common issue in large applications, legacy applications. And it feels sometimes like we really can't do anything. But we're engineers. So let's use some automation. So I created this very small library. It's called jsreporter log validator. And it allows you to add these different rules to your warnings so that your team doesn't create new ones. The first feature it has is that you can add validations for certain patterns. As you will see, it's not a single string for each one of the patterns. Sometimes they have a dynamic part. You can put a maximum. So you are basically saying, okay, we know we have this number of warnings of this type. But I don't want any more. The second one is that you can have a fail-safe for unavoidable warnings. We sometimes install third-party libraries that generate messages we don't want. But we cannot do anything about it at times. So we can just ignore it for now. We also have an option to fail if an unknown warning is found.
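The rule model described here can be sketched in a few lines: known warning patterns get a maximum allowed count, unavoidable third-party warnings are explicitly ignored, and any unknown warning fails the build. This is a hand-written illustration of that model; the actual API of jsreporter log validator may differ.

```javascript
// Sketch of the rule model from the talk (names and shapes are
// illustrative, not the library's actual API):
// - each known pattern has a `max` allowed count
// - `ignorePatterns` is the fail-safe for unavoidable warnings
// - `failOnUnknown` rejects any warning not covered by a rule
function validateWarnings(warnings, config) {
  const counts = new Map();
  const failures = [];
  for (const warning of warnings) {
    const rule = config.rules.find((r) => r.pattern.test(warning));
    if (rule) {
      counts.set(rule, (counts.get(rule) || 0) + 1);
      continue;
    }
    if (config.ignorePatterns.some((p) => p.test(warning))) continue; // unavoidable
    if (config.failOnUnknown) failures.push(`Unknown warning: ${warning}`);
  }
  // Fail if any known pattern exceeded its allowed maximum.
  for (const [rule, count] of counts) {
    if (count > rule.max) {
      failures.push(`${rule.pattern} appeared ${count} times (max ${rule.max})`);
    }
  }
  return failures;
}
```

The key property is the ratchet: existing warnings are tolerated up to their recorded count, but the number can only go down, so the team stops accumulating new ones.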