We were beneficiaries of a different initiative where we had a single owner per file, which ensures responsibility and accountability for every file in our codebase. So the first MVP version of our tool was literally a very year-2000-style webpage where we just break down all the ESLint failures by team, by lint rule, and by team and lint rule, so that you can visualize for your team how many lint failures you're responsible for and where they are in your codebase. It updated nightly. It actually ran off someone's machine, on his desk, which unfortunately got shut down one time during COVID, and we had to get someone to turn it back on.
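To make the shape of that nightly breakdown concrete, here is a minimal sketch, not their actual tool: it uses ESLint's real Node API, but the `owners` map and `ownerOf` helper are hypothetical stand-ins for the single-owner-per-file mapping.

```ts
// Minimal sketch of the nightly per-team, per-rule breakdown.
// `owners`/`ownerOf` are hypothetical; in practice the mapping came from
// the single-owner-per-file initiative, not an in-memory record.
import { ESLint } from "eslint";

const owners: Record<string, string> = {}; // filePath -> owning team (stand-in)
const ownerOf = (filePath: string): string => owners[filePath] ?? "unowned";

async function main(): Promise<void> {
  const eslint = new ESLint();
  const results = await eslint.lintFiles(["."]);

  // Count failures keyed by "team :: rule" so the page can slice either way.
  const byTeamAndRule = new Map<string, number>();
  for (const file of results) {
    const team = ownerOf(file.filePath);
    for (const msg of file.messages) {
      const key = `${team} :: ${msg.ruleId ?? "parse-error"}`;
      byTeamAndRule.set(key, (byTeamAndRule.get(key) ?? 0) + 1);
    }
  }

  console.table([...byTeamAndRule.entries()].sort((a, b) => b[1] - a[1]));
}

main();
```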
We had warnings, errors, and disables counted in each contract. So this is a view of the breakdown by lint rule, where we show you the file name, and if you click the file name, it brings you to the GitHub page. It's just that easy to find out what your team's lint failures are and where they live. That way you understand how much you're committing to when you're trying to fix something.

We use a tool called Checkup, which is essentially a Node-based task runner, where every night it would run these tasks. It comes with some built-in tasks for JavaScript and ESLint, so we can run a plugin that lints your entire codebase and gives you a structured format. This is an example run of Checkup. It gives you a SARIF file, which is a structured format, and this is an example of that SARIF file, which we parsed in our tool to show you that nice diagram. Oh, yeah, and then we updated our tooling to the company standard for UIs, I believe this is Docusaurus, just so it's easier to visualize.

And every week we give each team a red, green, or yellow scorecard: green if you've decreased your lint failures or you're at zero, yellow if there's no decrease or increase, and red if there's an increase in lint failures. This is very important because what we've seen is that a lot of people regress, not through their own fault or any particularly nefarious means, but just stuff happens, and this makes it easy for us to see these regressions and fix them right away. This is an example of a scorecard we would send out. It's a manual email, and we shout out two to three engineers per email who fixed some lint failures, to really give them recognition.

But really the learning we found was that getting to zero was very, very hard. This is the graph that we had through the journey, where you can see the last mile was very hard to get through; it took almost half the time just to finish the last couple hundred. But sustainability matters, and that last mile really matters. This is an example of a file with lots of lint failures. It's not really nice to work on this file compared to a file without lint failures. A file is much easier to understand when there's no hidden work left for you to do; everything is clear-cut.
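For reference, the parsing step they describe could look roughly like this. The field names follow the real SARIF 2.1.0 spec, but the filename and the exact shape of Checkup's report are assumptions here, not confirmed details of their tool.

```ts
// Minimal sketch: tally lint results per rule from a SARIF file like the
// one Checkup emits. "checkup-report.sarif" is a hypothetical filename.
import { readFileSync } from "node:fs";

interface SarifLog {
  runs: {
    results?: {
      ruleId?: string;
      level?: "error" | "warning" | "note" | "none";
    }[];
  }[];
}

const log: SarifLog = JSON.parse(readFileSync("checkup-report.sarif", "utf8"));

const perRule = new Map<string, number>();
for (const run of log.runs) {
  for (const result of run.results ?? []) {
    const rule = result.ruleId ?? "unknown-rule";
    perRule.set(rule, (perRule.get(rule) ?? 0) + 1);
  }
}

// Worst offenders first, like the per-rule breakdown view in the talk.
console.table([...perRule.entries()].sort((a, b) => b[1] - a[1]));
```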
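The weekly scorecard rule itself is simple enough to state directly in code; this is just the color logic described above, with the function name made up for illustration.

```ts
// Weekly scorecard color, as described in the talk:
// green = decreased or at zero, yellow = flat, red = increased.
type ScorecardColor = "green" | "yellow" | "red";

function scorecardColor(lastWeek: number, thisWeek: number): ScorecardColor {
  if (thisWeek === 0 || thisWeek < lastWeek) return "green";
  if (thisWeek === lastWeek) return "yellow";
  return "red"; // a regression to surface and fix right away
}

// e.g. scorecardColor(12, 9) === "green"; scorecardColor(12, 14) === "red"
```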
And so what we learned: focusing on the individual really helped, giving personal shout-outs, and really trying to keep regressions minimal. We cleaned up over 6,000 lint failures; it took a little over a year, and the whole community contributed, with 55 unique contributors to this effort. At the end of it, in our quarterly surveys, we saw a 30% increase in how people perceive their code quality after this initiative. And we still added new lint rules while we were running this initiative, with over 80 lint rules added to our config and, again, around 45 contributors making those lint changes. And that's it.