One of the questions I get very often as a quality coach is “how can we measure quality?” If I were to reply right off the bat, my answer would be “No clue!”, as I find it very hard to quantify something as abstract and ever-changing as quality. Nevertheless, I think the premise of the question is useful, as I interpret it to be “how do we know we are doing a good job?”
Over time I have noticed that certain practices, processes, or habits seem to serve as gauges pointing to whether things are going right or wrong. I call them quality indicators, and here are some that I find useful.
Number of reported bugs that are not “real” bugs
Some bugs reported by customers are as “real” as they get: orders gone missing, items deleted from their lists, the system down, users unable to log in, and so on. Once they are reported, everybody agrees that they need to be fixed. But there are also the bugs that start the “bug or feature?” debate. These are the bugs that users report simply because they didn’t understand how to use your software. These “fake” bugs are a good indication that you are implementing something different from what your users expect. They hint at problems in user experience or help documentation. If you observe a surge in such bugs, analyzing them and finding patterns in their content might help reduce them significantly.
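To make this concrete, here is a minimal sketch of how the share of “fake” bugs could be computed from a bug tracker export. The data shape and resolution labels are assumptions for illustration, not the fields of any particular tracker.

```python
from collections import Counter

# Hypothetical export of reported bugs; the resolution labels are invented here.
reported_bugs = [
    {"id": 101, "resolution": "fixed"},
    {"id": 102, "resolution": "not-a-bug"},          # user misunderstood the feature
    {"id": 103, "resolution": "works-as-designed"},
    {"id": 104, "resolution": "fixed"},
]

# Resolutions that count as "fake" bugs for this indicator.
NOT_REAL = {"not-a-bug", "works-as-designed"}

counts = Counter(bug["resolution"] for bug in reported_bugs)
fake = sum(counts[label] for label in NOT_REAL)
total = len(reported_bugs)

print(f"'Fake' bug share: {fake}/{total} = {fake / total:.0%}")
```

Tracking this share release over release, and reading through the “fake” reports themselves, is where the patterns mentioned above tend to show up.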
Time interval between customer reporting bugs and developer committing a fix
Looking at the lifecycle of a reported bug can be an interesting exercise in software development, as well as a beneficial one. Think of all the ways a customer can report an issue. Are issues reported through one channel fixed faster than those reported through another? If so, why? How long does it take to decide that a change is actually required to address the issue? Once a change is decided on, how easy is it to assign it to someone to fix, or do you first need to track down the right set of people? How much time does it take to identify them? You see, even if you are doing continuous deployment, you might still lose time figuring out how to get started on a solution, and that is something to keep an eye on.
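As a rough sketch, assuming you can join a bug report’s timestamp with the timestamp of the commit that fixed it (the record shape and channel names below are hypothetical), the interval per reporting channel could be computed like this:

```python
from collections import defaultdict
from datetime import datetime
from statistics import median

# Hypothetical records joining a customer report to the commit that fixed it.
fixed_bugs = [
    {"channel": "support-email", "reported": "2024-03-01T09:00", "fix_committed": "2024-03-04T16:30"},
    {"channel": "in-app-form",   "reported": "2024-03-02T11:15", "fix_committed": "2024-03-03T10:00"},
    {"channel": "support-email", "reported": "2024-03-05T08:45", "fix_committed": "2024-03-12T14:00"},
]

hours_by_channel = defaultdict(list)
for bug in fixed_bugs:
    reported = datetime.fromisoformat(bug["reported"])
    fixed = datetime.fromisoformat(bug["fix_committed"])
    hours_by_channel[bug["channel"]].append((fixed - reported).total_seconds() / 3600)

for channel, hours in hours_by_channel.items():
    print(f"{channel}: median {median(hours):.1f}h from report to committed fix ({len(hours)} bugs)")
```

A large gap between channels, or a long tail within one channel, is the kind of signal that prompts the questions above about triage and finding the right people.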
Time it takes to fix automated tests
You solved the customer issue in 5 minutes, ran the unit and component integration tests successfully, and pushed your changes through the deployment pipeline. You are royalty and you deserve a cup of hot chocolate while you wait for the “Thank you!” call from the customer. But alas! 3 integration tests are failing, so you do the honourable thing and start fixing them. 5 hours later you are done! If you have ever had a similar experience, or heard someone in your team complain about one, it is a good idea to take the time to look at your automated tests and how easy they are to maintain.
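One hedged way to put a number on this, assuming your CI history can tell you when a test started failing on the main branch and when it next passed again (the data shape here is invented for illustration), is to measure how long each broken test stayed red:

```python
from datetime import datetime

# Hypothetical CI history: for each test, when it first went red on main
# and when it was green again after someone fixed it.
test_breakages = [
    {"test": "test_checkout_flow",  "first_failed": "2024-04-10T09:12", "fixed": "2024-04-10T14:40"},
    {"test": "test_invoice_totals", "first_failed": "2024-04-11T16:05", "fixed": "2024-04-12T10:20"},
]

for breakage in test_breakages:
    failed = datetime.fromisoformat(breakage["first_failed"])
    fixed = datetime.fromisoformat(breakage["fixed"])
    hours_red = (fixed - failed).total_seconds() / 3600
    print(f"{breakage['test']}: red for {hours_red:.1f} hours")
```

Tests that repeatedly stay red for hours after small changes are usually the ones worth refactoring for maintainability first.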
If anybody asked me for solid evidence that these indicators indeed improve the quality of a product, I would unfortunately have none to offer. Nevertheless, the time interval between a customer reporting a bug and a developer committing a fix can be used to signal changes in your lead time. The number of “fake” bugs and the time it takes to fix automated tests may have an impact on the measures of your software delivery performance as described in Accelerate: Building and Scaling High Performing Technology Organizations by Forsgren, Humble and Kim. (Send me a message for a copy if you are interested.)
Measuring quality is hard, if not impossible, but identifying and monitoring areas that can be improved is at least a good start.