“I don't care about your bug reports. Only the software matters,” said author and Google engineering director James Whittaker in his controversial STARWEST keynote. Whittaker’s presentation challenged a number of long-held beliefs in traditional software testing, including the idea that users won’t accept poor quality. “Users know software sucks. They don't care about quality. They don't want perfect software. They want us to fix the bugs,” said Whittaker, claiming that users are better than testers at testing. This point of view is completely contrary to what Capers Jones and Olivier Bonsignour say in their book, The Economics of Software Quality. In
Quality metrics: Defect tracking throughout the software lifecycle, we looked at why some experts feel that tracking defects is mandatory for high-quality code, and in turn, customer satisfaction. In this follow-up piece, we will look at the other side of the story, the argument against tracking defects.
Only the software matters
According to Whittaker, the only important artifact in software development is the code. “It’s the only artifact that is guaranteed to be up-to-date. Test plans go out of date. Why aren't we concentrating on the only thing that matters? Why aren't we doing all the work in the code?”
As a matter of fact, using test-driven development, developers are doing unit test automation directly in the code, using those tests as a form of documentation. If the documentation is the code itself, this guarantees that the “documentation” is up-to-date. Using continuous integration and automation, builds and deployments are getting faster and bugs are being found early, not by manual testers who fill out reports, but by the code itself.
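To make the tests-as-documentation idea concrete, here is a minimal sketch using Python’s standard unittest module. The function and names are purely illustrative, not from any real project; the point is that each test states a requirement in executable form, so the “documentation” fails visibly the moment behavior drifts.

```python
import unittest

# Hypothetical function under test (illustrative only).
def normalize_username(raw):
    """Trim surrounding whitespace and lowercase a username."""
    return raw.strip().lower()

class TestNormalizeUsername(unittest.TestCase):
    # Each test name and assertion documents one expected behavior.
    # Unlike a written spec, these cannot silently go out of date:
    # a behavior change makes the suite fail in the next CI build.
    def test_strips_surrounding_whitespace(self):
        self.assertEqual(normalize_username("  alice "), "alice")

    def test_lowercases_mixed_case(self):
        self.assertEqual(normalize_username("Alice"), "alice")

if __name__ == "__main__":
    unittest.main()
```

Run under continuous integration, a failing test here surfaces the regression immediately, without a tester filing a report.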
On top of that, bugs are not as costly as they once were. With the ability to quickly deliver fixes, the old rules where post-production bugs were catastrophically expensive no longer apply. Well, at least in certain industries.
Many defects never get fixed
Another argument Whittaker makes against defect tracking is the amount of time spent on documenting and recreating bugs that will never be fixed, stopping both testers and developers from making progress on the important code that will be released. “If you write a bug report and the bug doesn't get fixed, you have done damage. If it ain't gonna get fixed, don't report it,” he said emphatically.
Lisa Crispin makes this point as well in STAREAST: Agile testing and defect tracking. She writes, “I’m betting that most teams using a DTS [defect tracking system] have bugs in there that will never get fixed. The business always has to juggle priorities, and often prefers new features over fixing minor issues. Or there might be a plan to rewrite part of the system. If you use a DTS, it’s important to keep it useful and relevant. Face reality, and don’t log bugs that will never be fixed.”
Defect tracking is a poor way to communicate
Though using a defect tracking system may provide some benefits in documenting the bug, it’s usually much more efficient to have a conversation between the person who found the bug and the person who is responsible for fixing it.
Agile proponents often speak of “breaking down the silos” between development and test and find that by having the developers and testers working together, communication and collaboration are stronger and the defects are fixed more quickly without the overhead of tracking them.
Certainly, communicating only through a defect tracking tool often causes frustration and friction between developers, who are quick to write off a defect as a “user error,” and testers, who feel the developers don’t respect their assessment of an issue. If they communicate face-to-face, they are more likely not only to respect one another, but to understand one another more quickly and work toward a solution together.
Defect metrics can be misleading
I was once in an organization that rewarded teams with the fewest post-production bugs reported against their application. I remember being very surprised when an application my team had deployed was recognized for having no reported defects! Our excitement was short-lived. Upon further investigation, it was determined that no one was using the application.
This is an example of why we can’t judge quality solely on defect counts. Usage matters, and so does whether users can report defects at all. If there isn’t a clear way for users to give feedback or report bugs, they will most likely keep quiet. This doesn’t mean the code is high quality. It just means the users don’t know how to report problems, or don’t want to go to the trouble.
Defect metrics are detrimental if they’re used as a way of measuring a tester’s productivity or value to the team. When managers use defect metrics this way, testers will most likely report more insignificant bugs, which only frustrates the team and delays productive work.
So who’s right?
The decision of whether or not to track defects is going to depend a lot on your application and your organization. Like just about everything else in software development, there isn’t a “one size fits all” answer. If not defects, what are the right test metrics? Lisa Crispin answers in an expert response, “We want metrics with a good return on investment – collecting and reporting the metrics shouldn't cost more than the value they provide.”
Take a hard look at how your team operates and the test metrics you use. Are they adding value or unnecessary overhead? Are you using them to continually improve?
It’s a tough subject without easy answers. Start with educating yourself on what the experts are recommending, even when their advice seems contradictory. By understanding all viewpoints, you’ll best be able to decide which solution is a good fit for your organization.
For a complete resource on quality metrics, see Quality metrics: A guide to measuring software quality.
This was first published in November 2011