One of the best parts about going to a software test conference is hearing the latest, greatest thinking. What’s...
even better is when the experts seemingly disagree, because it provides an opportunity to explore two sides of an issue. Former ISTQB president Rex Black, in his opening keynote at STPCon 2011, talked about the importance of deriving metrics from defects, encouraging the audience to track defects throughout the development lifecycle. Other sessions at the same conference, including those from pundits Scott Barber, Matt Heusser and Fiona Charles, spoke of cutting through bureaucratic red tape and eliminating unneeded documentation and cumbersome processes. Several times, the suggestion came up of relying on better tester-developer communication rather than a formal defect tracking system. So which is it? Do we track defects, or don't we?
What we agree on: Fix bugs early and track production bugs
First, let's talk about what it appears everyone agrees on: finding and fixing bugs early in the development lifecycle is far more efficient and cheaper than finding them later. The later bugs are found, the more difficult they usually are to fix. This is not always true, but if the defect is rooted in the design or in a complex piece of code, then fixing it may require some redesign or may break a different piece of code, and the whole house of cards comes tumbling down. Sometimes the risk and time required to fix a difficult bug late in the cycle are simply not worth it to the business, and the team may decide to move forward with a less-than-perfect solution in order to get the product delivered, albeit at lower quality than if the problem had been addressed early.
Finding a bug later in the cycle is also more expensive, partially because of the overhead of tracking the bug and of any associated processes to fix, retest and regression test. But mostly it is because once that bug hits production, you are dealing not only with the cost of fixing the bug, but also with the costs your users incur and the hard-to-measure cost to your reputation.
Another point on which there is general consensus is the need to use some sort of defect tracking system once the code has hit production. Though there are differing opinions about what post-production defect data tells us and how much data we need to gather, the experts do appear to agree that it's necessary. So the debate here really pertains to tracking defects before the code hits production, while it's still being developed.
The argument for tracking defects
In Matt Heusser’s keynote at STPCon, he said, “Without data, you’re just a guy with an opinion.” Defect tracking allows us to gather data. It allows us to do some analysis on the bugs that are being found, look for trends, and add more tests to (or prompt the developers to refactor) those areas that seem to be riddled with issues.
Using defect tracking also allows us to better communicate the details of a bug. Bugs are often difficult to recreate, but in documenting them we can systematically list the steps that produced the bug, sometimes adding video, logs or other evidence of the issue. And though there is something to be said for face-to-face communication in building relationships, if you don't add documentation, you may lose some very important details. Documentation also serves as a way of communicating with the entire team, so that the whole team gets a good sense of what still needs to be fixed. With defects tracked, any developer who is free can work on the open bugs, which can lead to better cross-training and teaming.
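To make the trend-spotting idea concrete, here is a minimal, hypothetical sketch of the kind of analysis a defect tracker enables: tallying open bugs by component to see which areas seem riddled with issues. The record shape and component names are illustrative assumptions, not from any particular tool.

```python
# Hypothetical sketch: tally tracked defects by component to spot the
# areas that may need more tests or a refactor. The data shape below
# (defect id, component) is an assumed export format, not a real tool's.
from collections import Counter

defects = [
    (101, "checkout"),
    (102, "search"),
    (103, "checkout"),
    (104, "checkout"),
    (105, "reports"),
]

# Count defects per component and list the worst offenders first.
by_component = Counter(component for _, component in defects)
for component, count in by_component.most_common():
    print(f"{component}: {count} open defect(s)")
```

In this toy data set, "checkout" tops the list with three defects, which is exactly the sort of signal that might prompt extra testing or a refactoring conversation. Without a tracker, this data simply doesn't exist to be counted.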
But what about unit testing? Must developers track the bugs they are finding in their own code? Or what about before any code is even written? Should defects in requirements and design be recorded?
According to Capers Jones and Olivier Bonsignour, co-authors of the book The Economics of Software Quality, all defects should be measured by origin (requirements, design, code, user documents and bad fixes). Jones and Bonsignour also note the importance of measuring defect removal efficiency (DRE). In Quality metrics: Changes in the way we measure quality – Part three, I specifically ask why early defect tracking is important, noting that Agile teams don't always use it, but often use customer satisfaction as a measure of quality. Here was the answer I received from Jones/Bonsignour:
For more than 40 years, customer satisfaction has had a strong correlation with volumes of defects in applications when they are released to customers. Released defect levels are a product of defect potentials and defect removal efficiency.
The Agile community has not yet done a good job of measuring defect potentials, defect removal efficiency, delivered defects or customer satisfaction. The Agile groups will not achieve good customer satisfaction if defect removal efficiency is below 85%. It will be that low unless measurements are used.
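Defect removal efficiency is commonly defined as the share of all known defects that were removed before release. A small, hypothetical calculation shows how the 85% figure in the quote above comes about; the function name and numbers here are my own illustration, not Jones and Bonsignour's.

```python
# Illustrative sketch of the common definition of DRE:
# defects removed before release, as a percentage of all known defects
# (pre-release plus those later reported from production).

def defect_removal_efficiency(found_before_release, found_after_release):
    """Return DRE as a percentage of all known defects."""
    total = found_before_release + found_after_release
    if total == 0:
        return 100.0  # nothing found anywhere; nothing escaped
    return 100.0 * found_before_release / total

# Example: 170 defects found and fixed during development,
# 30 more reported by users after release.
dre = defect_removal_efficiency(170, 30)
print(f"DRE = {dre:.0f}%")
```

With 170 of 200 total defects caught before release, DRE lands at exactly 85%, the threshold Jones and Bonsignour cite. The catch, of course, is that you can only compute this number if pre-release defects were tracked in the first place.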
But doesn’t Agile development promise better quality?
Jones and Bonsignour claim that pre-production defect tracking is necessary in order to achieve high quality, and yet many Agile teams claim improvements in quality once they moved to Agile development. On what are they basing these claims? If they are not tracking defects, do they have the data to back them up?
In Software quality: When defect tracking is not necessary, we’ll explore the other side of this debate. Why do some Agile teams feel that early defect tracking is unnecessary and what data are they using to support improvements to quality?