Do you think tracking preproduction defects is important? Why or why not? If not, what's the right thing to be looking at?
Tracking defects that occur in preproduction grants project managers some extra insight, at the cost of creating extra overhead. Some software quality professionals will think, "Of course we have to track bugs!" Others will think, "If we can just have a conversation and fix it, there's no need for the overhead." I think the best path is to find the right middle ground between the two extremes.
I see two sub-questions that hold the key to answering the main question:
- Is the insight worth the overhead?
- If you aren't tracking defects now, would implementing this be at the top of your list of improvements?
Why track preproduction defects?
Here are a handful of reasons management might want to track defects:
- Compare the performance of developers and testers;
- Gain insight into what is happening;
- Remember details and deferred bugs;
- Adapt the test strategy to risks in the code; and
- Eliminate waste by finding the causes of defects and preventing them.
The first claim (which I am likely to reject) is comparing performance between individuals. The act of measuring bugs in this way will change behavior for the worse. Testers who are measured by bug count will seek out the easy bugs to raise their count. Programmers who are punished for bugs will waste time arguing about what counts as a bug versus some other kind of problem.
Tracking defects purely to provide insight also seems suspect: it enables management by spreadsheet. I would prefer that managers get involved in the work directly.
If your team doesn't fix all the preproduction bugs, and customers care about them, then tracking bugs to remember those details might make sense. My preference is only to file a bug report if the issue is not fixed, but is deferred and still worth documenting.
The fourth idea is to change the test strategy to find the defects that are actually emerging. To do this, I would look at both preproduction and production defects along with our test approach to see what defects we are missing and what tests we could run to find them.
To prevent problems earlier and reduce wasted effort, we need to know how much waste is occurring upstream. Tracking preproduction defects provides a log that is useful in these post-project investigations.
Is this the right change right now?
One easy measure is the percentage of your time spent on rework -- reproducing bugs, explaining them, fixing and retesting -- prior to production. In an article on exploratory testing, Jon Bach called these "TBS Metrics," for time spent on testing, bugs and setup. Testing is essential and should be the largest part of time spent. Bugs and setup are essentially optional, and they create drag on the project.
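As a rough illustration, here is a minimal sketch of that measure, expressed as a time-log breakdown. The categories follow the TBS idea above; the hour figures are hypothetical, not from any real project:

```python
# Minimal sketch of a TBS (testing, bugs, setup) breakdown.
# The hours below are made-up, illustrative numbers.
time_log = [
    ("testing", 22.0),   # actual test design and execution
    ("bugs", 9.5),       # reproducing, explaining, fixing, retesting
    ("setup", 4.5),      # environments, test data, tooling
]

total = sum(hours for _, hours in time_log)
for category, hours in time_log:
    print(f"{category}: {hours / total:.0%} of time")
# If "bugs" plus "setup" dominate, that rework is a candidate for prevention.
```

Even a crude weekly tally like this is enough to tell whether rework is a real drag or a rounding error.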
If the team spends too much time on bugs and setup, then consider eliminating unnecessary work to speed up the process. Track the bugs, look for patterns and work to prevent the most expensive defects. If you aren't spending much time on the bugs, or find that they are all very different, then there won't be much chance to reduce waste this way.
If your goal is to eliminate common problems, you might succeed in a few months and then stop tracking defects until the preventable waste becomes noticeable.
The middle way
Some teams separate new-feature work from final-release testing (which some might call "regression testing"). These teams try to fix all the bugs on features first and defer any defects found during release testing in order to coordinate the work. If there are a lot of release-testing defects, tracking them is probably a real need, but feature-level bugs might not have to be tracked.
If you are feeling some pain from preproduction defects, or it seems like you are fixing the same category of bug over and over when it could be prevented, try tracking defects for a week or two. You could use a wiki or other lightweight tracker; see if you feel faster or slower. Likewise, if you are already tracking defects and don't see value in it, try not tracking them for a week or two.
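A lightweight log doesn't need tooling to reveal repetition; even a flat text file or wiki page works. Here is a minimal sketch of spotting the most common defect category from such a log (the dates, categories and entries are invented for illustration):

```python
from collections import Counter

# Hypothetical one-line-per-bug log, as you might keep in a wiki
# page during a one- or two-week tracking experiment.
log = """
2024-03-04 | validation | empty email accepted on signup
2024-03-05 | validation | phone field accepts letters
2024-03-05 | config | staging pointed at prod database
2024-03-06 | validation | zip code not checked
"""

# Tally the middle field (the category) of each log line.
categories = Counter(
    line.split("|")[1].strip()
    for line in log.strip().splitlines()
)
# The most common category is the first candidate for prevention work.
print(categories.most_common(1))
```

If one category keeps topping the tally week after week, that is the repeated, preventable bug this section is talking about.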
If you feel slower, drop the change or consider a different one. If you feel faster, refine the change and try to go faster still. View these changes as an experiment, and let us know how it goes!
Related Q&A from Matt Heusser