Most teams running software projects in a traditional waterfall manner log defects in a defect tracking system (DTS). There are lots of advantages to using a DTS. Logging bugs ensures we won’t forget them. We can track the defects via the DTS, and analyze past issues to find ways to improve our process. However, there are other ways to deal with defects. My STAREAST talk is called “Limbo Lower Now: An Agile Approach to Dealing with Defects.” In this tip, I discuss some of the highlights.
Use an automated test to reproduce the bug and act as documentation
Some Agile teams, especially those that embrace lean development, take a different approach to defects. Anytime a bug is identified, an automated test is written to reproduce it, the bug is fixed, and both the code fix and the test are checked in. The test documents the bug, and will alert the team in case that same problem occurs again. This enables teams to “fix and forget” bugs.
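A "fix and forget" regression test might look like the following minimal sketch, written here as Python with pytest-style assertions. The function, the bug, and the names are all hypothetical, invented for illustration; the point is that the test reproduces the reported failure, documents the expected behavior, and gets checked in alongside the fix.

```python
# Hypothetical bug: users reported that an order total went negative
# when a discount exceeded the subtotal. The fix clamps the total at
# zero; the tests below reproduce the report and pin down the fix.

def apply_discount(subtotal: float, discount: float) -> float:
    """Return the order total after discount, never below zero (the fix)."""
    return max(subtotal - discount, 0.0)

def test_discount_larger_than_subtotal_does_not_go_negative():
    # Before the fix, this case produced -5.0.
    assert apply_discount(10.0, 15.0) == 0.0

def test_normal_discount_still_works():
    # Guard against the fix breaking the ordinary case.
    assert apply_discount(10.0, 3.0) == 7.0
```

Once this test runs in the team's continuous integration suite, the bug report itself has little left to say: any regression will fail the build immediately.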
From a lean point of view, a DTS is a queue of rework, which translates into waste. Each bug signals a gap in regression test coverage, and every unfixed bug adds to the team’s load of technical debt, making each change riskier. Our approach to managing defects is important. Even more important is preventing those defects from happening in the first place.
DTS pros and cons
Not tracking defects in a DTS seems counterintuitive to many experienced testers. But if you fix each bug as soon as it’s found, you don’t have to track it, and the automated test documents the bug. Still, you may feel you are losing some valuable knowledge that might be kept in a DTS, and this is a valid concern. A DTS can be an important resource to analyze trouble spots in the code and help the team find ways to improve.
But a DTS isn’t a good way to communicate. As Ron Jeffries once told me, no team ever sits around their DTS to have a conversation. Start with what your team is trying to accomplish, and log bugs in a DTS where it is appropriate.
I’m betting that most teams using a DTS have bugs in there that will never get fixed. The business always has to juggle priorities, and often prefers new features over fixing minor issues. Or there might be a plan to rewrite part of the system, so the bugs for that part don’t need to be addressed. If you use a DTS, it’s important to keep it useful and relevant. Face reality, and don’t log bugs that will never be fixed.
There are many advantages to using a DTS, of course. It tracks defects so they won’t be forgotten, and provides a workflow as each bug is prioritized, researched, fixed, and verified. If bug reports and fix comments are well-written, accompanied by examples and screenshots, the DTS can be used to research new problems or do root cause analysis on past ones. For distributed teams, a DTS may be the only way to sensibly manage defects. A DTS can provide metrics and traceability. Some companies even let customers use the DTS, which provides them visibility and information.
Still, a DTS is overhead. I’ve known a lot of developers who hated to use them. They can get in the way of direct communication -- it’s a lot easier to understand an issue when the person who found it sits down and explains it to you, instead of writing it up in the DTS. And will a DTS really help us towards a goal of reducing defects released to production?
Alternative ways of managing defects
If your aim is to retain information about past problems so you don’t lose the “lessons learned,” consider the best way to manage a corporate knowledgebase. Some information about problems may be better kept in a wiki, where it’s easily accessible and updateable by everyone in the company.
You don’t need a DTS in order to prioritize defects. Our team tracks issues found during development on cards, and we use color-coding to show the importance. A glance at our task board gives immediate information as to how many bugs have been found in a particular user story. If a story has lots of red and yellow bug cards, we take time to discuss what’s going wrong and whether there’s something we need to change to avoid the same problem with other stories. We consider bugs found during development to be a part of development -- it’s a good thing to find a bug before you release! So we don’t find it useful to keep metrics on those.
What about regression test failures? Our team decided to make a rule that any test that fails in our continuous integration regression suites is the team’s top priority (unless there is a production showstopper at the same time). It’s all hands on deck to make sure the regression failure is fixed immediately. This short feedback loop helps us be more productive and “go faster.” The information about these failures is retained in our continuous build system, so we don’t need to track them elsewhere.
Turning bugs into user stories
I like to look at defects from several different perspectives. Antony Marcano refers to bugs as the “hidden backlog.” They represent misbehaving or missing features. If we turn these into user stories, we know that fixing them will deliver value to users.
When my own team transitioned to Agile eight years ago, we used a commercial DTS, but set a goal to eventually not need it: zero defects in new code released to production. After a few years, bugs in production did become rare, but we were still using the DTS. I surveyed other Agile practitioners to find out how they deal with defects, and found many alternative approaches. For example, some teams turn every defect into a story, or simply write them on index cards and post them on the task board. Many teams use the “fix and forget” approach. Some still log every bug in a DTS.
I’ve given you some examples of how my team manages defects. We’ve evolved these practices over several years. Our original goal was to have zero defects in production and not need a DTS, but we decided to keep information in the DTS about bugs that were difficult to fix. We also track “production support requests,” which aren’t bugs, but are requests from the business and operations users to manually update data or work around the system for a special case. Think about the reasons you want to track defects, and experiment with different ways to address different types of defects.
In my conference session, I explore a range of ways to deal with defects and discuss how teams can and should experiment to find the approach that works best for their situation. Most importantly, the focus is on how to prevent defects from happening in the first place. Each participant can learn a concrete action to try with their own team.
For a complete resource on measuring quality, see Quality metrics: A guide to measuring software quality.