
Agile teams and software defect tracking: Is a DTS necessary?

In this expert response, Lisa Crispin discusses the pros and cons of defect tracking systems (DTS) and offers advice on how teams can best manage defects while pursuing a zero-defect development approach.

In Scrum, are all defects tracked? I’ve heard some people say that if both the developer and tester agree that there’s a bug, the important thing is just to fix it, not to track it. Do you agree with this? Isn’t it necessary to track in order to understand metrics and trends?

To track or not to track is a perennial debate on Agile teams. From a “lean development” perspective, a defect tracking system (DTS) with its bug database is a queue of work that isn’t done, and queues are waste. Let’s face it, most teams have dozens, hundreds, or even thousands of defects logged that will never be fixed. In addition, a DTS is not a good way to communicate. As Ron Jeffries has asked, how many teams sit around the DTS having a conversation? Logging bugs in a DTS encourages siloed activities between coding and testing.

However, if defects are clearly documented along with what was learned in the process of fixing them, a DTS can be a valuable knowledge base. It can help a team identify fragile areas in the code and steer them towards improving their process.

I encourage teams to first decide on their goals with respect to defects. My team started with a commitment to eventually having zero defects released into production. Now, how should we handle defects with an eye to achieving our goals? Try different experiments. If you’re working on a brand new, greenfield project, try doing without an online DTS and see what happens.

In pursuit of a zero-defect goal, my team wanted to shorten the feedback loop between checking in the code, finding the bug and fixing it. The shorter that loop, the easier and faster it is to fix the bugs, and the more we can learn about how to prevent them from happening in the first place. There are lots of good ways to shorten this loop. A developer can ask a tester to come over and test new code before she even checks it in. Often just the process of explaining the code brings the realization that something is wrong.
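For teams that also want a mechanical safety net, running the test suite before code ever reaches the shared repository is one way to keep that loop short. The sketch below is a minimal, illustrative git pre-push hook, assuming a Python project tested with pytest; the commands and file locations are assumptions, not a prescription.

#!/usr/bin/env python3
# Illustrative git pre-push hook: run the tests before code leaves the machine,
# so a defect is found minutes after it was written, not days later.
# Assumes a Python project tested with pytest; copy to .git/hooks/pre-push and
# make it executable. Substitute whatever build/test command your team uses.
import subprocess
import sys

def main() -> int:
    # A non-zero exit code from the hook blocks the push.
    result = subprocess.run(["pytest", "-q", "--maxfail=1"])
    if result.returncode != 0:
        print("Tests failed -- fix the defect before pushing.", file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())

Automation like this complements, rather than replaces, the conversation between developer and tester; its only job is to surface a problem while the code is still fresh in someone’s mind.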

When I find an issue in code that was recently checked in for a user story we’re currently developing, I first go talk to the developer who checked in the code. Often he can fix it on the spot, and I don’t have to worry about tracking that bug at all. Testing is an integral part of coding, so a problem found during coding isn’t a bug, it’s simply a development activity. If the bug can’t be fixed right away, I write it on an index card so it won’t be forgotten, and put it on our task board to make it visible. This process works well for our team, but each team needs to find what works for them.

We thought that when we achieved zero defects in production, or close to it (as we are now), we wouldn’t need a DTS. However, we found our DTS useful for other purposes. When users need help with a mistake they’ve made (for example, a transaction that must be reversed or data in the database that must be manually corrected), they can file a “production support request” in the DTS. By analyzing these issues over time, we’re able to identify areas of the application that are error-prone, and add automation or validation to help users avoid mistakes. That in turn cuts down on the number of these production support requests. Though bugs in production are rare, we do log them in the DTS. The developers feel this is the best place to store what they learned while researching and fixing the bug, for future reference.
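If your DTS can export those production support requests, even a small script can make the error-prone areas visible. Here is a minimal sketch, assuming a CSV export with an “area” column; the file name and column name are placeholders for whatever your own tool provides.

# Minimal sketch: find error-prone areas from a DTS export.
# Assumes the DTS can export production support requests as a CSV file
# with an "area" column (module, screen, feature); names are illustrative.
import csv
from collections import Counter

def error_prone_areas(export_path: str, top_n: int = 5):
    """Count requests per application area and return the worst offenders."""
    counts = Counter()
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["area"]] += 1
    return counts.most_common(top_n)

if __name__ == "__main__":
    # The areas with the most requests are candidates for extra validation
    # or automation, which in turn shrinks the request queue.
    for area, count in error_prone_areas("production_support_requests.csv"):
        print(f"{area}: {count} requests")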

Metrics derived from a DTS can be useful, but they can also be dangerous. Arbitrary metrics can be misleading, or worse, used to criticize team or individual performance, which is both risky and demotivating. Always start with a goal, then identify metrics that will help your team measure progress toward that goal. Never use metrics to point fingers or assign blame.
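As one example of a goal-driven metric, a team with a zero-defect goal might track just one number: how many defects escape to production each iteration, and whether that count trends toward zero. The sketch below uses made-up numbers purely for illustration; in practice they would come from your DTS or task board.

# Minimal sketch of a goal-driven metric: defects that escaped to production,
# per iteration. The numbers below are invented for illustration only.
escaped = [("2011-05", 4), ("2011-06", 2), ("2011-07", 1), ("2011-08", 0)]

def print_trend(history):
    """Show whether the escaped-defect count is trending toward the goal of zero."""
    for iteration, count in history:
        bar = "#" * count if count else "zero escapes"
        print(f"{iteration}: {bar}")

print_trend(escaped)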

Use retrospectives to talk about how your approach to tracking defects is working. More importantly, think of experiments your team can try to prevent defects from happening in the first place.

For a comprehensive resource on quality metrics, see Quality metrics: A guide to measuring software quality.

This was last published in June 2011
