In Agile environments, what are typically the key performance metrics for testing? Are defects tracked?
In Agile projects, we want our progress and our problems to be highly visible so that we can stay on track. We want metrics with a good return on investment: collecting and reporting the metrics shouldn't cost more than the value they provide. It's important to start simple, collecting just enough feedback so you can respond to unexpected events and adapt your process as needed.
The number of tests running and passing (and they all need to pass) in the continuous integration builds is a key metric. The trend is more important than the raw numbers. If we're driving development with technology-facing and business-facing tests, those numbers should grow as we deliver more code. If tests fail, that should be highly visible. My team tracks the days when a test suite is failing, and we have a rule that our build should never be "red" two days in a row. This helps us stay focused on our goal of having a stable, releasable build every day.
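The "never red two days in a row" rule is easy to automate against a build history. Here's a minimal sketch; the `build_history` data and the `red_streak_violations` helper are hypothetical stand-ins for whatever your CI server actually reports:

```python
from datetime import date

# Hypothetical build history: date -> True if the CI build was "green".
# In practice this would come from your CI server's API or logs.
build_history = {
    date(2024, 6, 3): True,
    date(2024, 6, 4): False,
    date(2024, 6, 5): False,  # second red day in a row: violates the rule
    date(2024, 6, 6): True,
}

def red_streak_violations(history, max_red_days=1):
    """Return the dates on which the build had been red longer than max_red_days."""
    violations = []
    streak = 0
    for day in sorted(history):
        if history[day]:
            streak = 0  # a green build resets the streak
        else:
            streak += 1
            if streak > max_red_days:
                violations.append(day)
    return violations

print(red_streak_violations(build_history))  # [datetime.date(2024, 6, 5)]
```

A check like this can run after the nightly build and flag the violation somewhere highly visible, which is the point: the metric should prompt the team to act, not just accumulate.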
Code coverage is a popular metric, but it doesn't tell you everything: if you completely missed developing a feature, code coverage reports won't reveal it. My team set a goal of increasing the code coverage metric by two or three percent until we reached a number that was good enough for us. Again, we're mainly concerned with trends. If coverage went down, was it because code was written without any tests, or because code with a lot of tests was removed?
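Watching the trend rather than the raw number can be as simple as diffing consecutive coverage figures and flagging any drop for discussion. A minimal sketch, with made-up coverage percentages:

```python
# Hypothetical coverage percentages from a series of nightly builds.
coverage_by_build = [71.2, 71.5, 71.4, 73.6, 73.8]

def coverage_deltas(series):
    """Change in coverage between each pair of consecutive builds."""
    return [round(b - a, 2) for a, b in zip(series, series[1:])]

deltas = coverage_deltas(coverage_by_build)
print(deltas)  # [0.3, -0.1, 2.2, 0.2]

# Any negative delta is a prompt for the question in the text: was code
# written without tests, or was well-tested code removed?
drops = [d for d in deltas if d < 0]
```

The drop itself isn't good or bad; it's a trigger for a conversation, which keeps the metric cheap to collect and still useful.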
I also like to see defect metrics used in conjunction with goals. Early on, my team set a goal that we'd have no more than six high-priority bugs in production in our "new" code (we develop almost all our new user stories in a new architecture, "strangling" the legacy code) in any six-month period. We had standard defect reports, such as unweighted defects by priority and unweighted inflow and outflow over a time period, but found that nobody paid any attention to them.
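A goal like "no more than six high-priority production bugs in new code per six months" reduces to a count over a trailing window. Here's a hedged sketch; the defect log format, the `high_bugs_in_window` helper, and the 182-day window are illustrative assumptions, not the team's actual tooling:

```python
from datetime import date, timedelta

# Hypothetical production defect log for the "new" code: (date found, priority).
defects = [
    (date(2024, 1, 15), "high"),
    (date(2024, 2, 2), "medium"),
    (date(2024, 3, 20), "high"),
    (date(2024, 5, 9), "high"),
]

def high_bugs_in_window(log, end, days=182, limit=6):
    """Count high-priority defects in the trailing window; compare to the goal."""
    start = end - timedelta(days=days)
    count = sum(1 for found, priority in log
                if start <= found <= end and priority == "high")
    return count, count <= limit

count, within_goal = high_bugs_in_window(defects, date(2024, 6, 30))
print(count, within_goal)  # 3 True
```

Tying the count to an explicit limit gives the team something to react to, unlike the standing reports that nobody read.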
It's important that metrics be used for good (helping guide the team) and not for evil (punishing the team for "bad" reports). Metrics used the wrong way can be demotivating. Plan for the metrics you think you need to guide your project, and review them often to make sure you're getting the right value from them.
For more on quality metrics, see Quality metrics: A guide to measuring software quality.