
What key test metrics should be tracked for Agile teams?

Test metrics are used to help determine whether a project is on track. Agile expert Lisa Crispin gives her thoughts on metrics such as number of tests run and passed, code coverage and defect metrics. She encourages teams to review metrics often to ensure they're providing the value needed.

In Agile environments, what are typically the key performance metrics for testing? Are defects tracked?

In Agile projects, we want our progress and our problems to be highly visible so that we can stay on track. We want metrics with a good return on investment: collecting and reporting the metrics shouldn't cost more than the value they provide. It's important to start simple, collecting just enough feedback so you can respond to unexpected events and change your process as needed.

The number of tests running and passing (and they all need to pass) in the continuous integration builds is a key metric. The trend is more important than the raw numbers. If we're driving development with technology-facing and business-facing tests, those numbers should grow as we deliver more code. If tests do fail, that should be highly visible. My team tracks the days when a test suite is failing. We have a rule for ourselves that our build process should never be "red" two days in a row. This helps us stay focused on our goal of having a stable, releasable build every day.
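The "never red two days in a row" rule is easy to automate if your CI tool records a daily build status. Here's a minimal sketch in Python, assuming a hypothetical `build_history` mapping of dates to "green"/"red" statuses (the data source and names are illustrative, not from any particular CI product):

```python
from datetime import date

# Hypothetical build history as recorded by a CI server: date -> status
build_history = {
    date(2024, 5, 1): "green",
    date(2024, 5, 2): "red",
    date(2024, 5, 3): "red",   # second consecutive red day: rule violated
}

def consecutive_red_days(history):
    """Count how many of the most recent daily builds have been red in a row."""
    streak = 0
    for day in sorted(history, reverse=True):  # newest first
        if history[day] == "red":
            streak += 1
        else:
            break
    return streak

streak = consecutive_red_days(build_history)
if streak >= 2:
    print(f"Build has been red {streak} days in a row -- stop and fix it.")
```

A check like this can run as a nightly job and alert the team, making the rule self-enforcing rather than something someone has to remember to watch.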

Code coverage is a popular metric, but it doesn't tell you everything. If you completely missed developing a feature, code coverage reports won't tell you. My team set a goal of increasing the code coverage metric by two or three percent until we achieved a number that was good enough for us. Again, we're mainly concerned with trends. If coverage went down, was it because code was written without any tests, or because some code with a lot of tests was removed?
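Watching the trend rather than the raw number can also be automated. This sketch computes the coverage delta between successive builds and flags drops for investigation; the build names and percentages are hypothetical, and in practice the numbers would come from your coverage tool's reports:

```python
def coverage_trend(history):
    """Return (build, delta) pairs so coverage drops stand out from raw totals."""
    return [(b, cur - prev)
            for (_, prev), (b, cur) in zip(history, history[1:])]

# Hypothetical coverage percentages from successive CI builds
history = [("build-101", 78.4), ("build-102", 80.1), ("build-103", 79.0)]

for build, delta in coverage_trend(history):
    if delta < 0:
        print(f"{build}: coverage fell {-delta:.1f} points -- "
              "new untested code, or well-tested code removed?")
```

The point of the printed question mirrors the one above: a drop is a prompt for a conversation, not automatically a problem.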

I also like to see defect metrics used in conjunction with goals. Early on, my team set a goal for ourselves: no more than six high-priority bugs in production in any six-month period in our "new" code (we develop almost all of our new user stories in a new architecture, "strangling" the legacy code). We had standard defect reports, such as unweighted defects by priority and unweighted inflow and outflow over a time period, but found that nobody paid any attention to them.
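For teams that do want those reports, the underlying arithmetic is simple. This sketch, using made-up defect records (the tuple layout and dates are illustrative assumptions, not a real tracker's schema), computes unweighted inflow and outflow for a period and checks a goal like the one above:

```python
from datetime import date

# Hypothetical defect records: (priority, date opened, date closed or None)
defects = [
    ("high",   date(2024, 1, 10), date(2024, 1, 20)),
    ("high",   date(2024, 2, 3),  None),            # still open
    ("medium", date(2024, 2, 15), date(2024, 3, 1)),
]

def inflow_outflow(defects, start, end):
    """Unweighted counts of defects opened vs. closed within [start, end]."""
    inflow = sum(1 for _, opened, _ in defects if start <= opened <= end)
    outflow = sum(1 for _, _, closed in defects
                  if closed is not None and start <= closed <= end)
    return inflow, outflow

start, end = date(2024, 1, 1), date(2024, 6, 30)
inflow, outflow = inflow_outflow(defects, start, end)
high_count = sum(1 for prio, opened, _ in defects
                 if prio == "high" and start <= opened <= end)
print(f"inflow={inflow} outflow={outflow} "
      f"high-priority={high_count} (goal: no more than 6 per six months)")
```

The mechanics are rarely the hard part; as the experience above shows, the harder question is whether anyone looks at the numbers and acts on them.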

It's important that metrics be used for good, helping guide the team, and not for evil, punishing the team for "bad" metrics reports. Metrics used the wrong way can be demotivating. Plan for the metrics you think you need to guide your project, and review them often to make sure you're getting the right value from them.

For more on quality metrics, see Quality metrics: A guide to measuring software quality.
