Five rules for using software quality metrics

Software quality metrics can either tell us what needs work or disguise the real root causes. If you want to play the testing metrics game, you have to play by the rules.

Software quality metrics can be very useful. Proper metrics help effective project managers monitor progress, create and maintain schedules, account for costs and ensure quality. However, in order to provide valuable information that will help track project goals, test metrics must be designed to measure effectively, efficiently and objectively. Furthermore, the consumers of those testing metrics must evaluate the information honestly, fairly and without bias or subjectivity. Otherwise, metrics become just another opportunity to mask the truth with statistics.

Therefore, if you are going to "play the metrics game," you need to know the rules. In fact, to get real benefits from the metrics game, project managers have to be willing to teach the rules to the rest of the organization. In this article, I'll discuss the five most important rules for developing and using software quality metrics. Remember, these rules apply to everyone involved in the metrics process, including project managers, project stakeholders and project teams, as well as the project management office, senior-level management and even the CIO.

Rule 1: Develop metrics based on what information is needed and who needs it

Every metric must provide important information to at least one stakeholder, and that stakeholder must use it. Don't look at industry standards and start blindly collecting the metrics that the rest of the industry is using.

To be effective and useful, metrics must provide information about the organization's specific project issues. Start by evaluating these issues. What are the problems that are causing projects to miss their goals? What information would provide early warning signals of potential setbacks? This analysis is the basis of choosing the right metrics.

Next, identify the stakeholders who need the information. This is the manager or team member who is able to take action based on it. Don't provide metrics for the sake of providing metrics. When a stakeholder requests a metric, always ask what action he or she will be taking based on that metric. If that stakeholder won't be taking any action, either find the stakeholder who will be acting on the metric or don't produce the metric.

Rule 2: Keep metrics simple

Every metric must be easy for the stakeholder to understand. It must also be straightforward for the project manager to collect and report.

When metrics development becomes too complicated and metrics are produced too frequently, the hours spent generating them decrease productivity rather than improve it. Overly frequent reporting can also obscure longer-term trends and make analysis more convoluted, reducing the effectiveness of the metric when emerging trends are missed.

Don't let the time spent in metrics collection and reporting outweigh their benefits. Automate the data collection and calculation, if possible.
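As a minimal sketch of what automated collection might look like, the snippet below tallies pass/fail counts from a CSV export of test results. The column layout ("test_id,status") and the function name are assumptions for illustration, not a standard format or tool:

```python
import csv
import io

def tally_results(csv_text):
    """Count passed and failed test cases from a CSV export.

    Assumes a hypothetical layout with a "status" column; real test
    management tools export their own formats.
    """
    counts = {"passed": 0, "failed": 0}
    for row in csv.DictReader(io.StringIO(csv_text)):
        status = row["status"].strip().lower()
        if status in counts:
            counts[status] += 1
    return counts

sample = "test_id,status\nTC-1,passed\nTC-2,failed\nTC-3,passed\n"
print(tally_results(sample))  # {'passed': 2, 'failed': 1}
```

Even a small script like this keeps the collection effort constant as the project grows, so the cost of reporting stays below the value of the metric.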

Rule 3: Measure against an objective standard

Each metric must be tied to a relevant objective standard. This doesn't mean we need an international industry standard. It means we need a stable number against which to measure.

Testing metrics should provide information about risks to the project's schedule, cost and quality. A definition of what constitutes acceptable levels of risk allows project managers to make meaningful evaluations. Metrics are meaningless unless they are compared to a standard.

For example, assume the number of test cases executed this week is 50. With just that number, we have no idea how productive this week was. If we know the average number of test cases executed per week for the past 50 weeks was 10, then we know this week was very productive and we can congratulate the team. On the other hand, if the average for the past 50 weeks was 150, then the team is underperforming and may need encouragement or guidance.
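The comparison above can be sketched in a few lines. The function name and thresholds are illustrative assumptions; the point is simply that the same raw number reads differently against different baselines:

```python
def assess_weekly_execution(executed_this_week, weekly_history):
    """Compare this week's executed test cases to the historical average."""
    baseline = sum(weekly_history) / len(weekly_history)
    if executed_this_week > baseline:
        return "above baseline"
    if executed_this_week < baseline:
        return "below baseline"
    return "at baseline"

history_low = [10] * 50    # past 50 weeks averaged 10 test cases per week
history_high = [150] * 50  # past 50 weeks averaged 150 test cases per week

print(assess_weekly_execution(50, history_low))   # above baseline
print(assess_weekly_execution(50, history_high))  # below baseline
```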

Tying metrics to objective standards makes them more meaningful and easier to act on.

Rule 4: Standardize the components of the metrics

Everyone involved in developing, analyzing, reviewing and testing metrics must understand the definitions of the components that make up the metrics.

For example, in order to measure defects by their severity, we need a standard definition of each severity level. These definitions need to be as clear and unambiguous as possible. Clear definitions make it easier for reviewers to avoid subjective biases in their assessments. A metric is only as objective as its components.

Avoid developing metrics that are solely based on numbers, as this doesn't take into account the level of importance or risk. For example, measuring test execution based on the number of test cases executed fails to consider the criticality of each test case and may provide an inaccurate picture. A team may have executed 90% of the test cases, but may not have tested the ten most essential test cases.
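One way to account for criticality is to weight each test case instead of counting them equally. The sketch below is a hypothetical illustration (the weights and names are assumptions, not a standard scheme) showing how a raw 90% execution rate can hide an untested high-risk case:

```python
def weighted_coverage(test_cases):
    """Compute execution coverage weighted by criticality.

    test_cases: list of (executed, weight) pairs, where weight reflects
    the risk or importance of the test case (illustrative values).
    """
    total = sum(weight for _, weight in test_cases)
    done = sum(weight for executed, weight in test_cases if executed)
    return done / total if total else 0.0

# Nine low-risk cases executed, one critical case (weight 10) skipped.
cases = [(True, 1.0)] * 9 + [(False, 10.0)]

raw = sum(1 for executed, _ in cases if executed) / len(cases)
print(round(raw, 2))                       # 0.9  -- looks nearly done
print(round(weighted_coverage(cases), 2))  # 0.47 -- flags the risk
```

The raw count says the team is 90% done; the weighted figure shows that more than half of the risk remains untested.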

Rule 5: Metrics are a tool, not a solution

Metrics must not be seen as an end unto themselves.

Good metrics provide relevant, timely information that gives stakeholders insight into issues and risks. However, metrics in themselves don't provide solutions; they are one way to recognize problems as they begin. It is up to the stakeholders who receive that information to develop creative strategies to mitigate those problems.

Next Steps

Compare these rules with Vasudeva Naidu's rules for testing metrics

Leverage the right metrics to make your SLAs work harder

Turn social media metrics into better business decisions

Keep a closer eye on your cloud apps with cloud monitoring
