
Five rules for using software quality metrics

Software quality metrics can either tell us what needs work or disguise the real root causes. If you want to play the testing metrics game, you have to play by the rules.

Software quality metrics can be very useful. Proper metrics help effective project managers monitor progress, create and maintain schedules, account for costs and ensure quality. However, in order to provide valuable information that will help track project goals, test metrics must be designed to measure effectively, efficiently and objectively. Furthermore, the consumers of those testing metrics must evaluate the information honestly, fairly and without bias or subjectivity. Otherwise, metrics become just another opportunity to mask the truth with statistics.

Therefore, if you are going to "play the metrics game," you need to know the rules. In fact, to get real benefits from the metrics game, project managers have to be willing to teach the rules to the rest of the organization. In this article, I'll discuss the five most important rules for developing and using software quality metrics. Remember, these rules apply to everyone involved in the metrics process, including project managers, project stakeholders and project teams, as well as the project management office, senior-level management and even the CIO.

Rule 1: Develop metrics based on what information is needed and who needs it

Every metric must provide important information to at least one stakeholder, and that stakeholder must use it. Don't look at industry standards and start blindly collecting the metrics that the rest of the industry is using.

To be effective and useful, metrics must provide information about the organization's specific project issues. Start by evaluating these issues. What are the problems that are causing projects to miss their goals? What information would provide early warning signals of potential setbacks? This analysis is the basis of choosing the right metrics.


Next, identify the stakeholders who need the information. This is the manager or team member who is able to take action based on it. Don't provide metrics for the sake of providing metrics. When a stakeholder requests a metric, always ask what action he or she will be taking based on that metric. If that stakeholder won't be taking any action, either find the stakeholder who will be acting on the metric or don't produce the metric.
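
One lightweight way to enforce this is to record, for every metric, the stakeholder who consumes it and the action it drives. The sketch below is a minimal illustration of that idea; the metric names, stakeholders and actions are hypothetical, not prescriptions.

```python
# A minimal sketch (hypothetical names) of a metric registry that records,
# for each metric, the stakeholder who acts on it and the action it drives.
# Metrics with no owner or no action are flagged, per Rule 1.
from dataclasses import dataclass

@dataclass
class MetricDefinition:
    name: str
    stakeholder: str   # who acts on the metric
    action: str        # what decision or action it drives

metrics = [
    MetricDefinition("open_sev1_defects", "test lead", "block the release until cleared"),
    MetricDefinition("test_cases_executed_per_week", "project manager", "adjust the test schedule"),
    MetricDefinition("lines_of_code", "", ""),  # nobody acts on this one
]

for m in metrics:
    if not m.stakeholder or not m.action:
        print(f"Drop or reassign '{m.name}': no stakeholder or action identified")
```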

Rule 2: Keep metrics simple

Every metric must be easy for the stakeholder to understand. It must also be straightforward for the project manager to collect and report.

When metrics development becomes too complicated or metrics are produced too frequently, the hours spent generating them reduce productivity rather than improve it. Overly frequent reporting can also obscure longer-term trends and make analysis more convoluted, reducing the effectiveness of the metric when emerging trends are missed.

Don't let the time spent in metrics collection and reporting outweigh their benefits. Automate the data collection and calculation, if possible.
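
As one illustration of automating collection, the sketch below counts executed test cases per week from a hypothetical CSV export of test results. The file name and column names are assumptions for illustration, not a real tool's format.

```python
# A minimal sketch of automating metric calculation, assuming a hypothetical
# CSV export of test results with 'executed_on' and 'status' columns.
# The goal is to keep collection cheap so reporting effort stays below the
# value of the metric (Rule 2).
import csv
from collections import Counter
from datetime import date

def weekly_execution_counts(path: str) -> Counter:
    """Count executed test cases per ISO week from a results export."""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            year, week, _ = date.fromisoformat(row["executed_on"]).isocalendar()
            counts[(year, week)] += 1
    return counts

# Example usage (file name is an assumption):
# print(weekly_execution_counts("test_results.csv"))
```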

Rule 3: Measure against an objective standard

Each metric must be tied to a relevant objective standard. This doesn't mean we need an international industry standard. It means we need a stable number against which to measure.

Testing metrics should provide information about risks to the project's schedule, cost and quality. A definition of what constitutes acceptable levels of risk allows project managers to make meaningful evaluations. Metrics are meaningless unless they are compared to a standard.

For example, assume the number of test cases executed this week is 50. With just that number, we have no idea how productive this week was. If we know the average number of test cases executed per week over the past 50 weeks was 10, then we know this week was very productive and we can congratulate the team. On the other hand, if the average for the past 50 weeks was 150, then the team is underperforming and may need encouragement or guidance.
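
The comparison above can be reduced to a few lines of code. The sketch below uses the average of prior weeks as the stable baseline; the numbers are the illustrative ones from the example.

```python
# A sketch of comparing this week's executed test cases against the average
# of prior weeks, which serves as the stable baseline Rule 3 calls for.
def compare_to_baseline(this_week: int, history: list[int]) -> str:
    baseline = sum(history) / len(history)
    if this_week >= baseline:
        return f"{this_week} executed vs. baseline {baseline:.1f}: on or above the standard"
    return f"{this_week} executed vs. baseline {baseline:.1f}: below the standard, investigate"

print(compare_to_baseline(50, [10] * 50))   # well above a baseline of 10 per week
print(compare_to_baseline(50, [150] * 50))  # well below a baseline of 150 per week
```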

Tying metrics to objective standards makes them more meaningful and easier to act on.

Rule 4: Standardize the components of the metrics

Everyone involved in developing, analyzing, reviewing and using testing metrics must understand the definitions of the components that make up the metrics.

For example, in order to measure defects by their severity, we need a standard definition of each severity level. These definitions need to be as clear and unambiguous as possible so that reviewers can avoid subjective bias in their assessments. A metric can be no more objective than the components that make it up.
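
As a sketch of what a standardized component might look like, the definitions below pin each severity level to a written description that every reviewer classifies against. The wording of the levels is illustrative, not an industry standard.

```python
# A sketch of standardizing one metric component: defect severity. The
# definitions here are illustrative placeholders; the point is that every
# reviewer classifies against the same written text.
from enum import Enum

class Severity(Enum):
    CRITICAL = "System unusable or data loss; no workaround exists"
    MAJOR = "Core function fails; workaround exists but is costly"
    MINOR = "Non-core function fails or behaves incorrectly"
    COSMETIC = "UI, wording or layout issue with no functional impact"

def defects_by_severity(defects: list[Severity]) -> dict[str, int]:
    """Count defects per severity level using the shared definitions."""
    return {s.name: defects.count(s) for s in Severity}

print(defects_by_severity([Severity.CRITICAL, Severity.MINOR, Severity.MINOR]))
```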

Avoid developing metrics based solely on raw counts, as these don't take into account importance or risk. For example, measuring test execution by the number of test cases executed fails to consider the criticality of each test case and may provide an inaccurate picture. A team may have executed 90% of the test cases but skipped the ten most essential ones.
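
The sketch below shows how a raw execution percentage can look healthy while the most critical cases remain untested, and how a criticality-weighted view exposes the gap. The data structure and field names are assumptions for illustration.

```python
# A sketch of weighting test execution by criticality (field names are
# assumptions). A raw percentage can look healthy while the most critical
# cases remain unexecuted; a risk-weighted view exposes that gap.
test_cases = [
    {"id": f"TC-{i}", "critical": i < 10, "executed": i >= 10}  # 10 critical cases skipped
    for i in range(100)
]

executed = [t for t in test_cases if t["executed"]]
raw_pct = 100 * len(executed) / len(test_cases)

critical = [t for t in test_cases if t["critical"]]
critical_pct = 100 * sum(t["executed"] for t in critical) / len(critical)

print(f"Raw execution: {raw_pct:.0f}%")                  # 90% -- looks fine
print(f"Critical-case execution: {critical_pct:.0f}%")   # 0% -- the real risk
```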

Rule 5: Metrics are a tool, not a solution

Metrics must not be seen as an end unto themselves.

Good metrics provide relevant information on a timely basis, giving stakeholders insight into issues and risks. However, the metrics themselves don't provide solutions; they are one way to recognize problems as they begin. It is up to the people who receive the information to develop creative strategies to mitigate those problems.

Next Steps

Compare these rules with Vasudeva Naidu's rules for testing metrics

Leverage the right metrics to make your SLAs work harder

Turn social media metrics into better business decisions

Keep a closer eye on your cloud apps with cloud monitoring

This was last published in September 2014



Join the conversation


What rules do you live by when it comes to defining software testing metrics?
Perhaps the following is something to consider when creating metrics: I have noticed efforts made to combine the metrics meant for various stakeholders into one dynamic report that they can all review. I like the idea of trying to keep everything together, but it seems that stakeholders lose sight of which metrics and report values are meant for them, get confused by the other data meant for another stakeholder, and ultimately need their own report.
Does anyone have experience creating a successful singular report with multiple metrics to measure multiple objectives to meet the needs of more than one stakeholder?
My rule is to use them as sparingly as possible. Metrics are really difficult. Most have problems of reliability and validity, especially when we try to measure quality.

I think lean does a pretty good job of providing some ways for us to measure flow. First, rather than encouraging measuring all the things, lean encourages management to be present and directly observe the work.

In lieu of that, there are groups of measures such as touch time, churn, cycle time, lead time, etcetera. These can be used for short periods of time to collect information and discover areas to improve. 

If I had to pick one rule, it would be to use measurement for inquiry rather than control. Use them to shine a light on areas of business that you are unsure about.
I personally find metrics to be a double-edged sword. We run the risk of measuring everything and improving nothing.

If I have to focus on any particular measurement, I'd focus on the time it generally takes for stories to complete, along with how many issues were discovered in production and how long it takes to fix them.
@Shoshannah, I think one report that serves all stakeholders is a difficult thing to achieve. As you say, it can just lead to confusion and potential drowning out of the data most relevant to each audience. With any reports, I do try to have one overall report that's fairly simple (high-level metrics) and then more individualized versions that speak to specific stakeholders.
@Michael - I think a good rule is only use metrics that provide value to you. If you're asked to provide a measurement, try to ask why - what are you really trying to understand?
