Software quality metrics paint partial project picture

QA consultant Gerie Owen discusses the use of two post-production metrics: defect removal efficiency and defect detection percentage.

For organizations that want an objective measurement of how well teams are doing relative to each other, what are the most important software quality metrics to track?

There are two underlying assumptions in this question. One is that software quality metrics are objective in and of themselves, and the second is that metrics are an objective way of measuring a team's effectiveness. Neither of these assumptions is entirely correct.

To answer your question, let's consider two of the most popular post-release software quality metrics: defect removal efficiency and defect detection percentage.

Defect removal efficiency measures the development team's effectiveness at removing defects prior to a release. It is calculated by dividing the number of defects removed before the release by the total number of defects found both before and after the release. Defect detection percentage measures the test team's effectiveness at finding defects. It is determined by dividing the number of defects found prior to the release by the total number of defects found before and after the release.
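
As a rough illustration, here is a minimal Python sketch of the two calculations. The variable names and defect counts are hypothetical, invented only to show how the ratios differ when not every defect found before release actually gets fixed.

# Hypothetical counts for a single release; the numbers are illustrative only.
defects_found_pre_release = 180    # found by the test team before the release
defects_removed_pre_release = 170  # actually fixed before the release
defects_found_post_release = 20    # reported after the release

total_defects_found = defects_found_pre_release + defects_found_post_release

# Defect removal efficiency (development team): defects removed before release
# divided by all defects found before and after the release.
dre = defects_removed_pre_release / total_defects_found

# Defect detection percentage (test team): defects found before release
# divided by all defects found before and after the release.
ddp = defects_found_pre_release / total_defects_found

print(f"DRE: {dre:.0%}")  # 85%
print(f"DDP: {ddp:.0%}")  # 90%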

Both of these software quality metrics, especially when reviewed together, provide objective measurements of the releases themselves. When the metrics of various releases are compared to each other -- assuming the metrics are calculated at the same length of time post-production -- they could be used to evaluate the effectiveness of the teams relative to each other. Another way of using these metrics to evaluate teams would be to compare the ratios for each team over a series of releases.
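
To show how that comparison might look over a series of releases, here is a small sketch. The team names and defect counts are hypothetical, and it assumes each release's counts were collected the same length of time after going to production.

# Hypothetical release history per team. Each tuple is
# (defects removed pre-release, defects found pre-release, defects found post-release).
release_history = {
    "Team A": [(95, 100, 10), (88, 92, 8), (120, 130, 12)],
    "Team B": [(70, 75, 25), (60, 66, 30), (80, 85, 28)],
}

for team, releases in release_history.items():
    for i, (removed_pre, found_pre, found_post) in enumerate(releases, start=1):
        total_found = found_pre + found_post
        dre = removed_pre / total_found   # defect removal efficiency
        ddp = found_pre / total_found     # defect detection percentage
        print(f"{team} release {i}: DRE {dre:.0%}, DDP {ddp:.0%}")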

However, there are other factors that are critical in the evaluation of teams that these metrics, and most other software quality metrics, don't capture. Some of these factors include the complexity of the code, the severity of the defects found post-production, and the relative amounts of time scheduled for coding and testing the releases. For example, one team may have high ratios, yet a critical defect may have leaked into production and caused a loss of customers. A customer satisfaction metric might address these issues, but it would be subjective rather than objective.

In conclusion, post-production software quality metrics may be useful in evaluating teams, but they don't provide a complete picture of a team's productivity and competency, especially when comparing teams against each other.

Next Steps

Better metrics for planning and tracking data center investments

Find the project metrics you need to track your Agile team's performance
