
Test metrics and use case coverage during testing

By Mike Kelly

If use cases are being used to drive system testing, is there an industry-accepted percentage of those use cases to be run in UAT?

Regardless of whether you're in user acceptance testing (UAT) or some other phase of testing, if there is an industry-accepted percentage for use case coverage, it's news to me. I've not seen any metrics like that, and I more than likely wouldn't believe them even if I had. The first problem I see with such a metric is that it generalizes across an industry where nothing is standard. For example:

  • What does coverage mean for a use case? Would it matter if some use cases had one test case and others had 100 test cases?


  • When we talk about percent coverage, does it matter which use cases are covered out of that percentage? Does the size of the use case matter? (What if I had 20 use cases that were one page each and one use case that was 200 pages?) Or does the importance of a use case matter? (What if I had three high-priority use cases and ten low-priority use cases?) The sketch after this list illustrates how much weighting by size and priority can change the number.


  • Does it matter if the use cases don't cover everything we've implemented for this release? What does it mean to cover a use case in terms of quality criteria other than functionality? What if your use cases don't specify everything -- or even the majority -- of the application's functionality?


  • Does it matter that as an "industry" we don't all do UAT with the same people executing the tests? What if your UAT is done with actual users and mine is done with user representatives? Or what if your UAT is done with customer-defined unit tests?
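
To make the size and priority point concrete, here is a rough sketch in Python with entirely made-up use cases and weights. It isn't a recommended metric -- just an illustration of how the same test run can produce wildly different "percent covered" numbers depending on how you count.

```python
# A minimal sketch (all use case data here is hypothetical) showing how the
# same test run can yield very different "coverage" numbers depending on
# whether you count use cases naively or weight them by size and priority.

use_cases = [
    # (name, pages, priority weight, covered by at least one passing test?)
    ("Login",          1, 3, True),
    ("Search catalog", 1, 1, True),
    ("Checkout",     200, 3, False),  # the big, high-priority one is untested
]

# Naive view: fraction of use cases touched by at least one passing test.
naive = sum(uc[3] for uc in use_cases) / len(use_cases)

# Weighted view: weight each use case by its size (pages) and priority.
total_weight   = sum(pages * priority for _, pages, priority, _ in use_cases)
covered_weight = sum(pages * priority
                     for _, pages, priority, covered in use_cases if covered)
weighted = covered_weight / total_weight

print(f"Naive use case coverage:  {naive:.0%}")     # 67%
print(f"Size/priority weighted:   {weighted:.0%}")  # about 1%
```

The same test run reports 67% coverage under one counting scheme and roughly 1% under another, which is exactly why a single industry-wide percentage is hard to take seriously.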

I believe it's impossible for someone to create an industry metric, because I've never worked at two companies where either the use cases or the user acceptance testing group were similar enough to draw a meaningful comparison between two projects. In addition, I think there are more valuable indicators of completion than percent-complete metrics.

I have two required reads on the topic. The first is How to Lie With Statistics by Darrell Huff. It's a relatively fun read that covers all the basics. The second is an article (PDF) more directly applied to what we do -- "Software Engineering Metrics: What Do They Measure and How Do We Know?" -- by Cem Kaner and Walter P. Bond.

In general, I would encourage you to be wary of any metric you see about the industry. No one can know what the whole industry is doing; at best it's a sampling of the industry, and a small and biased one at that. Instead, I would encourage you to think about coverage from multiple perspectives and to tailor it to your specific situation, so that you can develop some meaningful indicators of test completion.
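
As one illustration of what "multiple perspectives" might look like in practice, here is a small sketch that reports coverage along several dimensions rather than collapsing everything into a single percentage. The dimensions and counts are invented for illustration; yours would come from your own project and stakeholders.

```python
# A minimal sketch (dimensions and numbers are hypothetical) of reporting
# test coverage along several perspectives instead of one industry-style
# percentage. Each entry is (items exercised, items identified).

coverage_views = {
    "use cases exercised":       (12, 15),
    "high-risk areas exercised": (4, 9),
    "data variations tried":     (30, 120),
    "platform/browser combos":   (2, 6),
}

for view, (exercised, identified) in coverage_views.items():
    print(f"{view:28s} {exercised:>3}/{identified:<3} "
          f"({exercised / identified:.0%})")
```

A report like this doesn't tell you when to stop testing, but it gives you and your stakeholders several concrete things to talk about instead of one number that hides all of them.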

A great look at the complexities in doing this can be found in Testing Education's free online black box software testing course on the topic of the measurement problem and the impossibility of complete testing. The course has a video, some slides and a number of excellent suggested readings. Knowing when your testing is complete is a difficult problem. You'll need to figure out what's meaningful to you, and work with your UAT stakeholders to decide which metrics make sense in your context.

11 Aug 2008
