Regardless of whether you're in user acceptance testing (UAT) or some other phase of testing, if there is an industry-accepted percentage for use case coverage, it's new to me. I've not seen any metrics like those, and I more than likely wouldn't believe them even if I had. The first problem I see with such a metric is that it generalizes an industry where nothing is standard.
I believe it's impossible for someone to create an industry metric, because I've never worked at two companies where either the use cases or the user acceptance testing group were similar enough to draw a meaningful comparison between two projects. In addition, I think there are more valuable indicators of completion than percent-complete metrics.
I have two required reads on the topic. The first is How to Lie With Statistics by Darrell Huff. It's a relatively fun read that covers all the basics. The second is an article that applies directly to what we do (PDF): "Software Engineering Metrics: What Do They Measure and How Do We Know?" by Cem Kaner and Walter P. Bond.
In general, I would encourage you to be wary of any metric you see about the industry. No one can know what the whole industry is doing; at best, any such number is a sampling of the industry, and a small and biased one at that. Instead, I would encourage you to think about coverage from multiple perspectives and to tailor it to your specific situation, so you can develop some meaningful indicators of test completion.
A great look at the complexities of doing this can be found in Testing Education's free online black box software testing course, in the material on the measurement problem and the impossibility of complete testing. The course has a video, some slides, and a number of excellent suggested readings. Knowing when your testing is complete is a difficult problem. You'll need to figure out what's meaningful to you and work with your UAT stakeholders to determine what metrics might be meaningful in your context.
11 Aug 2008