
An approach to test metrics that stands up to scrutiny

Matt Heusser offers advice on reliable test case tracking and on when to deem a number untrustworthy.

Help! My manager wants me to track the number of test cases vs. the number executed. What do I do?

First, judging by the tone of the question, let me guess that you think this is a bad thing. So, let's start by reviewing the argument that tracking test cases is necessarily negative.

Michael Bolton, a consulting software tester based in Toronto, published a blog post in 2012 titled "Why pass vs. fail rates are unethical." The argument Bolton makes is that the information revealed by passing vs. failing can be worse than no information at all: It can be misleading. One showstopper bug, just one, might stop shipping, while a dozen cosmetic errors might not. He puts it this way:

"When a manager interviews a candidate for a job, and halfway through the interview he suddenly starts shouting obscenities at her, will the number of questions the manager asked have anything to do with the hiring decision? If the battery on a Tesla Roadster is ever completely drained, the car turns into a brick with a $40,000 bill attached to it. Does anyone, anywhere, care about the number of passing tests that were done on the car?"

While I can personally appreciate Bolton's point, and would feel very uncomfortable giving out pass vs. fail rates, it is worth mentioning that there is another side to the argument. Cem Kaner, a professor at Florida Tech, wrote a blog post shortly after Bolton's in which he suggests that if the client is paying our salary, and we make the risks clear, and the client wants the number anyway, then giving the number is a reasonable course of action. Kaner puts it this way:

"Defect removal efficiency (DRE) is a fairly popular metric. It's in lots of textbooks. People talk about it at conferences. So, no matter what I say about it, my client might still want that number. Maybe my client's boss wants it. Maybe my client's customer wants it. Maybe my client's regulator wants it. This is my client's management context. I don't think I'm entitled to know all the details of my client's working situation, so maybe my client will explain why s/he needs this number and maybe s/he won't. If the client says, 'No, really, I need the DRE,' I accept that statement as a description of my client's situation and I say, 'OK,' and give the number."

It doesn't really matter whether we are talking about DRE, pass vs. fail rates or, perhaps, test cases executed vs. those that remain in order to predict when we'll be done. If you feel uncomfortable about a number, I'd suggest talking about it with the client and taking into account the cost of calculating a number that might be dangerous.
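To make the arithmetic behind these numbers concrete, here is a minimal sketch of the two metrics discussed above. The function names and sample counts are my own illustration, not anything from Bolton's or Kaner's posts; the point it demonstrates is the one Bolton makes, that the formula treats every test case as equal, so a near-perfect pass rate can coexist with a single showstopper failure.

```python
def pass_rate(passed, executed):
    """Percentage of executed test cases that passed."""
    return 100.0 * passed / executed


def defect_removal_efficiency(found_before_release, found_after_release):
    """DRE: the share of all known defects that were caught before release."""
    total = found_before_release + found_after_release
    return 100.0 * found_before_release / total


# Hypothetical run: 99 cosmetic checks pass and one showstopper fails.
# The pass rate looks excellent, yet the one failure may block shipping.
print(pass_rate(99, 100))                # 99.0

# Hypothetical project: 45 defects found in test, 5 found by customers.
print(defect_removal_efficiency(45, 5))  # 90.0
```

Neither number carries any notion of severity, which is exactly why publishing it without context can mislead.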

Personally, I tend to work as a contractor, where I legally own my own work process. If I don't believe in a number because it is, say, an invalid measurement (not all test cases are equal), then I won't publish the number -- but not everyone has that luxury.

Finally, it is possible that you asked because you don't write test cases -- you do testing -- and management is so separated from the work that they don't even realize you are tracking the work with a different method. In that case, you could look into something like session-based test management, which can produce metrics that stand up to scrutiny. But, honestly, I'd be more likely to suggest having a frank conversation with management about how your group does the work. That way, you can ascertain which metrics make sense.

I hope that helps. If you would like to provide a little more context, I'd be happy to do a follow-up answer.

Have a question about tracking test cases? Let us know, and we'll pass your question on to one of our experts.


Join the conversation



Interesting article. I'll add this as a follow-up: What do you say to managers who have signed on to the CI/CD and test automation bandwagon, which, as a side effect of running those tests (checks) frequently, may automatically generate metrics on how often, or what percentage of, tests are passing or failing on average? How do we better educate managers on why they need to be careful when they read such statistics?
I want to agree, but if test cases executed vs. total test cases is not a metric to be concerned about, how do we report testing progress as a percentage on any given day of a testing month? Isn't it better to say that testing is X% complete on a given day than that testing is in progress for 29 days and is marked complete on the 30th? Without this valuable metric, how can
1) project management understand whether the testing effort needs additional backup resources, due to an unexpected event impairing the test plan?
2) a test lead proactively plan for and engage the needed resources to ensure the testing gets completed on time, on any day throughout the testing?

Or perhaps I am missing something obvious here.

The contribution of this metric to the project's most important deliverable might not even be significant in terms of the deliverable's performance, but I feel things like this play a major role in proactively planning to ensure the deliverable is released on time.

Please feel free to let me know if I am wrong or am missing what you are trying to suggest.