Help! My manager wants me to track the number of test cases vs. the number executed. What do I do?
First, let me guess that by the tone of the question, you think that is a bad thing. So, let's start by reviewing the argument that tracking test cases is necessarily negative.
Michael Bolton, a consulting software tester based in Toronto, published a blog post in 2012 titled "Why pass vs. fail rates are unethical." Bolton's argument is that the information revealed by passing vs. failing tests can be worse than no information at all: it can be misleading. One showstopper bug, just one, might stop shipping, while a dozen cosmetic errors might not. He puts it this way:
"When a manager interviews a candidate for a job, and halfway through the interview he suddenly starts shouting obscenities at her, will the number of questions the manager asked have anything to do with the hiring decision? If the battery on a Tesla Roadster is ever completely drained, the car turns into a brick with a $40,000 bill attached to it. Does anyone, anywhere, care about the number of passing tests that were done on the car?"
While I can personally appreciate Bolton's point, and would feel very uncomfortable giving out pass vs. fail rates, it is worth mentioning that there is another side to the argument. Cem Kaner, a professor at Florida Tech, wrote a blog post shortly after Bolton's in which he suggests that if the client is paying our salary, we have made the risks clear, and the client still wants the number, then giving the number is a reasonable course of action. Kaner puts it this way:
"Defect removal efficiency (DRE) is a fairly popular metric. It's in lots of textbooks. People talk about it at conferences. So, no matter what I say about it, my client might still want that number. Maybe my client's boss wants it. Maybe my client's customer wants it. Maybe my client's regulator wants it. This is my client's management context. I don't think I'm entitled to know all the details of my client's working situation, so maybe my client will explain why s/he needs this number and maybe s/he won't. If the client says, 'No, really, I need the DRE,' I accept that statement as a description of my client's situation and I say, 'OK,' and give the number."
It doesn't really matter whether we are talking about DRE, pass vs. fail rates, or test cases executed vs. those remaining (say, to predict when we'll be done). If you feel uncomfortable with a number, I'd suggest talking it over with the client and taking into account the cost of calculating a number that might be dangerous.
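For concreteness, DRE is commonly defined as the percentage of defects caught before release out of all defects found, before and after release. Bolton's and Kaner's objections aside, the arithmetic itself is simple. Here is a minimal sketch; the function name and the treatment of the zero-defect case are my own illustrative choices, not part of any standard:

```python
def defect_removal_efficiency(found_before_release: int,
                              found_after_release: int) -> float:
    """DRE = defects found before release / total defects found, as a percentage.

    This mirrors the common textbook definition of the metric; note that,
    per Bolton's caution, it weights a showstopper and a cosmetic bug equally.
    """
    total = found_before_release + found_after_release
    if total == 0:
        # No defects found anywhere; conventionally treated here as 100%.
        return 100.0
    return 100.0 * found_before_release / total

# Illustrative numbers only: 90 bugs caught in test, 10 reported by customers.
print(defect_removal_efficiency(90, 10))  # 90.0
```

The point of showing the formula is that the number is easy to produce; whether it is meaningful for your client's context is the hard part.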
Personally, I tend to work as a contractor, where I legally own my own work process. If I don't believe in a number because it is, say, an invalid measurement (not all test cases are equal), then I won't publish the number -- but not everyone has that luxury.
Finally, it is possible that you asked because you don't write test cases -- you do testing -- and management is so separated from the work that they don't realize you are tracking it with a different method. In that case, you could look into something like session-based test management, which can produce metrics that stand up to scrutiny. Honestly, though, I'd be more likely to suggest having a frank conversation with management about how your group does the work; that way, you can figure out together which metrics make sense.
I hope that helps. If you would like to provide a little more context, I'd be happy to do a follow-up answer.
Have a question about tracking test cases? Let us know, and we'll pass your question on to one of our experts.
This was first published in December 2013