Stick around in testing long enough, and you are bound to be asked some tough questions. Some of them will be reasonable and fair; others might not be. In this tip, I'll help you identify and disarm dysfunctional test questions, starting with the classic: "Why didn't QA find that bug?"
About 20 years ago, Tom DeMarco defined the unfair software question in his article "Why Does Software Cost So Much?" DeMarco claims it is not a question at all, but an assertion or a negotiating tactic. These questions come up in software testing, too. Here are a few of my favorites:
- "Why didn't QA find that bug?"
- "Why is QA always the bottleneck?"
- "Why do we have so many bugs?"
- "What can we do to improve throughput from QA?"
- "Are we done testing yet?"
Taken at face value, these may appear to be fair questions. But let's look at the assertions that are hidden inside of them:
- "QA should have found that bug."
- "QA is the bottleneck."
- "We have too many bugs."
- "Testing is taking too long."
- "Testing should be finished by now."
These are evaluations and judgments. They are asked in a manner where even a vigorous, righteous defense is still ... a defense.
Trust and safety
Now, these questions are a kind of trick, a technique. They assume a premise -- that there are too many bugs -- and then go on to assume that the testers are somehow responsible for the quality of the software.
The best way to answer questions that take a blaming posture is to never be asked -- to prevent them through education and communication. One way to do that is to be clear on what you are testing when the process starts. Then, when the bug is found, you can say "it didn't match the risk profile we all agreed on four weeks ago." Another approach is education: I know of one testing manager who gave every executive a copy of Perfect Software: And Other Illusions about Testing as a Christmas present.
Prevention is a great goal, but I'm afraid it doesn't help software testers who have a problem right now. So my goal today will be to disarm the question, render the blame ineffective and put the team in a position to move toward trust, safety and teamwork. Do this consistently, and eventually when you've got a new vice president asking the tough questions, you may just have the developers or product managers explain how the question is unfair. Hope springs eternal.
That's about enough theory. Let's say you've just been asked "Why didn't QA find that bug?" Here are a few ways to respond:
1. Embrace the premise.
The assertion here is that QA should have found the bug. We could, however, answer the question on its face, assuming there is a completely valid reason the team did not find the bug. Here are a few valid explanations I have used:
- Failure of imagination: The defect was a failure that no one imagined. The developer didn't imagine it; after all, he wrote it. The product management team didn't imagine it; otherwise it would have been more explicit in the requirements. And the tester didn't imagine it. It seems to me that the whole team failed here. Perhaps, in the future, if the test team had more time to imagine out-of-the-box scenarios, they might be able to come up with tests that would cover such a case.
Now, let's talk about schedule pressure and what it does to imagination ...
- Successful risk management: Okay. One bug got through. Fair enough. Now, let's say that in the two weeks of testing prior to this, the team was working constantly and found 15 blocker bugs, 19 critical and 54 normal. In order to have tested for that condition, we'd have had to drop some other testing. So, which of the 15 blockers would you have preferred we not find?
- Communications failure: We can't test for every possible system input and combination; we have to triage our test cases. In this case, we failed to triage in the right order. Now, if we had been warned that risk existed in the timing module, we would have given it more test attention. In the future, when technical staff knows of these risks, they need to communicate them to QA.
2. Challenge the premise.
Wait a minute: Should QA have found that bug? Did you have enough time? Were you aware of the risks? Was the setup reasonable? Were your requests for tools to find this exact type of bug denied? This could be a great time to point out that "given our current operating environment, I wouldn't expect the test teams to find bugs of this nature. If we want to, here's what we can change to find them ..."
3. Reply with a question.
Instead of answering, "here's why we couldn't find the bug," you could reply with a question of your own. This changes your role from victim to, well, someone actually trying to assure quality.
Here is an example of a question-based response: "That's a fair question, but I'd rather prevent these bugs than find them in test. Another question we might consider is: Why was the bug introduced in the first place? Did development write the bug? Did we fail at gathering requirements? Or communicating them?"
Doing this opens up the team to a broader interpretation -- such as where the bug came from -- that might avoid blame and hostility.
In my experience, this can have three possible outcomes: the development team brainstorms and actually prevents the bug in the future, they shrug and say "mistakes happen," or someone gets very upset. The first two cases are fine. In the third, sometimes you can reply: "I'm not really fond of the question myself." But be careful. Know your audience. We are trying to build trust and educate here, not one-up.
4. Reframe the discussion.
Instead of being tough on yourself as a team, you could point out that you are talking about one bug, only one, out of hundreds caught in test and hundreds of thousands, if not millions, of lines of code. If that's not success, what is? In other words, you can change the nature of the meeting from a blame-storming session to what it should be -- a celebration.
If the defect really was so bad the release should not be celebrated, a second way to reframe is to bring the questioner into the process. "That's a great question, Bob. Do you think QA should have found that bug? I'd be interested in the areas you see in which we could improve." This is still inviting criticism, but it is also inviting Bob to be part of the solution, instead of a critic sitting on a fence.
5. Change the subject.
In his book Quality Software Management, Volume III, Jerry Weinberg suggests that irrelevant behavior can break a team out of blame. He uses an example like "Hey, look, there is a calico cat on the roof. I wonder how he got there?" It's hard to blame someone while looking at a kitten. But know your audience; some people will bring up the dysfunctional question time and time again.
Teamwork consultant Pollyanna Pixton has an additional suggestion when this happens: Just refuse to answer dysfunctional questions.
In other words, when someone asks a question that appears innocent and helpful but is clearly dysfunctional, say nothing and look at them as if they have grown a second head. If they ask the question again, continue to stare. Eventually, they'll stop. Really.
Doormat or diplomat?
I've tried to help provide some concrete suggestions to respond to a reasonable-sounding question used unfairly, such as "Why didn't QA catch that bug?" Yet the consistent reply from my peer review of this tip was often, "But that would make my boss mad."
Yes, it might. And the traditional alternatives -- retreating or placating -- might be less painful today. Except that now you've set a precedent as a doormat. My advice: Know your value, have some confidence and get good at responding to unreasonable questions in the moment.
Yes, the boss might get mad. You might have some conflict. Five years from now, you might find yourself working at a different job. I suspect it might be a better one.
About the author: Matt Heusser is a technical staff member of SocialText, which he joined in 2008 after 11 years of developing, testing and/or managing software projects. He teaches information systems courses at Calvin College and is the original lead organizer of the Great Lakes Software Excellence Conference, now in its fourth year. He writes about the dynamics of testing and development on his blog,