
CAST 2009: Challenging one of the classic ideas around testing, an interview with Doug Hoffman

At next month’s Conference of the Association for Software Testing (CAST) in Colorado Springs, Doug Hoffman will call into question one of the most fundamental ideas in software testing: Do tests really pass or fail? I had the opportunity to talk with Hoffman about his conference session, titled “Why tests don’t pass.”

Doug Hoffman has more than thirty years of experience in software quality assurance and holds degrees in computer science and electrical engineering as well as an MBA. He currently works as an independent consultant with Software Quality Methods, LLC. Hoffman is involved in just about every organization having to do with software quality: he is an ASQ Fellow, a member of the ACM and IEEE, and a founding member and director of the Association for Software Testing.

When asked to summarize his talk, Hoffman got straight to the point: “The results of running a test aren’t really pass or fail. I think this message will resonate with part of the audience and may inspire others to challenge the idea. CAST is a venue where such discussion is encouraged.”

The idea is expanded on in the summary for his talk:

Most testers think of tests passing or failing. Either they found a bug or they didn’t. Unfortunately, experience repeatedly shows us that passing a test doesn’t really mean there is no bug. It is possible for bugs to exist in the feature being tested in spite of passing the test of that capability. It is also quite possible for a test to surface an error that goes undetected at the time. Passing really only means that we didn’t notice anything interesting.

Likewise, failing a test is no guarantee that a bug is present. There could be a bug in the test itself, a configuration problem, corrupted data, or a host of other explainable reasons that do not mean that there is anything wrong with the software being tested. Failing really only means that something noticed warrants further investigation.

“I think all we can really conclude from a test is whether or not further work is appropriate,” Hoffman said. “The talk goes into why I think this, and some of the implications of thinking this way.”

When I asked Hoffman what inspired him to question the binary nature of a test, he said: “I was discussing the value (or lack of value) of pass/fail metrics when it occurred to me how bogus the numbers were, and some of the reasons. That led me to think through what ‘pass’ and ‘fail’ mean.”

So where does this leave teams that use pass/fail metrics? What does Hoffman see as a better alternative? Instead of a world of pass/fail, which doesn’t inspire additional work or thinking about the problem, he sees a system where a result might lead you down the road to additional investigation or bug reporting. With each result, you have to ask additional questions before you move on. It challenges the tester to evaluate whether they are really done with something, or whether they’ve gotten all the value they can from an activity.
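Hoffman doesn’t prescribe a particular implementation, but to make the idea concrete, here is a minimal sketch (my own illustration in Python, with hypothetical names and thresholds, not anything from Hoffman’s materials) of a test result that is richer than a boolean: the only question each outcome answers is whether further work is warranted.

    from enum import Enum, auto

    class Verdict(Enum):
        """A test outcome that is richer than pass/fail."""
        NOTHING_NOTICED = auto()          # "pass": we didn't notice anything interesting
        WARRANTS_INVESTIGATION = auto()   # "fail": something noticed needs a closer look

    def run_login_check() -> Verdict:
        # Hypothetical check; the response and thresholds are made up for illustration.
        response = {"status": 200, "latency_ms": 4300}
        if response["status"] != 200 or response["latency_ms"] > 2000:
            return Verdict.WARRANTS_INVESTIGATION
        return Verdict.NOTHING_NOTICED

    # Rather than incrementing a pass/fail counter, each result prompts a question:
    # is additional investigation, a bug report, or a better check appropriate?
    if run_login_check() is Verdict.WARRANTS_INVESTIGATION:
        print("Something interesting was noticed; investigate before moving on.")

The point is not the code itself but that neither outcome claims the software is correct or broken; both are prompts for a human decision.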

“Even with exploratory sessions, we conclude whether or not there are problems to report now and further avenues where we think we’ve detected problems, or not. For discrete test cases it is much clearer whether or not further work is indicated. In any case, most people refer to the software as failing or passing based on these indications.”

“The idea of a test passing/failing, indeed the idea of discrete tests, may be foreign to some people who have only known exploratory testing. In those contexts there may be audience members who may challenge that tests don’t pass or fail because the concepts aren’t applicable.”

So for Hoffman, testers doing exploratory testing face this issue all the time and already have methods for dealing with it. “There also could be criticism that I look at test results as being binary,” said Hoffman. “Others may consider there to be more than two outcomes. Again, I think it depends on how pass and fail are defined.”

In the past, Hoffman has done extensive work on test oracles. An oracle is the principle or mechanism by which we recognize a problem (that is, it’s how you tell good behavior from bad). When I asked how this thinking relates to that oracle work, Hoffman replied: “This is one conclusion I’ve drawn from that oracle work. Over the years I stopped talking about passing and failing, but had never consciously realized it.”
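As a rough illustration of what a heuristic oracle can look like in code (this sketch is mine, not Hoffman’s, and the square-root function under test is made up), note that the oracle below only reports that nothing interesting was noticed; it does not prove the implementation correct for inputs it never saw.

    import math

    def sqrt_oracle(x: float, actual: float, tolerance: float = 1e-9) -> bool:
        """Heuristic oracle: does 'actual' look like a plausible square root of x?
        Returning True only means nothing interesting was noticed; it says nothing
        about inputs that were never exercised."""
        return x >= 0 and abs(actual * actual - x) <= tolerance * max(1.0, x)

    def sqrt_under_test(x: float) -> float:
        # Stand-in for the implementation being tested.
        return math.sqrt(x)

    for value in (0.0, 2.0, 1e6):
        result = sqrt_under_test(value)
        if not sqrt_oracle(value, result):
            print(f"sqrt({value}) = {result} warrants further investigation")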

For more on the upcoming show, check out the CAST conference website. I also recommend, if you haven’t already, familiarizing yourself with Doug Hoffman’s work, which is available at Software Quality Methods.
