This year’s STPCon had something a little different: hands-on sessions in which not only the presenter but the participants actually tested software.
The first session of the conference was Justin Hunter’s “Let’s Test Together,” which promised to not only introduce a new test design method, but to change the way we (the audience) think about software testing.
It was an interesting session. While I have neither a time machine nor permission to have videotaped the session, I do have the next best thing: Hunter’s slides plus a summary.
The first thing Hunter did was hand out a series of ‘spools,’ representing the requirements for mortgage-origination software. The software had inputs allowing six types of credit rating, five ranges of income, six property types, and six different locations in the United States the mortgage could originate in (slide five). Hunter asked us to look at each range (not boundaries) and come up with a number of suggested test cases; a simple 6x5x6x6 multiplication yields 1,080 combinations — and real software of this type would have far more than six inputs.
Next, Hunter pointed out the assumption behind those 1,080 test cases: that a bug might require some specific combination of all the inputs to appear. He then pulled out historical data from the National Institute of Standards and Technology showing that most bugs are tripped by either a single condition (say, all applicants in income range number one) or a combination of two conditions — say, income range one combined with credit score three. Thus, based on historical data, if you “just” ran a set of tests covering every pair of values, you would find something like 85% of the bugs in ten or eleven test ideas.
To prove it, Hunter ran a computer program to generate those ten tests, then created a board with all 23 (6+5+6+6) options. He had an audience member throw darts at two elements of the grid, and, yes, every pair the young gent hit was covered in those ten cases.
We call this solution “All-Pairs,” or pairwise testing. After explaining the technique, Hunter drew a chart, from high-value to low-value, diagramming the kinds of problems pairwise testing was effective at (limiting configuration or input types), as well as the types it had little or no application for, such as error and exception handling.
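To make the idea concrete, here is a minimal sketch of a greedy pairwise generator in Python. The parameter names and values are illustrative placeholders standing in for the mortgage example’s shape (six credit ratings, five income ranges, six property types, six locations); this is not Hexawise’s actual algorithm, just one common way to build an all-pairs test set.

```python
from itertools import combinations, product

# Hypothetical parameters mirroring the mortgage example's shape;
# the names and values are placeholders, not the actual slide content.
params = {
    "credit":   [f"credit{i}" for i in range(1, 7)],   # 6 credit ratings
    "income":   [f"income{i}" for i in range(1, 6)],   # 5 income ranges
    "property": [f"prop{i}"   for i in range(1, 7)],   # 6 property types
    "location": [f"loc{i}"    for i in range(1, 7)],   # 6 locations
}
names = list(params)

# Requirement: every pair of values from two different parameters
# must appear together in at least one test case.
uncovered = set()
for a, b in combinations(names, 2):
    for va, vb in product(params[a], params[b]):
        uncovered.add(((a, va), (b, vb)))

tests = []
while uncovered:
    # Greedy step: among all full combinations, pick the test case
    # that covers the most still-uncovered pairs.  An exhaustive scan
    # is fine at this size (only 1,080 combinations).
    best, best_covered = None, set()
    for combo in product(*params.values()):
        test = dict(zip(names, combo))
        covered = {p for p in uncovered
                   if all(test[n] == v for n, v in p)}
        if len(covered) > len(best_covered):
            best, best_covered = test, covered
    tests.append(best)
    uncovered -= best_covered

print(f"{len(tests)} tests cover all pairs (vs 1,080 exhaustive)")
```

The test count a greedy algorithm produces depends on the parameter sizes — with six values in three of the parameters it can never drop below 36, since all 36 property-by-location pairs must each appear somewhere — but it is still a small fraction of the 1,080 exhaustive combinations.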
It was a neat session, and his passion came through. While Justin Hunter did not invent the pairwise idea, you might say it runs in the family: his father, William G. Hunter, was a professor of statistics and a contributor to the design-of-experiments movement in the 1980s.
But wait, there’s more
It turns out that doing the fancy math to generate the table — to figure out those ten test cases — is non-trivial. And while you could buy a statistics book with pre-cooked templates that can be adapted, that takes a fair bit of work as well.
Hunter has software, Hexawise, that participants could use to speed up the process. Instead of doing the manual work, you let the computer generate the tables; you can also ask for the next ten test ideas, and the next ten after that, on and on, sorted by highest probability of detection.
Yes, it does get better than this — you could win the lotto. I’m just afraid the odds on that are a little longer than 1,080 to ten.
But hey, we were in Vegas.
A 14-day trial of Hexawise is available here.