Software testing techniques: Overcoming biases

Gerie Owen offers software testing techniques to overcome biases and boost code quality and answers the pressing "how did I miss that bug?" question.

Gerie Owen

How did I miss that bug?

No matter what software testing techniques we apply, we have all asked ourselves that question.

Here's a surprising answer: As software testers, we miss bugs -- even those that become glaringly obvious after the fact -- because we are hampered by our own biases. Our biases affect what we look for; how we design, set up and execute tests; and how we interpret test results.

These biases can result in poor test performance and incorrect data interpretation. What we measure and what decisions we base on the data are driven by our own understandings and misunderstandings of the project and the goals of testing.

In this tip, I explore the concept of bias and offer some software testing techniques to help overcome our individual biases.

Princeton University psychologist Daniel Kahneman and the late Amos Tversky, formerly a Stanford University psychologist, developed the idea of cognitive bias as a pattern of deviation in judgment. Their research demonstrated how poorly people think critically in complicated situations. To compensate, people tend to use heuristics, or rules of thumb, to make decisions when the subject matter is complicated or time is limited. Heuristics are effective in some situations but lead to errors in others. Our biases also produce preconceived notions about situations, things and people, and our preconceived notions as testers about the application under test, the requirements or the developers affect our testing. Let's review some biases that especially impact testers.

The representative bias

The term "representative bias" describes what happens when we judge the likelihood of an occurrence by how closely the situation resembles similar situations we have seen before. Testers may be influenced by this bias when designing data matrices; as a result, they may fail to test data in all states or to test enough types of data. For example, if the code works for a new customer order as well as an order in process, why bother testing it against a completed order?
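One way to guard against this bias is to enumerate every data state explicitly, so that no state gets skipped just because it "resembles" one that already passed. The sketch below is purely illustrative: the order states and the `process_order` function are hypothetical stand-ins, not part of any real system discussed here.

```python
# Hypothetical sketch: countering representative bias by testing every
# order state, not only the ones that resemble cases that already passed.
# ORDER_STATES and process_order() are illustrative stand-ins.

ORDER_STATES = ["new", "in_process", "completed", "cancelled"]

def process_order(state):
    # Stand-in for the application under test.
    if state not in ORDER_STATES:
        raise ValueError(f"unknown order state: {state}")
    return f"processed {state} order"

def test_all_order_states():
    # Driving the test from the full list makes it impossible to quietly
    # omit "completed" the way the biased tester in the example did.
    results = {state: process_order(state) for state in ORDER_STATES}
    assert len(results) == len(ORDER_STATES)

test_all_order_states()
```

Deriving test cases from a single authoritative list of states, rather than from memory of which cases "usually matter," is the design point here.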

The congruence bias

The congruence bias is at work when testers exercise only the "happy" path and miss the negative test cases. We also fall prey to this bias when we underestimate the time needed for a test cycle. Often, when we test applications on which we are "experts," we are limited by the "curse of knowledge" bias and miss defects we would have caught had we approached the application from the perspective of a new user.
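A simple discipline against congruence bias is to pair every happy-path check with negative cases that should be rejected. The following is a minimal sketch; `validate_quantity` and its 1-to-100 rule are hypothetical examples, not drawn from the article.

```python
# Hypothetical sketch: pairing happy-path tests with the negative cases
# that congruence bias tempts us to skip. validate_quantity() and its
# accepted range are illustrative only.

def validate_quantity(qty):
    # Accept whole numbers from 1 to 100 inclusive.
    if not isinstance(qty, int) or isinstance(qty, bool):
        raise TypeError("quantity must be an integer")
    if not 1 <= qty <= 100:
        raise ValueError("quantity out of range")
    return qty

# Happy path: the cases we instinctively write first.
assert validate_quantity(1) == 1
assert validate_quantity(100) == 100

# Negative cases: invalid inputs must be rejected, not merely ignored.
for bad in (0, 101, -5):
    try:
        validate_quantity(bad)
        raise AssertionError("expected ValueError")
    except ValueError:
        pass
```

Writing the negative loop next to the happy-path assertions keeps the two in balance and makes a missing rejection case visible at a glance.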

But what is going on when we miss the obvious bugs, the ones that are literally staring us in the face? This, too, is attributable to a bias known as inattentional blindness. Christopher Chabris and Daniel Simons demonstrated this bias in their famous invisible gorilla test. Subjects were asked to watch a video of players passing a basketball and to count the passes. Partway through, a person in a gorilla suit walked through the scene. About half of the subjects were so focused on counting the passes that they failed to see the gorilla. The same thing can happen to software testers: We become so focused on executing our test cases that we fail to see the obvious bug.

So how do we use the concepts of bias and preconceived notions to become better software testers? Rather than allowing our biases to hamper our testing efforts, we should recognize upfront that we have these biases and preconceived notions, and plan and execute our testing accordingly. For example, we can add time to our estimates and include negative scenarios in our test planning. Even though we may believe that one developer's code is always full of bugs and another's never has any, we should execute just as many tests on the code we perceive as bug-free as on the code we perceive as buggy. As testers, we can help each other prevent biased testing by peer reviewing test results -- the same way developers do code reviews -- or by running each other's test cases.

Don't miss the Invisible Gorilla

But how do we make sure we see the "Invisible Gorilla"? As testers, we need to approach our testing holistically: We should focus our attention on determining whether this is a quality product, as opposed to just tracing our test cases to requirements and executing all of them.

In addition to executing test cases, we should do exploratory testing. Conducting exploratory testing before running scripted test cases is especially useful because we have not yet made any assessments -- or developed any biases -- about the quality of the product. Lifeguards stay alert and focused by switching stations during their shifts; working exploratory sessions in between our scripted test runs can serve the same purpose.

Finally, we can use oracles, such as asking, "What would someone who doesn't know how this application works do?" And when we see something we can't believe we are seeing, we should trust our eyes and try to reproduce it.

Are your biases affecting the way you test software? Let us know and follow us on Twitter @SoftwareTestTT.
