
Software testing and the business of borders

Borders, like business rules, indicate that certain conditions apply within a certain region. So, if something unexpected happens while testing software, you may have crossed a border. Your best course, says Scott Barber in this column, is to question whether it is an intended function of the software rather than jumping to log a defect.

Scott Barber, software tester

I recently attended the fourth edition of the Workshop on Heuristic and Exploratory Techniques (WHET). WHET is an ongoing series of invitation-only, no-cost peer workshops for experienced testers and related professionals that emphasize mutual learning, the sharing of hands-on experiences and practical problem-solving.

WHET4* was specifically organized to explore the evolving perceptions of boundary testing. We accomplished this through experience reports from current or past projects and through activities designed to examine the complexities of software boundaries and boundary testing.

One of the first notions that the 16 of us in attendance came to realize is that the word "boundary," as applied to software testing, is not nearly as simple to understand as one might think. As an example, during one brainstorming exercise, half of the group identified more than 150 types of system and/or software boundaries that we may be interested in as testers, all in a span of less than 45 minutes.


One thing that struck home for me after the workshop ended was that the word "border" never appeared in two days of discussion about boundaries, although obviously borders and boundaries are intimately related. A border is commonly defined as "the outer edge of something," and frequently the line separating geo-political regions is referred to as a border. I find the notion of a line separating geo-political regions to be a particularly interesting type of boundary to think about when testing or designing tests for the simple reason that geo-political borders are typically somewhat arbitrary; knowing where the border is says nothing about what, if anything, changes at that border. And knowing the implications of crossing one border does not necessarily mean that you know the implications of crossing a different border, or even that same border if coming from the other direction.

As an example, after WHET, I drove from Seattle to Corvallis, Ore., with Dawn Haynes (one of the other participants in the workshop) and I noticed the "Welcome to Oregon" sign. I asked, "How would we be able to tell that we'd crossed the border between Washington and Oregon if there hadn't been a sign?" Over the course of a few minutes, we came up with the following list:

  • The exit numbers on the highway had counted down to 1, then reset to some larger number. (The first one I remember seeing was Exit 154.)
  • Shortly inside the border, there was a rest stop (conveniently called a Welcome Center) and a weigh station. What made this interesting was that these appeared to occur only on our side of the road.
  • Dawn remembered that the color of the asphalt on the roadway changed very close to the border.
  • When we stopped for gas, not only were the prices per gallon different, but the sales tax had changed. Additionally, we discovered, where gasoline is concerned, Washington is a self-serve state while Oregon is exclusively a full-service state.
  • The sizes of the speed limit signs were different.
  • The graphic on the route number sign changed.

Obviously some of those items are more interesting than others, but each serves as a reasonable indicator that you have probably crossed a border. Collectively, they seem to be a fairly reliable indicator that some kind of border has been crossed, even though they may not indicate which one, especially for someone crossing this particular border for the first time.

To tie this back to software and testing, the state border crossing indicators that Dawn and I thought of can be loosely equated to business rules. And what are business rules other than somewhat arbitrary decisions about how software will react to a particular type of input or set of conditions within a certain region? For instance, a book-ordering Web site may have a business rule that states that shipping and handling is free for all orders over a certain dollar value, say $50. In this case, testing the calculation of shipping and handling for orders near the $50 border is probably important and likely interesting. On the other hand, if you had not been told about the business rule wherein shipping and handling is free for orders over $50 and you found while testing that shipping and handling unexpectedly dropped to $0 for an order totaling $62, you may wonder if you've found a shipping-and-handling calculation error for orders totaling exactly $62. Or you could ask yourself if this observation might be an indicator of having crossed a border and not noticed.
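The shipping-and-handling rule above can be sketched in code. This is a minimal, hypothetical implementation (the function name, flat rate and the strict "over $50" reading are my assumptions, not from any real ordering system), shown alongside the kind of boundary-focused checks a tester might run near the $50 border:

```python
def shipping_cost(order_total: float, flat_rate: float = 4.99) -> float:
    """Hypothetical business rule: shipping and handling is free
    for all orders over $50; otherwise a flat rate applies."""
    return 0.0 if order_total > 50.00 else flat_rate

# Boundary-style checks clustered around the $50 border:
assert shipping_cost(49.99) == 4.99   # just below the border
assert shipping_cost(50.00) == 4.99   # on the border ("over $50" read strictly)
assert shipping_cost(50.01) == 0.0    # just past the border
assert shipping_cost(62.00) == 0.0    # the "surprising" $62 observation
```

Note that even this toy rule hides a boundary question worth asking out loud: does "over $50" include exactly $50.00? The sketch assumes it does not, but either reading is plausible, which is precisely why an unexpected $0 charge is better treated as a question than as a defect.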


In the first case, you'd probably log a defect that would immediately be rejected as "functions as designed" with no further information. If you approach the unexpected number the second way, however, instead of logging a defect, you'd probably ask something like "Is there some valid reason that I am unaware of for shipping and handling to drop to $0 near $62?" In my experience, taking the second approach tends to lead to better software because it encourages a conversation about how the software is intended to function.

For me, it feels natural to think about borders and business rules together. The relationship between the concepts helps me to identify things to test and to react to test results that feel confusing. I'm sure the relationship is incomplete and that there might be other metaphors out there that work better for other testers, but to the point that I find this one useful, I'm going to hold on to it.

* WHET4 was attended by Rob Sabourin, Karen N. Johnson, David Gilbert, Michael Bolton, Cem Kaner, Ross Collard, Doug Hoffman, Keith Stobie, Mike Kelly, Tim Coulter, Henrik Andersson, Scott Barber, Dawn Haynes, James Bach, Jon Bach and Paul Holland.

About the author: Scott Barber is the chief technologist of PerfTestPlus, vice president of operations and executive director of the Association for Software Testing and co-founder of the Workshop on Performance and Reliability.
