
Classic inspiration for modern software test problems in QA

Robert Sabourin is president of AmiBug.Com Inc., a Montreal-based international management consulting firm helping organizations develop quality software solutions in a timely and cost-effective manner. Throughout his career, Sabourin has been involved in all aspects of development, testing and management of software engineering projects, and teaches about testing through metaphors found in such sources as the Looney Tunes characters and other works. At StarWest, Sabourin will be talking about the testing lessons found in classic fairy tales. He provides a preview here, as well as discusses some other key testing issues.

What are the biggest issues testers face today?

The number one concern people bring up is time. It's basically working under pressure, not having enough time to do their job well. The other is dealing with turbulence and change: changing requirements, changing technology and changing business context, for whatever reasons.

Do organizations value and esteem testing as a specialty?

Most people who hire me have a healthy respect for testing and the field of testing, and they're trying to build into their organizations whatever respect is due to the role of testing. But some places I go to, not for testing mandates but for non-testing mandates, trivialize testing. They feel they can assign anyone. Unfortunately, I feel these people are poorly informed. When I get a chance to work with them, they start realizing that people have written articles about testing, that there are conferences, that people "practice" testing. So you see a bit of naivete.

Over the last 12 to 18 months, what are you seeing in terms of the adoption of automated testing tools?

People are starting to realize that a lot of the major vendors promoting tools as productivity improvements are basically full of hot air. I've seen major corporations with huge investments in automated technology that were expecting improvements in defect escapes from development to test, and they were getting the exact opposite effect. People are starting to realize that the place to sell testing tools is not on the golf courses of America, and just because vendors make claims doesn't mean they're real. Testing is a cognitive activity; it's a thinking activity. In the right hands of people who think, tools can be very helpful, but by themselves they do not improve your productivity.

That's true for any tool, but in testing we have been guilty of letting people make fantastic claims about tools; organizations buy them because of those claims and realize after a while that they aren't getting the benefit. So over the last couple of years we've been seeing more rational thinking about the tools. That said, in terms of technical tools, the ones I'm seeing a lot of are Ruby-based tool frameworks for test automation. The customers I'm seeing are getting the benefits they expect, and they're not spending huge amounts of money. Could you get more savings with other tools? Possibly. Why use these Ruby-based tools? Mostly because they're free.

So what can testers learn from classic fairy tales?
Fairy tales were stories spread from person to person, and a few people, like Hans Christian Andersen and the Brothers Grimm, historically collected them. They interviewed tons of people to put these stories together, and the stories were the way lessons were passed from generation to generation. Teaching by telling stories that carry valuable lessons is a fantastic way to teach people. I'm a big believer in doing retrospective analysis of projects: telling their stories, trying to find the critical things that failed or succeeded, and telling the story so it can travel from generation to generation.

Give me an example of a fairy tale a tester can learn from.
The Boy Who Cried Wolf. Someone new to testing sees a little problem and goes and complains about it, but it's not a real problem. That tester is always escalating things, raising them too high. Then the time they do find a real problem, no one listens to them. Young whippersnappers really cry wolf a lot of the time. The trick is to build credibility before you start rocking the boat too much. If you don't build the credibility, when there is a real problem people won't listen, not because the problem is poorly described but because you don't have the credibility. So testers should work to build credibility: not just find bugs, but advocate them well, report them well, be professional. It's not just the technical work you're doing; it's a real communications thing, and it's delicate. Developers are usually very proud of the work they've done, and if you're walking around saying, 'Your work is of poor quality; there are bugs in it,' people sometimes don't take that the right way.

How about another example?
I love Goldilocks. The porridge is too hot, too cold, just right. The mattress is too hard, too soft, just right. Everything she runs into she divides into these partitions. So whenever you find a variable in testing, I encourage people to partition it just like Goldilocks does. What's too hot, too cold, and just right? What are the partitions? And then when you're testing, make sure you've exercised all the partitions.
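The Goldilocks idea maps directly to what testers call equivalence partitioning. The sketch below is a hypothetical illustration, not from the interview: the `classify_temperature` function and its 40/60 degree boundaries are invented, but they show how a variable's domain splits into "too cold," "just right" and "too hot" classes, and how a tester would pick one representative value from each partition plus the boundary values.

```python
# Hypothetical sketch of Goldilocks-style equivalence partitioning.
# The function, partition names and boundary values are invented
# for illustration only.

def classify_temperature(temp_c):
    """Partition a porridge temperature (Celsius) into Goldilocks classes."""
    if temp_c < 40:
        return "too cold"
    if temp_c > 60:
        return "too hot"
    return "just right"

# One representative value per partition, plus the boundaries on each
# side, so every class (and every edge) is exercised at least once.
test_values = [20, 39, 40, 50, 60, 61, 90]

for t in test_values:
    print(t, classify_temperature(t))
```

The payoff is exactly what the interview describes: instead of millions of possible values, you test a handful of deliberately chosen ones, confident that each class of behavior has been exercised.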

So Goldilocks teaches us about domain analysis and partitioning, and that almost any variable can be partitioned into different classes depending on who's using it. The baby bear, mother bear and father bear might all be happy about the way they're looking at things; the three different perspectives give three different partitions. So we always try to focus testing by taking the millions of possible values a variable can have and partitioning them into useful classes, from which you can pick interesting values to test. Breaking things up into classes helps you prioritize. It helps you focus on what matters, or even deliberately pick things not to test.

How about one of the Looney Tunes examples you use in your teaching?
A classic example is the duck season/rabbit season clip: Daffy Duck vs. Bugs Bunny. The duck is trying to trick Elmer Fudd into hunting the rabbit; the rabbit is trying to trick Elmer Fudd into hunting the duck. There are tons of testing lessons in it. One is that just because you're dressed like a hunter doesn't mean you're a hunter. Elmer Fudd can't tell the difference between a duck and a rabbit; he's asking the duck if it's a duck, or the rabbit if it's a rabbit. Just because a tester has a certification doesn't mean they're a tester. It's their ability to apply their experience and knowledge to the problem at hand that makes them a tester, not the fact that they're wearing a uniform or got a sticker from someone.

What is the hardest lesson for testers to learn?
Humility. The hardest thing for me to teach has been notions like applying math to your work, or taking things you've learned in the past and applying them to new problems. So many people don't think to apply what they've already learned. I hope to enable that when I work with them.
