I had planned an entirely different topic for this month, but I'm sitting down to write this on Father's Day while my sons (Nicholas, age 8, and Taylor, age 4) are napping, and realizing that I've never written about what I have learned about testing from my boys.
Before I share some of these lessons, let me first share a little about me and fatherhood. For all of the dedication, time and passion I give to my career, it is not even comparable to the dedication, time and passion I give to my boys. For example, I stopped consulting for a while so I could see my boys every day when they were young because I couldn't stand the thought of being on the road for their first steps, new words, and all of the other developmental wonders that occur on almost a daily basis during the first several years of life.
When I went back to consulting, I started my own company -- not because I wanted to run a company, but because I didn't want to have to answer to anyone else when I chose to not travel during baseball season so I could coach my son's team. In the same spirit, when I work from home, I frequently do so in a room with my boys, who are naturally curious about what I'm doing. Over the past few years of this, I've learned a lot of things about being a good tester from them. Some of the most significant are these:
Don't be afraid to ask "Why?" As testers, we know to embrace our natural curiosity about what applications or systems do and how they do it, but we have a tendency to forget to ask why someone would want an application or system to do it. We tend to assume that if the application or system does something, some stakeholder wants the application or system to do that thing.
My children make no such assumption. Every time I start explaining to them what the application or system I'm testing does, they ask why it does what it does. No matter how many times they ask, I'm always surprised when I find that I don't have a good answer. More important, I'm amazed by the number of times, after realizing that I don't have an answer and posing the question to a stakeholder, I find that they don't know either. In my experience, this has a strong positive correlation with rather significant changes being made to the design or functionality of the application or system.
Exploratory play is learning. Over the years, I have found that many testers seem to limit their testing to what they are instructed to test. My children have no such "testing filter." They are forever asking me what a particular button does or asking if they can type or use the mouse. Invariably, they find a behavior that is obviously a bug, but that I'd have been unlikely to uncover. The most intriguing part is they can almost always reproduce the bug, even days later. They may not have learned to do what the application was intended to do, but they have learned how to get the application to do something it wasn't intended to do -- which is exactly what we as testers ought to do.
Recording your testing is invaluable. When Taylor was younger, he couldn't reproduce the defects he found. All he knew was to call me when the things he saw on the screen stopped making sense. Recently, I found a solution to this (since we all have trouble reproducing bugs, at least sometimes). I now set up a screen and voice recorder so that after I'm done with a test session, I can play back and watch the narrated video of the session. I can even edit those videos and attach segments of them to defect reports. As a bonus, Taylor loves sitting with me to watch the videos and listen to his own voice -- recordings of him testing while I took a phone call, or did whatever else called me away and left him alone at the keyboard.
"Intuitive" means different things to different people. The more we know about what a system or application is supposed to do, the more intuitive we believe it is. My boys not only don't know what the applications and systems I test are supposed to do, but things like personnel management, retirement planning and remote portal administration are still a bit beyond them. That said, showing them a screen and asking, "What do you think Daddy is supposed to do now?" can point out some fascinating design flaws. For example, even Nicholas, who now reads well, will always tell me that he thinks I'm supposed to click the biggest or most brightly colored button or that he thinks I'm supposed to click on some eye-catching graphic that isn't a button or link at all. In pointing this out, he is demonstrating to me that the actions I am performing are unlikely to be intuitive for an untrained user.
Fast enough depends on the user. I often talk about how users judge the overall goodness of response time based on their expectations. My children expect everything to react at video game speed. They have absolutely no tolerance for delay.
You can never tell what a user may try to do with your software. When pointing out a bug that results from an unanticipated sequence of activity, we are often faced with the objection of "No user would ever do that." (Which James Bach interprets to mean, "No user we like would ever do that intentionally.") Interestingly enough, that objection melts away when I explain that I found the defect because one of my boys threw a ball that landed on the keyboard, or sat down and started playing with the keyboard when I got up to get a snack.
Sometimes the most valuable thing you can do is take a break. Granted, my boys didn't teach me this directly, but I have learned something from sitting in front of my computer, frustrated by my inability to make progress while jealously listening to them play: taking a break to go play with them almost always brings me back to the computer refreshed and with new ideas.
Speaking of taking a break, my boys are waking up from their nap, so I think I'm going to go play for a while.
About the author: Scott Barber is the chief technologist of PerfTestPlus, executive director of the Association for Software Testing and co-founder of the Workshop on Performance and Reliability.