Recently, fellow SearchSoftwareQuality.com expert David Christiansen shared his post about experiences with testing ruts that he gets into and what he does to stay out of those ruts.
What resonated with me was his description of how he sometimes doesn’t feel like working to isolate bugs:
“You did x, y, and z and the app crashed, so you filed a bug report and moved on. Does it crash with just x? Are there variants of y and z that don’t make it crash? How do they work together? If you don’t know and don’t care, you need to power up.”
Dave points out what I believe is an important step for software testers. I’ve seen many testers encounter what could be a critical issue, log a defect ticket in passing with a shallow description of the problem, and move on. To be fair, I’ve done it too. When this happens, I find there are often two outcomes:
- The issue isn’t looked at immediately, or even fixed, because the description is vague, looks like an edge case, and doesn’t have clear implications past the immediate problem identified in the ticket.
- The tester misses out on a deep and rich opportunity to learn more about the application, how it was developed, and what dependencies it has. I find that some of my most insightful observations about the system, how it works, and how that relates to the testing I’m doing come from isolating defects.
While you don’t need to track down a possible issue to the offending line of code, I think a tester should be able to draw a clear chalk outline around the issue. That means being able to say, with some confidence, under what conditions it does and doesn’t occur, and what minimal set of conditions appears to trigger it. If they can, they should talk about potential impact – but only if it’s data-driven analysis and relevant to getting the issue fixed.
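The idea of a “minimal set of conditions” can even be mechanized. As a rough sketch (the `crashes` predicate below is a hypothetical stand-in for re-running the app with a given subset of steps and checking whether the bug still appears), you can greedily drop steps that turn out not to matter:

```python
def isolate(steps, crashes):
    """Greedily drop steps that aren't needed to reproduce the failure.

    'crashes' is a callable that re-runs the scenario with the given
    subset of steps and returns True if the bug still reproduces.
    """
    reduced = list(steps)
    changed = True
    while changed:
        changed = False
        for step in list(reduced):
            candidate = [s for s in reduced if s != step]
            if crashes(candidate):  # bug still reproduces without this step
                reduced = candidate
                changed = True
    return reduced

# Hypothetical example: suppose doing x and z together is what crashes the app.
def crashes(steps):
    return "x" in steps and "z" in steps

print(isolate(["x", "y", "z"], crashes))  # ['x', 'z']
```

This is only a simplified cousin of delta-debugging-style reduction, but even done by hand, the same loop – remove a condition, re-test, keep the smaller repro if it still fails – is what turns “x, y, and z crashed the app” into a chalk outline.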
To that end, the following tips might be helpful for when you’re working to isolate an issue:
- Take excellent notes and keep all the evidence. This includes test execution notes in a notepad, screenshots or screen recordings, copies of log files, snapshots of disks or virtual machine images, and so on.
- Work to recall what you were doing before you found the problem. Often, if the cause of the problem isn’t obvious, something you did five steps earlier triggered what you saw. If you can find the deviant step, try variants of that activity to see how else the problem manifests itself.
- If the investigation goes on for more than a day, find a place to share information about the problem with the rest of the team (a wiki, SharePoint, or a defect tracking tool). I often find it useful to keep lists of the following information:
- a list of the symptoms of the problem
- a list of what variables you’ve looked at already as you try to reproduce the issue
- a list of the variables you haven’t looked at yet but suspect might be related
- a list of who you’ve spoken with about the issue (and any details they provided)
- a list of possible workarounds
- a list of possible tools (or techniques you may not know or be good at) that might help
At some point, it’s important to recognize that with any difficult problem, you’ll need some stopping heuristics. Of course, the one we all want to use is, “I found the problem.” However, sometimes that doesn’t happen. Make sure you have a clear idea of how important the problem is and how much time you can dedicate to it, so that at the appropriate time you can drop it or shelve it for later.
For more on this topic, and dealing with other testing ruts, be sure to check out Dave’s entire post on testing ruts and how he deals with them.