In my current organization, we've historically had issues with regression testing or, as we call it, retesting of defect fixes. This is a problem that plagues many software organizations. In this tip, I cover how we've improved our defect regression, or retesting, process with a three-step approach.
What goes wrong in regression? Here's the most common scenario:
- A tester or a customer finds and logs a defect.
- Development fixes the defect and returns it to the tester.
- Next, the tester tests the fix and declares it complete.
- Unfortunately, the first time a customer sees the code, the defect is re-opened!
Why does this happen over and over again? Generally it happens because the tester regressing the defect fix retests only the specific steps reported in the bug. The tester doesn't think systemically. This bug bounce is annoying to developers, testers, management and the customer alike.
So, what's the fix? The bug bounce game can be prevented with a simple team-wide strategy. Here are three steps to do it.
Three steps to solid regression testing
On my teams, we have a three-axis approach to defect regression/retesting: first, we test the repro steps; second, we test around what the user was trying to do; and, finally, we test around the code change.
- First, in our organization we retest a defect by following the exact repro steps. Why? Well, there is no sense in fixing a defect if you don't address the steps reported by the user! I ask testers to follow the steps very carefully. Surprisingly often, we find the developer didn't address the issues as reported by the customer. Nothing irritates a customer like reporting a bug, being told it has been fixed, and finding out it wasn't! This is your first impression on a defect report, so make sure the customer feels good about your fix. Be very attentive to environmental issues like operating system configuration, browser version, etc. Set up as similar an environment as possible in order to retest the defect.
- Next, I tell my teams to think about what the user was doing when they reported the bug, and to test around that activity. For instance, if the user opened a bug because Microsoft Word's font dialog crashed when Bauhaus 93 was selected, I expect the tester to regress the exact steps: selecting Bauhaus 93. I also expect the tester to test around the change, selecting other TrueType fonts and making sure they all work. Check a few other font types, too, such as PostScript and bitmap fonts. Once again, it's critical that testers think about what the user was doing. For example, maybe the user reported having text selected, so try selecting a word, two words, part of a word, a sentence, a sentence plus part of another sentence, an entire document, multiple discontiguous selections, etc. Dig deep and poke around the change. This is where you bring to bear concepts such as equivalence-class partitioning, boundary conditions, permutations and other categorization techniques.
- Finally, I require that my teams work with development to understand the change from a coding perspective: How was the code fixed? What changes were made, and where? Perhaps the bug was reported against changing fonts, but the actual code change was in the selection redraw layer. If that's the case, I'd expect my testers to dig deep into selection redraw. Make sure selection redraw works on a document repaint: moving the window, scrolling the document, changing pages in print view, and so on. Make sure it works when applying style changes. Try it with functions such as spelling and grammar checking.
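The first step's emphasis on matching the reported environment can be made mechanical. Here is a minimal sketch, assuming a hypothetical `Environment` record (the fields and values are illustrative, not from any real defect tracker), of a check that flags where a retest rig differs from the configuration in the defect report:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Environment:
    """Hypothetical snapshot of the configuration a defect was reported in."""
    os: str
    browser: str
    browser_version: str

def environment_mismatches(reported: Environment, rig: Environment) -> list:
    """Return the fields where the retest rig differs from the environment
    in the defect report, so the tester can reconcile them before
    declaring the fix verified."""
    return [
        field for field in ("os", "browser", "browser_version")
        if getattr(reported, field) != getattr(rig, field)
    ]

reported = Environment(os="Windows 10", browser="Edge", browser_version="109")
test_rig = Environment(os="Windows 11", browser="Edge", browser_version="109")
print(environment_mismatches(reported, test_rig))  # ['os']
```

A check like this is cheap insurance: an empty result means the repro steps are being replayed under conditions the customer would recognize.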
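The second step's equivalence-class thinking can be sketched in code. The font classes, selection classes, and the `apply_font` stub below are all hypothetical stand-ins for driving the real application; the point is enumerating one representative set of combinations around the reported repro rather than retesting only Bauhaus 93:

```python
# Equivalence classes drawn around the hypothetical font-crash fix.
FONT_CLASSES = {
    "truetype": ["Bauhaus 93", "Arial", "Times New Roman"],
    "postscript": ["Helvetica", "Courier"],
    "bitmap": ["Terminal"],
}

SELECTION_CLASSES = [
    "one word", "two words", "part of a word",
    "a sentence", "whole document",
]

def apply_font(font: str, selection: str) -> bool:
    """Stand-in for exercising the application; True means no crash."""
    return True  # a fixed build should survive every combination

# Cross every font class with every selection class and collect failures.
failures = [
    (font, selection)
    for fonts in FONT_CLASSES.values()
    for font in fonts
    for selection in SELECTION_CLASSES
    if not apply_font(font, selection)
]
print(f"{len(failures)} failing combinations")  # 0 failing combinations
```

In a real suite each combination would be a parameterized test case; the table of classes is the artifact worth reviewing with the team.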
To recap, approach defect retesting from three axes. First, walk through the steps recorded in the defect itself. Second, approach it from the user's perspective, asking what the user did to get into this state and exploring what else the user could have done. Finally, retest where the code was changed, as well as any functionality that might be affected by a change in the data structures or the like.
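The third axis, retesting around the code change, can also be captured as data the team maintains with development. The layer names and scenarios below are hypothetical, modeled on the selection-redraw example above; the idea is that once development names the layer they touched, the retest plan falls out of a lookup:

```python
# Hypothetical mapping from the code layer a fix touched to the
# user-visible scenarios a tester should exercise around that change.
AFFECTED_SCENARIOS = {
    "selection_redraw": [
        "move the window",
        "scroll the document",
        "change pages in print view",
        "apply a style change to the selection",
        "run spelling/grammar check over the selection",
    ],
    "font_dialog": [
        "open and cancel the dialog",
        "select a font from each font class",
    ],
}

def retest_plan(changed_layers):
    """Flatten the scenario lists for every layer touched by the fix."""
    plan = []
    for layer in changed_layers:
        plan.extend(AFFECTED_SCENARIOS.get(layer, []))
    return plan

print(len(retest_plan(["selection_redraw"])))  # 5
```

Keeping a table like this under version control gives testers a shared, reviewable answer to "what else could this change have broken?"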
Implementing this process has had a significant impact on our team. True, it has increased the time required to work through the retesting of fixed defects; however, overall it's had a positive impact on our development cycle. Defects are bouncing less, and we are reworking code less. Our overall time to release is down. Most important to me, customer satisfaction is on the rise and our customers' confidence in our ability to deliver on-time and with quality is going up significantly.
About the author: John Overbaugh, Director of Quality Assurance for Medicity, Inc., is a test leader with 13 years of experience in product and project IT, focusing on quality and defect prevention. John's background covers pretty much everything from consumer applications to high-availability enterprise server applications and highly scalable Web services. John's strengths and key experiences include test strategy, outsourcing/offshoring and the test process. His emphasis is effective and efficient software engineering.