I've found that a lot of teams getting started with exploratory testing struggle with figuring out where regression testing fits. In most traditional testing teams, regression tests are often reused manual test scripts or the automated versions of those manual tests. With exploratory testing, you don't have the option of reusing a manual test script later, or of handing it off to someone for automation.
Many teams doing exploratory testing leverage test automation done by developers. These could be automated unit tests, automated acceptance tests, or some other format, but they all play a role in regression as the development team makes changes. Also, in larger companies, the automation team might be completely separate from the manual testing team, and they might still be building out regression tests. In the several years I worked writing automated tests, I never created an automated test based on a manual test script. So I know those teams don't need your manual test scripts in order to add value by creating regression tests.
If I'm doing both the automation and the exploratory testing, I'll often automate any regression tests I think I might need right after I execute a test session. My typical exploratory testing session is around forty-five minutes long. While I'm testing, all I'm doing is taking notes. Afterwards I log all my defects, and then I'll automate any tests that I feel might be useful for regression. I know Dave Christiansen, fellow SearchSoftwareQuality.com expert, has the cool habit of recording Selenium tests to reproduce his defects after he's finished an exploratory test session. He then attaches the test to the defect he submits, so the developer can use it both to reproduce the issue and to know when the issue has been fixed.
If automation isn't an option for your regression testing, here are some tips that I've found useful for managing regression while doing exploratory testing:
- Flag charters for regression testing. In many test management tools, you can add custom attributes to test cases. Using these attributes, you can flag test cases for various things - like 'candidate for automation,' 'candidate for regression testing,' or 'pending review.' If you manage your charters in a tool like that, you can do the same thing for regression testing. Even if you aren't using such a tool, you can likely come up with some naming or storage convention to accomplish the same thing.
You can also identify a convention for flagging certain parts of the execution notes for regression purposes. That way you don't have to review all the notes. If the team has shorthand for identifying where to scan notes for regression testing notes, it can save a lot of time. In these cases, when someone goes to run regression tests, they would pull up the already executed charters and their notes, look for the areas marked for regression, and re-execute those portions of the sessions.
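The flagging conventions above can be sketched in code. This is a minimal illustration, not any particular test management tool's API: the `Charter` record, the tag name 'regression-candidate,' and the `[REG]` note marker are all invented for the example.

```python
from dataclasses import dataclass, field

# Hypothetical charter record with custom attributes, as a team might
# model them outside a test management tool.
@dataclass
class Charter:
    title: str
    notes: str = ""
    tags: set = field(default_factory=set)

def regression_candidates(charters):
    """Return only the charters flagged for regression testing."""
    return [c for c in charters if "regression-candidate" in c.tags]

def regression_notes(charter, marker="[REG]"):
    """Pull out just the note lines marked with the team's shorthand,
    so a reviewer doesn't have to scan the whole session's notes."""
    return [line for line in charter.notes.splitlines() if marker in line]

charters = [
    Charter(
        "Explore transfer destinations",
        notes="Checked menu layout\n[REG] Transfer to billing queue works",
        tags={"regression-candidate"},
    ),
    Charter("Probe login error handling", tags={"candidate-for-automation"}),
]
picked = regression_candidates(charters)
```

When someone runs regression, they'd pull up `picked`, call `regression_notes` on each charter, and re-execute just those marked portions.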
- Create checklists to facilitate regression testing. At CAST 2008 Cem Kaner gave a keynote presentation on the value of checklists to software testing. In the presentation, he outlined several different types of checklists - from lists to help with data collection, to lists for heuristic triggering, to procedural checklists. Regression testing checklists can be a lightweight method of storing regression testing scope, in a way that's easy to review and communicate.
When I think about regression, I often think about a checklist of items I'll need to look at each time before we ship. For example, I currently work with call-center software. So today, before I ship software, I work with my team to regression test transfer destinations. Those destinations change often, and it's something that's easily checked with a checklist that's kept up to date. Take some time to figure out whether the most critical (and regression-risky) areas of your application lend themselves to a simple checklist.
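A regression checklist can be as lightweight as a plain list paired with pass/fail results. The sketch below is illustrative only; the checklist items are invented examples modeled on the call-center scenario above, not actual checks from that product.

```python
# Hypothetical pre-ship regression checklist for call-center software.
checklist = [
    "Verify each transfer destination routes correctly",
    "Confirm caller ID carries through on transfer",
    "Check voicemail fallback when a destination is busy",
]

def failing_items(items, results):
    """Pair each checklist item with its pass/fail result and report
    anything still failing before ship."""
    return [item for item, ok in zip(items, results) if not ok]

# Example run: the second check failed during this regression pass.
failures = failing_items(checklist, [True, False, True])
```

Because the list is just text, it's easy to review in a team meeting and cheap to keep up to date as the application changes.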
- Create test beds to facilitate regression testing. Another form of prep for regression testing (along with a fair bit of regression test management) is to closely manage test beds specifically for regression testing. For example, if you're testing in an SOA environment, often you can simply set aside some test case XML for regression testing purposes. That test bed then represents the tests you'll want to run in each regression cycle. For other projects, this might be account information, order numbers, or some other data set aside for regression purposes. Once you've iterated through all the data, your regression testing is done. The data drives the testing.
Of course, all of that assumes you need to reuse artifacts to do regression testing. Many times, when I'm running a regression, I just create new charters focused on regression. If I need to do some research in preparation for those charters, I do that in the same way I would for any other round of exploratory testing.
What's important isn't how you do your regression testing. What's important is that you retain the ability to recognize when you need to do regression testing, and that you have some method for structuring the work in a way that's effective for your team. If you're doing exploratory testing, try some of the methods I've outlined above. I favor automation when I can use it, but even without it, checklists and test beds can be easy-to-manage alternatives.