How do you approach creating and organizing maintainable test suites, and refactoring and reorganizing pre-existing large test suites for future maintainability?
Size matters only in terms of the overall time investment required to organize the work. Your approach to organizing the test suite should focus on creating a useful framework that prioritizes tests based on risk to the software product.
Start by organizing tests into functional folders: collections of tests that exercise the same feature or function. Then create a risk analysis comparison to help judge which functional areas pose a higher risk to the quality of your software.
In a healthcare application, for example, a functional folder of tests on how medication interacts with allergies has a higher risk factor than a functional folder for configuring display fonts and text sizes. One concerns patient safety and the other a less important display option.
A risk analysis grid can be created as a spreadsheet, table or document within the tool set the company already has. The risk analysis provides three advantages for organizing and maintaining test suites:
- A priority of test execution based on functional area folders.
- A priority of which tests are maintained first.
- Tests can be located by functional area.
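A risk grid like this can live in a plain spreadsheet, but the scoring idea is easy to sketch in code. The example below is illustrative only (the functional areas, likelihood, and impact values are made up, not from the column); it scores each area as likelihood times impact and sorts folders into an execution-priority order, mirroring the healthcare example above.

```python
import csv
from io import StringIO

# Hypothetical risk grid: functional area, likelihood (1-5), impact (1-5).
GRID_CSV = """area,likelihood,impact
medication_allergy_interaction,4,5
order_entry,3,4
display_fonts,2,1
"""

def prioritized_areas(csv_text):
    """Return functional areas sorted by risk score (likelihood * impact)."""
    rows = list(csv.DictReader(StringIO(csv_text)))
    for row in rows:
        row["score"] = int(row["likelihood"]) * int(row["impact"])
    return sorted(rows, key=lambda r: r["score"], reverse=True)

for row in prioritized_areas(GRID_CSV):
    print(f"{row['area']}: {row['score']}")
# medication_allergy_interaction scores highest (20); display_fonts lowest (2).
```

The same ordering answers both questions the grid raises: which folders to execute first under time pressure, and which folders' tests to maintain first.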
What I've done in the past on large and ever-growing regression test suites is scrutinize them for invalid, low-priority or duplicate test objectives at the end of each major release cycle. The first time, I went through approximately 7,500 tests and inactivated approximately 1,800 because they duplicated existing functional tests. Most had been written for specific defect fixes even though a test covering the same objective already existed.
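That kind of pruning can be partially automated. As a rough sketch (the test records and matching rule below are hypothetical, and real duplicate review still needs human judgment), grouping active tests by a normalized objective string flags defect-fix copies as candidates to inactivate.

```python
from collections import defaultdict

# Hypothetical test records: (test_id, objective, active).
tests = [
    (101, "Verify allergy alert fires for penicillin", True),
    (205, "verify allergy alert fires for penicillin", True),  # defect-fix copy
    (310, "Configure display font size", True),
]

def duplicate_candidates(records):
    """Group active tests sharing the same normalized objective.

    The first test per objective is kept; any later test with the same
    objective is returned as a candidate for inactivation.
    """
    by_objective = defaultdict(list)
    for test_id, objective, active in records:
        if active:
            by_objective[objective.strip().lower()].append(test_id)
    return {obj: ids[1:] for obj, ids in by_objective.items() if len(ids) > 1}

print(duplicate_candidates(tests))
# flags test 205 as a duplicate of 101
```

Exact-match grouping only catches the most blatant copies; it is a starting point for the manual scrutiny described above, not a replacement for it.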
Maintaining test cases so they remain valid is difficult. My suggestion is to inspect test suites quarterly or as time permits within the application release cycle. This practice can be incorporated into regression execution or carried out as a separate activity where new test cases written for the latest features are merged into the existing test suite. Inevitably, newer tests will replace existing tests as workflow and features change.
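One lightweight way to support a quarterly inspection cadence is to record when each test was last reviewed and surface the stale ones. The sketch below assumes a simple mapping of test IDs to last-review dates (all names and dates are invented for illustration) and flags anything not inspected within roughly a quarter.

```python
from datetime import date, timedelta

# Hypothetical records: test id -> date the test was last reviewed.
last_reviewed = {
    "med-allergy-001": date(2023, 1, 10),
    "fonts-002": date(2022, 3, 5),
}

def due_for_review(reviews, today, max_age_days=90):
    """Return test ids not inspected within the last ~quarter (90 days)."""
    cutoff = today - timedelta(days=max_age_days)
    return sorted(tid for tid, seen in reviews.items() if seen < cutoff)

print(due_for_review(last_reviewed, today=date(2023, 2, 1)))
# -> ['fonts-002']
```

Running a report like this at the start of each regression cycle turns "as time permits" into a concrete, bounded work list.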