Retesting has two stages. First comes a focused phase in which the fixed defect itself is tested. For instance, if the defect is that a shopping cart calculates tax incorrectly, the retest covers the various equivalence classes of tax calculation: in-state, out-of-state, no sales tax, multiple taxes summed, and so forth. The second phase is a wider pass that examines downstream dependencies; in the shopping cart example, these include cart totals, billing, and receipt/purchase confirmation presentation. By confirming both the direct functionality and any related functionality, retesting should be a smooth process with few surprises, and defect fixes should land without degrading existing application functionality.
Our definition of regression testing has a much broader scope: it's about retesting an entire application after new functionality has been introduced. Let's return to our e-commerce application for an example, and assume we've introduced new functionality that integrates with resellers. Now our Web application needs to determine the source of an item (our company or a reseller), and it needs to send order fulfillment information to that source. With all the new functionality coded, tested, fixed and retested, we're nearing the release date. The final step is to ensure the new functionality plays well with all of the existing functionality. To do this, we regression test the site.
The key to successful regression testing is a pre-planned but evolving set of regression test cases for each major piece of functionality. These regression suites are based on critical functionality in each feature area. For instance, the e-commerce application will have minor test cases such as display and layout, but it'll also have major cases like associating the correct product ID, description and cost. A simple rule of thumb is to consider each piece of functionality in a feature and ask, "If this were broken, would we roll back a release?" A good test organization will use this approach to identify up front what the most important cases are. The team will then document these cases so there's no confusion when it comes time to execute the regression pass. A stellar team will invest in robust, flexible automation that can execute this regression pass on a frequent and regular basis, ensuring the application code base stays stable and at a consistent releasable quality.
In a nutshell, retesting is a limited, scoped effort which focuses on individual changes introduced by defect fixing. Regression testing is a broad effort which validates overall application functionality after a major development effort has introduced new features.