
The difference between software retesting and regression testing

Software test consultant John Overbaugh explains the difference between retesting and regression testing in this expert response. Overbaugh uses an example of a shopping cart tax calculation bug to help explain the difference between retesting and regression testing the fix. He concludes with some valuable tips for successful regression testing.

How is retesting different from regression testing? In regression testing we retest the test case, so is regression testing a subset of retesting?
This is mostly a semantics issue. Different teams use "retesting" and "regression testing" interchangeably; I'm even guilty of confusing the two on occasion. For this Ask the Expert response, let's assume retesting is the act of validating a bug fix, whereas regression testing is the act of testing a modified application before release.

Retesting has two stages. First, there is a focused phase in which the fixed defect itself is tested. For instance, if the defect is that a shopping cart calculates tax incorrectly, the retest addresses the various equivalence classes of tax calculation: in-state, out-of-state, no sales tax, multiple taxes summed, and so forth. The second phase is a wider pass in which downstream dependencies are examined. In the shopping cart example, the downstream dependencies would be cart totals, billing, and receipt/purchase confirmation presentation. By confirming both the direct functionality and any related functionality, retesting should be a smooth process with few surprises, and the defect fix should land without negatively impacting application functionality.
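The focused phase of that retest can be sketched as one assertion per equivalence class. This is a minimal illustration: the `calculate_tax` function, its rate values, and the use of a simple rate list are all assumptions made for the example, not part of any real cart API.

```python
# Hypothetical tax calculator standing in for the shopping cart's fixed code.
# The function name and the rates below are illustrative assumptions.
def calculate_tax(subtotal, rates):
    """Sum each applicable tax rate applied to the subtotal, rounded to cents."""
    return round(sum(subtotal * r for r in rates), 2)

# Focused retest: one case per equivalence class of the original defect.
assert calculate_tax(100.00, [0.06]) == 6.00        # in-state, single rate
assert calculate_tax(100.00, [0.04]) == 4.00        # out-of-state rate
assert calculate_tax(100.00, []) == 0.00            # no sales tax
assert calculate_tax(100.00, [0.06, 0.01]) == 7.00  # multiple taxes summed
```

The wider second phase would then re-run the cart-total, billing, and receipt checks that consume this value downstream.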

Our definition of regression testing has a much broader scope: it's about retesting an entire application after new functionality has been introduced. Let's return to our e-commerce application for an example, and assume we've introduced new functionality to the application which integrates with resellers. Now our Web application needs to determine the source of an item (from our company or from a reseller), and it needs to send order fulfillment information to that source. With all the new functionality coded, tested, fixed and retested, we're nearing the release date. The final step is to ensure the new functionality plays well with all of the existing functionality. To do this, we regression test the site.
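The new routing decision at the heart of that reseller integration can be sketched in a few lines. The item fields and the fulfillment targets here are assumptions for illustration only; a real integration would involve catalog lookups and an order-fulfillment message to the source.

```python
# Sketch of the new source-routing logic: decide whether an ordered item
# is fulfilled in-house or by a reseller. Field names are assumptions.
def fulfillment_target(item):
    """Return the fulfillment destination for a single order line."""
    if item.get("reseller_id") is None:
        return "in-house"
    return f"reseller:{item['reseller_id']}"

assert fulfillment_target({"sku": "A100"}) == "in-house"
assert fulfillment_target({"sku": "B200", "reseller_id": 42}) == "reseller:42"
```

It is exactly this kind of new branch, touching every order placed, that makes a full regression pass over the existing checkout functionality worthwhile.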

The key to successful regression testing is a pre-planned, but changing, set of regression test cases for each major piece of functionality. These regression suites are based on critical functionality in each feature area. For instance, the e-commerce application will have minor test cases such as display and layout, but it'll also have major cases like associating the correct product ID, description and cost. A simple rule of thumb is to consider each piece of functionality in a feature and ask, "If this were broken, would we roll back a release?" A good test organization will use this approach to identify up front what the most important cases are. The team will then document these cases so there's no confusion when it comes time to execute the regression pass. A stellar team will invest in robust, flexible automation which can execute this regression pass on a frequent and regular basis, ensuring the application code base stays stable and at a consistent releasable quality.
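The "would we roll back a release?" rule of thumb can be captured directly in the suite itself, so an automated pass can run release-blocking cases first and frequently. The decorator-based registry below is one possible sketch; the names and the two sample cases are invented for illustration.

```python
# Minimal sketch of a regression-suite registry built around the
# "would we roll back?" question. All names are illustrative assumptions.
REGRESSION_SUITE = []

def regression_case(blocks_release):
    """Register a test, flagging whether its failure would block a release."""
    def register(fn):
        REGRESSION_SUITE.append((fn, blocks_release))
        return fn
    return register

@regression_case(blocks_release=True)
def test_product_id_description_cost_match():
    pass  # major case: wrong product data would force a rollback

@regression_case(blocks_release=False)
def test_cart_page_layout():
    pass  # minor case: a layout glitch might ship as a known issue

# A frequent automated pass runs the release-blocking cases first.
blocking = [fn for fn, blocks in REGRESSION_SUITE if blocks]
for case in blocking:
    case()
```

Documenting the flag alongside each case keeps the team's priorities explicit when it is time to execute the regression pass.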

In a nutshell, retesting is a limited, scoped effort which focuses on individual changes introduced by defect fixing. Regression testing is a broad effort which validates overall application functionality after a major development effort has introduced new features.
