

Do you have to choose between smoke and regression tests?

Software testing options abound. In the battle of smoke versus regression tests, which scenario fits best? Expert Amy Reichert explains how and when to use both for the best results.

Smoke test development and regression test development are related and similar, but each takes a slightly different angle. The biggest differentiating factor in smoke versus regression tests is the depth and scope of the test -- how far it goes through the application. Does it cover multiple configurations or a single set? These are the types of questions you need to answer in order to build valuable smoke and functional regression suites. First, define what your smoke tests verify. Do they simply confirm that the application installs and opens as expected, or do they go deeper than that?

Functional regression test cases are developed to test a full function -- or at least a piece of it -- and are then recombined to test the application from end to end in various possible scenarios. Granted, most applications contain so many scenario variations that it's impossible to test them all in a reasonable amount of time, let alone within the time built into a release schedule. For the functional regression suite, then, select the tests that provide the fullest functional coverage and go into more depth than your smoke test suite.

Let's say that, as a quality assurance team lead or manager, you have a large suite of functional regression tests, but none are defined as smoke tests. If your test cases are not prioritized, your first step is to prioritize which parts of the application are most critical. From there, you can determine which functional area each test represents. Next, look at defect statistics for the past two to three releases and see which areas those defects were found in. Determine whether one or more areas have a higher number of defect reports against them, and then whether any of your existing critical tests cover those areas.

Add the most critical of those tests to the smoke test suite. Keep the number under control, and keep in mind that you have to allow time for research if the tests fail. The smoke test suite should stay under 30 tests for manual execution and no more than 50 for automated execution. Add the remaining tests to the functional regression suite.
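The selection process above can be sketched in code. This is a minimal, hypothetical illustration, assuming each regression test record carries the functional area it covers and a criticality score, and that `defect_counts` maps areas to defects reported over the last few releases; none of these names come from the article.

```python
# Hypothetical sketch of promoting regression tests into a smoke suite.
# Fields ("area", "criticality") and the helper name are illustrative.

MAX_MANUAL_SMOKE = 30  # the article's cap for a manually executed smoke suite

def pick_smoke_suite(tests, defect_counts, limit=MAX_MANUAL_SMOKE):
    """Promote the most critical tests in defect-prone areas to the smoke suite."""
    # Rank functional areas by recent defect count, worst first.
    hot_areas = sorted(defect_counts, key=defect_counts.get, reverse=True)

    def rank(test):
        # Tests in defect-prone areas sort first; criticality breaks ties.
        if test["area"] in hot_areas:
            area_rank = hot_areas.index(test["area"])
        else:
            area_rank = len(hot_areas)
        return (area_rank, -test["criticality"])

    ranked = sorted(tests, key=rank)
    # Everything past the cap stays in the functional regression suite.
    return ranked[:limit], ranked[limit:]
```

The cap matters in practice: a failing smoke test triggers research time, so a suite that grows past the limit stops being a quick gate.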

It's advantageous to make the smoke and functional regression suites complement each other. In other words, they should both test the critical areas of the application, but to varying depths or degrees of detail. The simpler tests belong in the smoke test suite, while the more in-depth tests go into the functional regression suite. I don't believe in superficial testing within a smoke test suite -- if I only have time to execute the smoke test suite, I want to feel I've genuinely exercised the application. Superficial checks can easily be covered within functional regression tests. For the smoke test suite, focus on tests that provide immediate value and cover the functionality customers truly care about most.
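One common way to keep the two suites complementary in a single code base is to tag tests and select by tag at run time. Here is a minimal sketch using pytest markers; the marker names `smoke` and `regression` are my own convention, and `log_in` is a hypothetical stand-in for the application under test, not anything from the article.

```python
import pytest

# Hypothetical login helper standing in for the real application under test.
def log_in(user, password):
    return password == "correct-horse"

@pytest.mark.smoke
def test_user_can_log_in():
    # Shallow critical-path check: the feature works at all.
    assert log_in("amy", "correct-horse")

@pytest.mark.regression
def test_login_rejects_wrong_password():
    # Deeper functional detail, exercised only in the regression pass.
    assert not log_in("amy", "wrong")
```

With this layout, `pytest -m smoke` runs only the quick gate while a plain `pytest` run executes everything; registering the markers in `pytest.ini` avoids unknown-marker warnings.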

Next Steps

Use MTM to simplify your regression and/or smoke testing

Automate smoke testing in your CD environment

What continuous delivery means to smoke testing

Using risk analysis for regressive testing planning



Join the conversation



What criteria do you use to choose between smoke and regression tests?
I guess it’s a difference of definition. My working definitions imply that smoke testing is performed after deploying to a new environment -- whether dev, QA, staging, or production -- to ensure that the deployment was successful, whereas regression tests are used to check whether software has regressed to a previous state after changes have been made.

Definitions vary widely, so I guess it would be difficult to find two people who understand "regression testing" or "smoke testing" exactly the same way. I think what matters most is that the team is in agreement on what these terms mean for them.

As for criteria for selecting smoke and regression tests, I regard them as not that different in scope, but in grain. For me, smoke tests are a very small selection of powerful tests that exercise a lot of functionality, using multiple-factor-at-a-time approaches. However, in case of an unexpected result, the cause may not be immediately clear. That is ok.

Regression tests then are a larger collection of tests that exercise specific parts of the functionality in detail, using single-factor-at-a-time approaches. When something breaks, the regression tests should be able to pinpoint the issue.

I'm sure there are other definitions and approaches that might work great in their specific contexts. Will be interesting to read what others will post!
I don't choose between the two, I generally do both. A "smoke test" in my mind is a very quick manual test that I will do in the QA environment to make sure that nothing goes up in flames. 

Regression, in many cases, is well covered by automated tests, so there often isn't a ton for me to do in the way of regression. 
I treat the 'smoke' test as part of the full regression suite, but one run early, right after deployment to QA or production. I try to do a quick check on high-impact, high-risk areas to determine whether the software is stable and whether we can continue with the end-to-end regression tests.
"Smoke" testing is a metaphor. Regression testing is a burden.
Really, the rationale behind both is to explore and investigate the risks.

Testing deeply and thoroughly takes a long time. A quick, shallow, broad test may reveal whether a build or targeted function is broken and not worth testing deeply. We may call that a "smoke test".

Re-running old test cases in an attempt to find problems in existing functions is often called "regression testing". But we could do so much better than that! Why not target risks?
Just a sample of risk targeting questions for one situation: http://automation-beyond.com/2015/03/03/a-few-questions-to-ask-upon-a-bug-fix/
When I read and think about questions like this, I keep coming back to a primary question: what is a regression test? The answer had better not be "every test we ran in release n, plus its new features." Regressions are an unfortunate problem in some development environments, but the strategy I just described is a holdover from more structured -- and failed -- process models.

It shouldn't really be an either/or. It should be more a question of when a test pass that might qualify as a regression pass needs to happen. For me, the answer almost always comes down to when a valid risk or concern exists. (Simply always doing it isn't an answer.)
It would be interesting to see definitions of smoke and regression tests in the article, as well as sanity tests, post-promotion tests, user story tests, acceptance tests, and so on. While I can see the rational point in the text, its lack of context reduces its potential value.
Since definitions vary, I’ll use my working definitions. A regression test is a test we execute in an attempt to determine if the software has regressed to a previous, buggy state. A smoke test is a test we use when we want to try to determine if a deployment to a new environment was successful. Following the working definitions, no, we don’t have to choose between the two.
Since smoke test development and regression test development are related and similar, but with a slightly different angle, what are the similarities in reporting methodologies between smoke and regression testing? Please share if you have found any reporting templates. Thanks -Shanthamruthy.