What is the difference between sanity testing and smoke testing when we conduct these tests on our application?
There's no scientific definition for sanity testing and smoke testing, and I'm sure someone will take issue with this answer no matter how I phrase it. Regardless, I use these terms regularly in my test management. Smoke testing, to me, is testing performed daily after a new build has been created. Sanity testing, on the other hand, probes around an application after a fix has been made. This is an extension to, not a replacement for, regression testing.
I've always envisioned the phrase 'smoke testing' getting started back in the early twentieth century with the first few car manufacturers. Once the car had been assembled, someone oiled the lifters, poured some gas in the tank, and fired it up. As the car ran, they looked for smoke where it didn't belong. In IT, smoke testing is much the same. Grab the most recent build, fire it up, and run a series of very high-level tests, looking for major failures. My test organizations have all taken the same approach -- our smoke testing was broad and shallow. We probed each significant feature area, making sure it was functional and accessible. If the smoke tests passed, testers (or the infrastructure team) could invest time in deploying the latest build. Testing then continued, with each tester pushing deeper into their feature area.
Another use of smoke testing is to probe a configuration before running a long test pass. For instance, in my performance test work, I will have a build deployed, set up my tools, and run a quick pass (30 seconds to five minutes, depending on the size of the test) against everything at a low load/transaction rate. This is just to prove everything works before committing to the full run. There's nothing like getting a five-hour performance pass set up and kicked off, only to find the database is unresponsive or there's a problem with the VLAN somewhere!
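That low-load pre-flight pass can be sketched as a small harness like the one below. This is a minimal illustration, not a real tool: `send_transaction` is a stand-in for whatever your load generator actually does (an HTTP request, a database query), and the duration and rate values are arbitrary.

```python
import time

def smoke_pass(send_transaction, duration_s=5, rate_per_s=2):
    """Run transactions at a deliberately low rate for a short time,
    failing fast on the first error -- the 'is everything wired up?' check
    you run before committing to a multi-hour performance pass."""
    deadline = time.monotonic() + duration_s
    results = []
    while time.monotonic() < deadline:
        start = time.monotonic()
        try:
            results.append(send_transaction())
        except Exception as exc:
            raise RuntimeError(
                f"smoke pass failed after {len(results)} transactions: {exc}"
            )
        # Throttle to the target rate; this is a connectivity check, not a load test.
        time.sleep(max(0.0, (1.0 / rate_per_s) - (time.monotonic() - start)))
    return results

# Stand-in transaction; in practice this would hit the real system under test.
results = smoke_pass(lambda: "ok", duration_s=1, rate_per_s=5)
```

If this short pass fails, you have saved yourself the five-hour wait before discovering the environment was broken.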
The key to good smoke tests is that they are broad and shallow, and their goal is to just ensure basic functionality. They're an 'all clear' shout to the rest of the organization that they can jump in on the new build.
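The broad-and-shallow idea can be made concrete with a sketch like this. The feature names and check functions are hypothetical placeholders -- in a real suite each check would touch one major feature area just deeply enough to prove it is up.

```python
# One shallow availability check per major feature area (all hypothetical).
def check_login():
    return True  # e.g., the login page renders and accepts a known account

def check_search():
    return True  # e.g., a simple query returns results

def check_checkout():
    return True  # e.g., the checkout flow loads

SMOKE_CHECKS = {
    "login": check_login,
    "search": check_search,
    "checkout": check_checkout,
}

def run_smoke(checks):
    """Touch every major feature area once; report failures, don't dig deep."""
    return [name for name, check in checks.items() if not check()]

# An empty failure list is the 'all clear' shout for the new build.
failures = run_smoke(SMOKE_CHECKS)
```

The design point is the shape, not the checks themselves: one quick probe per feature area, run against every new build, with a single pass/fail answer for the rest of the organization.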
Sanity testing, on the other hand, is the testing I do after regressing a major fix right before release. I had a test manager who frequently referred to the things we do in test as 'healthy paranoia' and sanity testing is a perfect example. When a project is winding down to the finish, we start cutting release candidates. Test goes to work on those RCs -- it's funny, but no matter how hard we test during the development/stabilization cycle, we always seem to find bugs in RC mode. When a bug is found and accepted for fix, it's up to the test organization to regress that fix. Regression testing is a well-understood concept: it's the act of making sure a recent fix 1) fixed the problem as intended and 2) didn't cause new bugs in affected or related areas.
Sanity testing is the next stage in that process. After regressing a fix, that healthy paranoia kicks in and it's time for testers to probe the rest of the release looking for any potential downstream impacts. It also means making sure that any dependencies built correctly (i.e., if your application is split between an .exe and a few .dlls, the bug may have been fixed in the .exe, but it's still important to load each .dll and ensure it built correctly). Whereas smoke testing is generally scripted, focuses only on high-priority cases and is not intended to find low-priority bugs, sanity testing is generally ad hoc (unscripted), broad yet deep, and can find either high- or low-priority bugs. This is where experience, and a little paranoia, pays off. I have personally seen the strangest issues come up during my sanity testing, after deep regression testing yielded nothing.
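The "did every dependency build?" check lends itself to a tiny script. The sketch below expresses the idea in Python terms -- try to load every dependency and collect anything that won't even load. The module list is illustrative; for a native .exe/.dll split you would attempt each load with `ctypes.CDLL("name.dll")` instead of `importlib`.

```python
import importlib

def sanity_check_dependencies(module_names):
    """Attempt to load every dependency. A component that won't even load
    signals a bad build, regardless of where the actual fix landed."""
    failures = {}
    for name in module_names:
        try:
            importlib.import_module(name)
        except Exception as exc:
            failures[name] = str(exc)
    return failures

# Illustrative dependency list; substitute your application's real components.
failures = sanity_check_dependencies(["json", "sqlite3", "csv"])
```

An empty result means every component at least loads; anything else is exactly the kind of downstream build break sanity testing exists to catch.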
Another definition of the term 'sanity testing' is somewhat related. When a new operating system or other core dependency shipped, my teams in the past have run some form of testing. When our coupling to that dependency was loose, we'd refer to these tests as 'quick sanity checks.' For instance, I used to work in Mobile Devices at Microsoft, on the ActiveSync team. There are two components to ActiveSync -- there's the desktop (or server) component, and there is the device component. If the PocketPC team made a change to, for instance, Pocket Outlook, we would be sure to run a test pass -- if the change had little or nothing to do with actual inbound and outbound mail (say it was a fix to address book integration), we'd run 'a quick sanity pass' with feature owners validating their features. Rather than running through each and every test case, or picking a certain set of cases by priority, feature owners would simply carve out a chunk of the day and spend a few hours in focused, ad-hoc testing. The goal was to be comfortable that the changes made didn't affect our features. Sanity testing was only a viable option, however, when changes hadn't been made in our core code. If fixes were made within the Sync code, we would run a formal regression test pass -- and then sanity check other areas of our product.