By Elisa Gabbert, Associate Site Editor
When does it make sense to use an automated software testing tool over a manual tool? What do you stand to gain by using automated software testing tools? When is automated software testing a bad idea? Are any categories of automated testing tools still too bleeding-edge to adopt?
If you've ever asked yourself these questions, you've come to the right place. We asked five software quality and testing experts to provide advice and best practices on when to use manual versus automated software testing tools for a software testing project.
When should you use an automated software testing tool?
"'Should we use an automated tool?' is a question that usually arises whenever a QA analyst or IT firm is frustrated by the length of time it takes to finish projects and the overall cost that comes with the tools it's currently using," said John Scarpino, director of quality assurance and a university instructor in Pittsburgh, Pa. Cost, time and quality are not independent, he continued. "You cannot be successful with one unless you are successful with the other two. Of course, the goal is always to decrease costs and the time needed to do the job while maintaining a quality outcome," he said. Can automated software testing tools help you achieve that goal?
According to John Overbaugh, a senior SDET lead at Microsoft, "It only makes sense to use automated testing tools when the costs of acquiring the tool and building and maintaining the tests are less than the efficiency gained from the effort."
"Generally, this payoff comes in two forms," Overbaugh said. One form of payoff occurs when "executing the test manually is very difficult (as is the case with performance testing, where testers must simulate hundreds or thousands of concurrent users) or complex cases where setting up information is tedious or time consuming." A second payoff comes "when tests need to be run many, many times, for instance in a continuous-integration project where build verification tests (BVT) need to be run throughout the day or in a project which will release many times."
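Overbaugh's first payoff case, simulating many concurrent users, can be illustrated with a short sketch. This is not a real load-testing tool; `simulated_request` is a hypothetical stand-in for a call against the system under test, and a thread pool plays the role of the concurrent users.

```python
# A minimal sketch of simulating concurrent users with a thread pool.
# `simulated_request` is a stand-in; a real load test would exercise the
# actual system under test (e.g. over HTTP) and measure its responses.
from concurrent.futures import ThreadPoolExecutor
import time

def simulated_request(user_id):
    """Pretend to be one user's request against the system under test."""
    time.sleep(0.01)  # stand-in for network/processing latency
    return (user_id, "ok")

def run_load(concurrent_users=100):
    """Fire `concurrent_users` simultaneous requests and tally the results."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = list(pool.map(simulated_request, range(concurrent_users)))
    elapsed = time.perf_counter() - start
    failures = [r for r in results if r[1] != "ok"]
    return len(results), len(failures), elapsed

if __name__ == "__main__":
    total, failed, elapsed = run_load(100)
    print(f"{total} simulated users, {failed} failures in {elapsed:.2f}s")
```

Even this toy version makes Overbaugh's point: a human tester cannot manually play a hundred concurrent users, but a few lines of automation can.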
Certain kinds of tests are more appropriate for automated testing than others. "Perhaps the simplest opportunity is when a manual testing activity has become tedious and repetitive," said Kevlin Henney, an independent consultant and trainer. "When testing requires a methodical and repeated execution, that is better offered by machine than human. Humans are good explorers, but compared to computers they are incredibly poor at executing loops. If a testing activity appears to have become deskilled, use a machine to its best capabilities rather than making a monkey of human testers."
Code-facing tests are another good choice for automation. "If tests are code-facing, as opposed to testing a whole product, these should be automated. The code will be used by other code, so this is how it should be tested," Henney said. "Make such code-facing tests part of the build process. If such tests are driven manually, they will fail to clearly nail down the programmatic interface, and the tests have little value as project assets."
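Henney's code-facing tests can be as simple as a unit test file that the build runs on every commit. The sketch below uses Python's standard `unittest` module; `normalize_username` is a hypothetical production function invented for illustration.

```python
# A minimal code-facing test, runnable from the build
# (e.g. `python -m unittest`). The function under test is a
# hypothetical stand-in for real production code.
import unittest

def normalize_username(raw):
    """Example production function: trim and lowercase a username."""
    return raw.strip().lower()

class NormalizeUsernameTest(unittest.TestCase):
    def test_strips_whitespace(self):
        self.assertEqual(normalize_username("  Alice "), "alice")

    def test_lowercases(self):
        self.assertEqual(normalize_username("BOB"), "bob")

if __name__ == "__main__":
    unittest.main(exit=False)
```

Because the test exercises the programmatic interface directly, it nails down the contract other code depends on, which is exactly why Henney argues such tests belong in the build rather than in a manual script.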
Mike Kelly, a software development manager for a Fortune 100 company, said that in some cases, automation is the only way to go. "Sometimes the only way a test can be executed is via some sort of automation. Examples include load testing and traversing large amounts of paths through an application. Some things just can't practically be done manually," Kelly said.
Kelly also looks to automation "to solve trivial problems or tests" that his team executes frequently, such as smoke tests and repetitive verification of the same requirement in multiple places. Regression testing is also a good candidate for automation. "I'll look to automation for functional regression when I know that there's a relatively low cost to script maintenance and likely a high value on return when a regression issue is encountered," he said.
Once you've determined that using an automated software testing tool makes sense, you should implement the tool in the high-risk areas first, according to Scarpino. "These are the applications that tend to be the most difficult to install and maintain, and are usually the high-traffic applications that see a lot of wear and tear. Applications that are used only on occasion tend to be less affected by the switch to automation, and therefore should not be a priority," he said.
When should you use a manual software testing tool?
Using automated testing tools is a bad idea if you're not yet an expert at testing, according to Kelly. "I often don't think automation makes sense if you aren't first good at testing. I see a lot of teams focus on automation, as a cost-lowering technique, when they really need to focus first on testing the right things," he said. "Once a team is good at managing testing risk and test coverage and applying the right testing techniques, then talking about automation makes sense."
Overbaugh agrees that automated testing requires experience and knowledge. If necessary, seek outside help. "Automated testing doesn't make sense when a test team does not have the expertise to automate correctly. Spending days and weeks learning how to automate, making mistakes, writing brittle automation that will only work once, and coming out of the project without any usable tests is a huge waste of time," he said. "It's definitely important to bring in experienced talent when your test team is learning to automate -- either hire a more technical test engineer with experience, or bring in an experienced automation designer as a consultant for a short period of time. Do not reinvent the wheel -- it's a costly process."
Of course, cost and time should also be factors in your decision to use a manual vs. an automated tool. "It doesn't always make sense [to use automation] if the cost of maintaining the automation will be high. For example, GUI-level automation is often more expensive than it's worth, but certainly that's not always the case," Kelly said.
"It does not make sense to use automated testing tools if, during analysis, it is found that the time needed to create, maintain and run the scripts exceeds the time allotted to conduct quality testing of the application," Scarpino said. "Reviewing the rewards of cost, time and quality is again very important to look at for the creation of manual tests."
Overbaugh echoed those sentiments, and advised taking the length of the project into account. "Automated testing doesn't make sense on short-term projects where the expense of setting up automation exceeds the value," he said. "I have been involved in small projects that added a minor feature set with little or no code shared with other portions of the project and which would, in all likelihood, never be iterated on." Such projects can be completed "with a relatively straightforward implementation."
There are definitely types of testing that humans are still best at, according to Henney, and which aren't good candidates for automation. "Exploratory testing is [an area] where humans still retain an edge. Likewise, if you are testing for something that is inherently a human-perceived quality, such as usability, you need humans in the picture," he said. Machines, on the other hand, "are great at faultless repetition, but less effective at exploring and following hunches."
Which bleeding-edge automated testing tools are still too risky?
Asked if any category of automated testing tool is still too bleeding-edge to adopt safely, Scarpino answered, "Unit test automation tools. I believe that testing conducted by a software developer is too risky."
According to Overbaugh, the risk depends on "the experience and technical ability in your team," adding that he's still wary of some open source tools. "Emerging open source tools (no, I'm not going to offend anyone by naming names) might be a bit on the risky side, but if you have technical testers who can research and debug issues, you can work around those problems," he said.
Overbaugh also advised against adopting "monolithic test frameworks built by big companies" -- such frameworks "force teams into following their processes." Due to their expense and inflexibility, "I'm not convinced they are worth the investment," he said.
Kelly was less cautious. "As for bleeding-edge tools, I think it's all fair game," he said. "I often find that I'm writing automated tests from scratch in Ruby or Java, but for the bulk of the automation work there are many great tools available (both commercially and open source)."
Other considerations for choosing manual vs. automated testing tools
If time and cost are tight constraints, there's a new methodology in automation that is growing in popularity: "One emerging trend in test automation is to develop just-in-time, barebones test automation frameworks called 'ultra-light test harness,'" Overbaugh said. "Rather than building a huge battleship that comes sailing into port about a month after the real project is complete, build something as small as possible, that just barely gets the job done. Something that takes an hour or two to build, rather than days or weeks of design and implementation (or that costs six figures!)."
He recommends the JUnit family of test harnesses as a foundation for this method. "Add to it a quick solution to automating, such as White for application automation or Selenium RC for Web application automation, and you have a very easily implemented automation solution."
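To make the "ultra-light" idea concrete, here is one possible shape such a harness could take: a few dozen lines that discover `test_*` functions, run them, and report pass/fail. This is an illustrative stdlib sketch, not JUnit, White, or Selenium RC; in a real project the toy checks would be replaced by calls into driver code for the system under test.

```python
# An illustrative "ultra-light" test harness: find test_* functions,
# run each one, report pass/fail. The two sample tests are toy checks;
# real tests would drive the application under test.
import traceback

def test_addition():
    assert 1 + 1 == 2

def test_string_upper():
    assert "bvt".upper() == "BVT"

def run_tests(namespace):
    """Run every callable whose name starts with test_; return (passed, failed)."""
    passed, failed = 0, 0
    for name, fn in sorted(namespace.items()):
        if name.startswith("test_") and callable(fn):
            try:
                fn()
                passed += 1
                print(f"PASS {name}")
            except Exception:
                failed += 1
                print(f"FAIL {name}")
                traceback.print_exc()
    return passed, failed

if __name__ == "__main__":
    run_tests(dict(globals()))
```

The point of the sketch is the scale: an hour or two of work yields something that "just barely gets the job done," which is all Overbaugh's ultra-light approach asks for.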
Another consideration is when to develop an automated testing tool in-house and when to go with a third-party option. "Defect tracking tools are one type of tool that is often developed internally, typically by a project manager as self defense to manage the plethora of defect-fixing mini-projects," said Robin Goldsmith, president of consulting firm Go Pro Management Inc. in Needham, Mass. "In such cases, it's common for an organization to have multiple inconsistent, nonintegrated, underfeatured and undersupported defect tracking systems. I recommend replacing them all with one of the more than 40 typically reasonably priced commercial products or a suitable open source offering."
As another caveat, Goldsmith warned that some automated software testing tools are aimed at developers rather than testers. "One type is code-based and executes branches and logic paths from a structural white box perspective. These require little added effort to use and often claim to offer 100% unit test coverage -- but that can lead to illusory overconfidence in the tools," he said.
Overbaugh offered a final bit of advice for software testers implementing automation. He said that one of the keys to successful (and cost-efficient) implementation is "abstracting your test code from your execution layer."
"For the purpose of discussion, let's call the software that queues tests, runs them and gathers results a test harness. The code that interfaces with the system under test is called a test framework, and the code that contains the test is the test code. When building that framework, you want to shim out as many aspects of the product as possible (this is called abstraction). By doing so, you limit your exposure to changes in the product and the UI. If you have one abstraction layer per page, and you have five hundred tests calling that abstraction layer, a change to the page requires a change in one place rather than five hundred places. Infinitely less expensive!" he explained.
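The one-abstraction-layer-per-page idea Overbaugh describes can be sketched in a few lines. Everything here is hypothetical for illustration: `FakeDriver` stands in for a real UI driver (such as Selenium), and `LoginPage` is the per-page abstraction layer that tests call instead of touching the UI directly.

```python
# Sketch of a per-page abstraction layer. FakeDriver and LoginPage are
# hypothetical; a real framework would wrap an actual UI driver.

class FakeDriver:
    """Stand-in for a UI driver; records actions instead of driving a browser."""
    def __init__(self):
        self.fields = {}
        self.clicked = []

    def type_into(self, element_id, text):
        self.fields[element_id] = text

    def click(self, element_id):
        self.clicked.append(element_id)

class LoginPage:
    """One abstraction layer per page: if the login page's element IDs
    change, only this class changes -- not the hundreds of tests using it."""
    USERNAME, PASSWORD, SUBMIT = "user-field", "pass-field", "login-btn"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, username, password):
        self.driver.type_into(self.USERNAME, username)
        self.driver.type_into(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)

# Test code never mentions element IDs, so it survives UI changes:
def test_login_submits_credentials():
    driver = FakeDriver()
    LoginPage(driver).log_in("alice", "s3cret")
    assert driver.fields["user-field"] == "alice"
    assert "login-btn" in driver.clicked

test_login_submits_credentials()
```

If the login page's field IDs change, only the three constants in `LoginPage` move; the five hundred tests Overbaugh imagines stay untouched, which is precisely the cost saving he describes.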
Executive editor Jan Stafford contributed to the creation of this feature.
Still got questions about when to use manual vs. automated software testing tools? Email us at firstname.lastname@example.org and we'll do our best to get your questions answered.