For example, if you're a programmer, then you likely think of a test at the unit level created using a tool like JUnit or NUnit and measured in terms of code coverage. If you're a traditional tester in a large company, you might think of a GUI-level test, created using a tool like HP QuickTest Professional or Rational Functional Tester and measured in terms of requirements coverage. And if you're one of those oddball SOA testers, you might think of a set of test scenarios based on XML request/response pairings performed at the API level using a tool like eviware SoapUI or iTKO LISA.
No matter what type of automation you're doing, you're likely doing it primarily for one of the following four reasons:
- You might be interested in creating a test bed that allows you to refactor or introduce change with confidence.
- Or perhaps you're using your tests for acceptance so they can serve as a signal for programming completion.
- Perhaps you're using your automation to test things you just can't practically test manually, so it's enabling you to get coverage otherwise not possible or it's allowing you to look for new defects that you otherwise couldn't find.
- Or perhaps you're just looking for regression defects, making sure that the riskiest parts of your system still work the way you expect them to by repeating tests that previously demonstrated the functionality worked.
I didn't include reducing testing costs in that list of primary reasons because, while many vendors may claim the reason to automate is to control testing costs, there has to be a reason you'd be running those tests in the first place. The most common example for controlling costs is the regression testing scenario. If you have 1,000 manual regression tests, wouldn't it be cheaper to run them using automation? That may or may not be true, depending on your context. The likely reason you have those tests in the first place is the risk of a regression defect, so mitigating that risk is the primary goal for those tests. Cost control would be the secondary, but still important, reason for automating them.
Programmer test automation
A number of years ago I co-hosted a workshop on the topic of unit testing, during which we identified several reasons why developers create unit tests and explored how unit tests help support the testing effort in general. In an article on our findings, we outlined those reasons, including providing quick feedback to the developer, simplifying the structure of the system, mitigating concerns about the effects of refactoring, validating code integration, and writing more testable and well-documented code.
Aside from their immediate value to developers, unit tests provide value to the overall testing process by creating a test harness that can be leveraged for other types of testing. They can also reduce the overall scope of other types of testing by informing coverage analysis and risk analysis. And good unit tests have the potential to remove the necessity for in-depth domain testing and in-depth boundary value analysis.
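As a sketch of the kind of programmer test described above, here is a minimal boundary-value-focused unit test using Python's built-in `unittest`, an xUnit-style framework analogous to the JUnit and NUnit tools mentioned earlier. The `apply_discount` function and its valid range are hypothetical, invented purely for illustration:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class ApplyDiscountTest(unittest.TestCase):
    # Boundary value analysis: exercise both edges of the valid range
    # and one value just outside it.
    def test_zero_percent_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(50.00, 0), 50.00)

    def test_full_discount_yields_zero(self):
        self.assertEqual(apply_discount(50.00, 100), 0.00)

    def test_out_of_range_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.00, 101)
```

Tests like these, run with a standard test runner (e.g. `python -m unittest`), give the developer quick feedback and double as the reusable harness the paragraph above describes.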
Other types of test automation
Programmer test automation isn't necessarily just unit testing. The most common other type of test automation I've seen implemented by programming teams is automated acceptance testing. These tests, typically used on agile development teams, can be used to signal completion of a development phase for the programming team. They are often then used on an ongoing basis in future releases as a regression test bed.
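To make the acceptance-testing idea concrete, here is a hedged sketch of an automated acceptance test written in the given/when/then style common on agile teams. The user story, the `ShoppingCart` class, and its methods are all hypothetical stand-ins for a real system under test:

```python
# Hypothetical user story: "A shopper can add an item to an empty cart."

class ShoppingCart:
    """Hypothetical system under test."""
    def __init__(self):
        self._items = {}

    def add(self, sku, quantity=1):
        self._items[sku] = self._items.get(sku, 0) + quantity

    def total_items(self):
        return sum(self._items.values())

def test_shopper_can_add_item_to_empty_cart():
    # Given an empty cart
    cart = ShoppingCart()
    # When the shopper adds one item
    cart.add("SKU-123")
    # Then the cart holds exactly that item
    assert cart.total_items() == 1
```

When the acceptance tests for a story pass, the team can treat that story as done; keeping the same tests running in later releases is what turns them into the regression test bed the paragraph above mentions.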
Once you get away from programmer testing, the waters get muddied awfully fast. It's easy to get lost in the tool vendor and solution provider rhetoric around automation. Add to that the ever-changing landscape of open source automation tools, where you must figure out which projects are still active and which of those will work for you. While programmer testing may have those same challenges, other types of test automation have years of hype, literature, and so-called "best practices" behind them.
If you're using your test automation to test things you just can't practically test manually (often found when testing things like performance, security or high-volume test-data scenarios or when implementing model-based coverage), then deciding to use automation becomes easy. However, if you're looking to implement test automation for other reasons, like making sure that the riskiest parts of your system still work by doing regression testing, then figuring out if automation is the right choice is more difficult.
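As an illustration of a test you just can't practically run manually, here is a sketch of a high-volume, generated-data check. The `normalize_phone` function and the property it must satisfy are hypothetical; the point is the volume of cases, which no manual tester could cover:

```python
import random

def normalize_phone(raw):
    """Hypothetical function under test: keep only the digits."""
    return "".join(ch for ch in raw if ch.isdigit())

# High-volume, generated test data -- coverage that is impractical by hand.
random.seed(42)  # fixed seed so any failure is reproducible
separators = ["-", ".", " ", "(", ")"]
for _ in range(10_000):
    digits = "".join(random.choice("0123456789") for _ in range(10))
    # Inject random formatting noise between the digits.
    noisy = digits
    for _ in range(random.randint(0, 4)):
        pos = random.randint(0, len(noisy))
        noisy = noisy[:pos] + random.choice(separators) + noisy[pos:]
    # The property under test: normalization recovers the original digits.
    assert normalize_phone(noisy) == digits
```

Ten thousand randomized inputs run in well under a second here; the same check performed manually would be out of the question, which is exactly what makes the automation decision easy in this scenario.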
Figuring out if your tests should be automated
I don't want to reinvent the wheel here. If you haven't read Brian Marick's "When Should a Test Be Automated?" you might want to go do that now. It's a little dated, but don't let that throw you off. It's probably one of the most complete and well-stated papers on a very difficult topic. In the paper, Marick addresses the costs of automation versus manual tests, the lifespan of any given test, and the ability of any given test to find additional bugs.
When I'm deciding what type of test automation I need for a given problem, I ask myself the following questions:
- What's the goal of my testing?
- What aspect of the application am I trying to cover with this test or set of tests?
- What risk(s) does this test address?
- What specifically am I looking for automation to solve?
- Where will this test run, and how long will I need to maintain it?
- Will someone other than me look at or maintain this code, and what (if anything) will he or she need to be able to do with it?
My answers to those questions lead me down the path of what tools I may want to use and what test frameworks I may need to be successful. Notice the general pattern that emerges in the questions I ask myself. First decide what it is you're trying to do. Then decide if that should be automated. And then decide on your tools and how you'll develop and maintain the code.
For up-to-date listings of the automation tools that are currently available, along with descriptions of what they do, I recommend the following resources:
- The best hands-on overview I've seen for getting started with unit testing is Kent Beck's Test-Driven Development: By Example.
- For a listing of unit test frameworks by development language, check the Cunningham & Cunningham wiki. The C2.com wiki also has links to other resources on the topic.
- For both unit test and traditional automation tools, OpenSourceTesting.org maintains a fantastic listing of the current open source tools available.
- The StickyMinds.com tool guide provides an up-to-date listing of products available along with links to the vendors' sites.
- Software Test and Performance magazine does an annual Tester's Choice Award, which has a category for test automation solutions along with other categories for specialized test tools.
About the author: Michael Kelly is currently the director of application development for Interactions. He also writes and speaks about topics in software testing. Kelly is a board member for the Association for Software Testing and a co-founder of the Indianapolis Workshops on Software Testing, a series of ongoing meetings on topics in software testing. You can find most of his articles and blog on his website, www.MichaelDKelly.com.
This was first published in March 2009