
Using automation to accelerate software testing in Agile

Agile software development's time-boxed iterations and insistence on shipping working software put significant pressure on the software tester. In a previous tip on testing in agile (see the related content below) I suggested a half-dozen different approaches to compressing the test schedule, and one of them was getting the computer to do repetitive tasks, a process sometimes called "test automation."

Test automation is certainly possible and often, though not always, advisable. In this tip, I will discuss some test automation issues, explain a few common approaches to automating testing, point to some automated testing tools and suggest a few places to start.

What automated testing isn't

Generally, we can't automate the entire process of software testing. For example, test design and issue documentation are usually best left to a human. Sure, there are software packages that claim to do test design, and some of them are even helpful some of the time. But as long as the state of the art in artificial intelligence is playing the game of 20 questions moderately well, I'm afraid we'll be stuck designing our own test cases.


Related Content
Accelerate your agile software testing
This expert tip explains how adopting agile development and risk-driven and test-driven development can accelerate testing.

When people say "test automation," they typically mean taking a very precise script, running it exactly the same way, and checking the same values every time: essentially, viewing testing as a mechanical process. But when you sit down with a tester and watch what they actually do, the test script is just the start.

Good testers explore the nooks and crannies of the application. Learning as they go, good testers get their best test ideas during testing and use the script as a general guide. It's much like the other performing arts, where some of the best movie lines of all time were improvised.

Even automatic script-execution-and-evaluation, then, isn't really "test automation"; it might more rightly be called automatic checking. And while automatic checking can't replace thinking testing by a human, it can offload some of the testing burden and help compress the test effort.

The automation meme

If test automation is just checking, just part of a "balanced breakfast," why is it so popular? Well, automation is what programmers do. Every year, when bright young computer scientists graduate from the University of California at Berkeley, MIT and Carnegie Mellon, they look at testing, see the scripts and artifacts of the work and say, "Great! A straightforward business process! Let's automate it." The idea is sticky; it fits the natural paradigm in the mind of the programmer. And 10 or 20 years later, when the majority of "agile coaches" are former programmers, the meme rears its head again. So yes, we keep having the same conversations over and over.

So while computer-assisted test execution can be part of a "balanced breakfast," it can be done better or worse. If you are considering an agile approach and are committed to automated testing, you'll want to look carefully at your approach and strategy.

Here are a few common test automation frameworks, tools and processes, with the pros and cons of each:

Record/playback

Back in the days of DOS, it was possible to use a terminate-and-stay-resident (TSR) program to record your keystrokes and play them back, periodically dumping the screen to a file. Comparing the files and noting the differences as failures was known as record/playback. We have similar programs today under Windows; QuickTest Pro and TestComplete are two popular ones.

The weakness of record/playback is that it records any difference as a failure. Change the width of the browser window, have a different date in the lower-left-hand corner or upgrade your version of Windows, and record/playback tools start to register false errors. To be successful, you'll need to be very specific about which sub-windows to capture and which elements to compare, and even that will fail when the GUI changes. Most teams find they end up programming, often in a vendor-specific language much like Visual Basic. These programs can be expensive to develop and hard to maintain.
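
To make the false-failure problem concrete, here is a minimal sketch in Python of the comparison step at the heart of a naive record/playback tool. The file names and the captured text are hypothetical; the point is that any difference at all gets reported as a failure.

# Minimal sketch of naive record/playback comparison (hypothetical files).
# The tool dumps the screen to a file while recording, then again during
# playback, and flags *any* difference between the two as a failure.

def compare_dumps(baseline_path, playback_path):
    with open(baseline_path) as f:
        baseline = f.readlines()
    with open(playback_path) as f:
        playback = f.readlines()
    # Every differing line becomes a reported "failure."
    return [
        (i + 1, old.rstrip(), new.rstrip())
        for i, (old, new) in enumerate(zip(baseline, playback))
        if old != new
    ]

# A date stamp in the corner of the screen is enough to "fail" the run:
#   baseline: "Last login: 2009-10-01"
#   playback: "Last login: 2009-10-02"
# Nothing is actually broken, but the comparison reports a failure anyway.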

As a result, record/playback has a bit of a black eye in the software test world. I only recommend it in very specific instances, such as upgrades of ERP software where the user interface is very stable. For browser-based applications, record/playback has steadily lost ground to keyword-driven testing.

Keyword-driven tests

Keyword-driven testing is another type of test automation framework. If record/playback fails because it captures everything, keyword-driven testing is the opposite: it does only exactly what you ask it to do. Here's an example:
Open this page
Click the id "numerator"
Type 12
Click the id "denominator"
Type 4
Click "Divide"
wait_for_value "The result is: 3"
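
To show how little machinery a keyword runner needs, here is a minimal sketch in Python. The Driver class and its methods are hypothetical stand-ins for whatever a real tool such as Selenium RC or Watir provides; the point is the dispatch from keyword to action.

# Minimal keyword-interpreter sketch. Driver is a hypothetical stand-in
# for a real browser driver (Selenium RC, Watir, etc.); a real framework
# would translate each keyword into calls against the browser.

class Driver:
    def open_page(self, url):           print("open", url)
    def click(self, element_id):        print("click", element_id)
    def type_text(self, text):          print("type", text)
    def wait_for_value(self, expected): print("wait for", expected)

def run_script(driver, script):
    # Dispatch table: keyword -> the driver call that implements it.
    actions = {
        "open": driver.open_page,
        "click": driver.click,
        "type": driver.type_text,
        "wait_for_value": driver.wait_for_value,
    }
    for line in script:
        keyword, _, argument = line.partition(" ")
        actions[keyword.lower()](argument.strip('"'))

run_script(Driver(), [
    'open http://example.com/divide',
    'click "numerator"',
    'type 12',
    'click "denominator"',
    'type 4',
    'click "Divide"',
    'wait_for_value "The result is: 3"',
])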

The problem with keyword-driven testing is that you have to predict all the possible means of failure in advance. It is extremely bad at picking up rendering errors: text that appears correctly but in the wrong place, or an element that runs beyond the edge of a table. It also carries a maintenance burden when similar tests differ slightly.

As a tool for overnight sanity checking, though, it's not bad. Most commercial tools now have keyword-driven capabilities. Worksoft Certify is a tool designed to be keyword-driven from the ground up. Two notable free and open source keyword-driven tools are Selenium RC and Watir (Web Application Testing in Ruby).

Behind-the-GUI testing

Driving the browser can be slow, brittle and expensive to maintain. Another option is to write "plumbing" code that wires the business logic directly into your test framework, expressing tests as English-looking tables. Fit and FitNesse are two common open source tools that enable this.
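
Here's a minimal sketch of the idea in Python, with a hypothetical tax_due() function standing in for the business logic. Fit and FitNesse express the same rows-of-inputs, expected-output pattern as HTML or wiki tables rather than code, but the shape of the test is the same.

# Behind-the-GUI sketch: exercise business logic directly, no browser.
# tax_due() is a hypothetical business-logic function; Fit/FitNesse would
# express the same table as HTML or wiki markup instead of a Python list.

def tax_due(income, rate):
    return round(income * rate, 2)

# Each row: income, rate, expected tax -- an "English-looking table" in code.
table = [
    (50000, 0.10, 5000.00),
    (80000, 0.25, 20000.00),
    (0,     0.25, 0.00),
]

for income, rate, expected in table:
    actual = tax_due(income, rate)
    status = "pass" if actual == expected else f"FAIL (got {actual})"
    print(f"income={income} rate={rate} expected={expected}: {status}")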

This type of testing has no hope at all of catching GUI regression errors, but for basic business applications with a lot of business logic, such as tax software, behind-the-GUI testing can be a very quick way to create a low-maintenance test suite.

Model-driven testing

Popularized by Harry Robinson, who has worked with both Google and Microsoft, model-driven testing involves determining the logically possible paths through an application, then taking random walks down them, selecting random inputs and checking to make sure the result is correct. Model-driven testing has been tremendously successful for very large applications with a limited number of possible states. For instance, I understand it is used in Bing, Microsoft's search engine.
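
Here's a minimal sketch of a random walk over a hypothetical two-state model (a login page) in Python. The stub application and the model are both invented for illustration; in real model-driven testing the application has its own logic, and disagreement between it and the model signals a bug.

# Model-driven sketch: random walks over a hypothetical two-state model,
# checking the app's reported state against the model's prediction at
# every step. StubApp stands in for the real system under test.

import random

# The model: which actions are legal in each state, and where they lead.
MODEL = {
    "logged_out": {"login": "logged_in"},
    "logged_in":  {"logout": "logged_out", "view_profile": "logged_in"},
}

class StubApp:
    def __init__(self):
        self.state = "logged_out"
    def do(self, action):
        # A real application has its own logic here; the stub just follows
        # the model, so this walk always passes.
        self.state = MODEL[self.state][action]
    def current_state(self):
        return self.state

def random_walk(steps=1000, seed=42):
    random.seed(seed)
    app, expected = StubApp(), "logged_out"
    for _ in range(steps):
        action = random.choice(list(MODEL[expected]))
        app.do(action)
        expected = MODEL[expected][action]  # the model's prediction
        assert app.current_state() == expected, f"mismatch after {action}"
    print(f"{steps} random steps, no mismatches")

random_walk()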

The problem with model-driven testing comes in validating the results for more complex applications, as the test software essentially needs to reproduce all the business logic. We testers call this the oracle problem. Also, as the number of possible states increases, the complexity of the paths between them increases geometrically.

For small programs, especially database applications with no GUI, model-driven testing can be a wonderful way to simulate thousands or millions of test conditions overnight, unattended. It generally requires a skilled developer who also has considerable customer-facing testing skills, and, sadly, those are in short supply.

Do it twice

Sometimes management says the application absolutely, positively has to work. In that case, I have had some success with having two entirely different teams write the application, then running the same input through both and treating any difference in output as a bug somewhere. While this cannot prevent requirements bugs, at least it proves that the engineering teams made a reasonable interpretation, since, even with bad requirements, they both did the same thing.

Faced with the price of "doing it twice," most companies I've worked with find testing relatively cheap and "good enough." I have, however, successfully used this technique, for example, to validate a complex SQL query.
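
Here is a minimal sketch of the comparison step in Python, assuming two independently written implementations of the same calculation; both functions are hypothetical stand-ins for the two teams' code. Any disagreement means a bug in one of them, or in the shared requirements.

# "Do it twice" sketch: run the same inputs through two independently
# written implementations and treat any difference as a bug somewhere.

def average_team_a(values):
    return sum(values) / len(values)

def average_team_b(values):
    total = 0.0
    for v in values:
        total += v
    return total / len(values)

inputs = [[1, 2, 3], [10.5, 0.5], [7], [2, 2, 2, 2]]

for values in inputs:
    a, b = average_team_a(values), average_team_b(values)
    if a != b:
        print(f"MISMATCH on {values}: team A says {a}, team B says {b}")
    else:
        print(f"agree on {values}: {a}")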

Getting started with test automation

Your typical book on test automation says the effort is an investment, a development project in its own right. So the team should get the automation funded as its own project. I struggle with this. After all, the team is looking at doing automation to go faster; taking people off the project to go build a framework means going considerably slower for some time.

Plus, these projects are often justified with ROI numbers. The truth is, we have no idea how much computer-assisted, or automated, testing will speed us up, as the tradeoff is not one-for-one. We also have no idea how much maintenance burden the new tests will impose. (See note below.) So any ROI number is likely an educated stab in the back, er, I mean dark.

My advice is to try to get a day or two to download one of these tools and experiment; even the commercial ones usually have a 30-day trial. If you get something that looks like it will work, get a tech story funded to start building the tests. Likely the developers will have to write the framework or do the integration. This tech story has specific and measurable business value: enabling your team to shorten its test cycle and thus release the software more often. The product owner will need to prioritize that against other deliverables. Then communicate in terms of how long a typical test cycle is, and pick low-hanging fruit in order to shorten it.

Good luck, and let me know how it goes. I'd be happy to hear from you.

Note: Test design can be done better or worse, to have more or less of a maintenance burden. I'm afraid I'll have to cover that in a different article. If you'd like me to write that article, send an email to matt.heusser@gmail.com.


About the author: Matt Heusser is a technical staff member of SocialText, which he joined in 2008 after 11 years of developing, testing and/or managing software projects. He teaches information systems courses at Calvin College and is the original lead organizer of the Great Lakes Software Excellence Conference, now in its fourth year. He writes about the dynamics of testing and development on his blog, Creative Chaos.

This was first published in October 2009
