
Horseshoes, hand grenades, and Acceptance Test Driven Development (ATDD)

The best way to determine how an application should function is by running tests, and the results of those tests dictate how the application should feel and perform. Acceptance Test-Driven Development guides application development by providing test results that move the project along in its lifecycle.

Creating software is about telling computers what to do. Usually this starts with a description of some business function written in a natural language like English. Software development is about making computers fulfill such business functions. But computers do not understand natural languages; they understand programming languages. And from the lowest level to the highest level, the best way to tell a computer what to do is to write tests, and then to have the program fulfill those tests.

Agile testing quadrants (the left side)
Some time ago Brian Marick noted that tests are primarily "business-facing" or "technology-facing," and that tests exist either to "support the team" or to "critique the product." Test-driven development (TDD) at every level is about supporting the team. Unit testing is the technology-facing aspect of TDD, and acceptance testing (ATDD) is the business-facing aspect of TDD.

Tests are requirements and requirements always change
A unit test is a small piece of extra code that manipulates a small piece of production code, checking the results of such manipulation. In practice, writing a unit test first requires thinking about the design of the code. When implementing the production code to satisfy the unit test, it is not uncommon to discover that the unit test is not correct in some way, so the test has to change a bit when the actual production code is written.
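As a sketch of that test-first workflow, here is what a unit test and the production code written to satisfy it might look like, using the interest calculation that appears later in this article. The Account class and its add_interest method are names invented for illustration, not taken from any real system:

```python
import unittest


class Account:
    """Minimal production code, written only after the test below existed."""

    def __init__(self, balance):
        self.balance = balance

    def add_interest(self, rate):
        # Add simple interest for one period at the given rate.
        self.balance += self.balance * rate


class TestAccountInterest(unittest.TestCase):
    def test_ten_percent_interest_after_one_year(self):
        account = Account(100.00)
        account.add_interest(0.10)
        # assertAlmostEqual because floating-point money arithmetic
        # is not exact; a real system would likely use integer cents.
        self.assertAlmostEqual(account.balance, 110.00)


if __name__ == "__main__":
    unittest.main()
```

Writing the test first forced two design decisions before any production code existed: that an account is constructed with a balance, and that interest is applied by a method taking a rate.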

This process is magnified when doing ATDD. Acceptance tests are business-facing not technology-facing, and they are tests for the requirements of the system itself, not for small pieces of code. Acceptance tests are intended to answer the question "what behavior must be in place in order for the system to be working correctly?"

Given, when, then: A formula
One popular formula for creating automated acceptance tests is to use the language of "given, when, then." Such a formula allows tests to be written in a natural language and also to be implemented in code. Here is a simple example of an acceptance test for a system that adds interest to a financial account:

Given the account contains $100.00
When one year has passed
Then the account will contain $110.00

An ATDD test incorporates code such that, upon reading each statement, the code manipulates the system and reports on the state of the system at the end of the action. So a small piece of code reads "account" and "$100.00" and puts 100.00 into the account. The test then reads "one year has passed" and alters the system as if one year had passed. Finally, the code reads the account balance, and the test passes once the feature exists that makes the balance $110.00.
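A minimal sketch of how such glue code might work, using a hand-rolled step matcher rather than a real ATDD framework such as Cucumber or FitNesse. The Account class, its pass_year method, and the step patterns are all invented for illustration, assuming a fixed 10% yearly interest rate:

```python
import re


class Account:
    """Toy system under test: an account that earns 10% yearly interest."""

    def __init__(self):
        self.balance = 0.0

    def pass_year(self):
        self.balance *= 1.10


def check(condition, message):
    if not condition:
        raise AssertionError(message)


def run_acceptance_test(lines, account):
    # Each regex maps one natural-language statement to code that
    # manipulates the system (Given/When) or inspects it (Then).
    steps = [
        (r"Given the account contains \$([\d.]+)",
         lambda m: setattr(account, "balance", float(m.group(1)))),
        (r"When one year has passed",
         lambda m: account.pass_year()),
        (r"Then the account will contain \$([\d.]+)",
         lambda m: check(abs(account.balance - float(m.group(1))) < 0.005,
                         "balance is wrong")),
    ]
    for line in lines:
        for pattern, action in steps:
            match = re.match(pattern, line)
            if match:
                action(match)
                break
        else:
            raise ValueError("no step definition matches: " + line)


test = [
    "Given the account contains $100.00",
    "When one year has passed",
    "Then the account will contain $110.00",
]
run_acceptance_test(test, Account())  # raises AssertionError until the feature works
```

The point of the sketch is the mapping: the natural-language test stays readable by business people, while each line is bound to executable code by a pattern.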

But things get trickier when doing ATDD for user interfaces (UI). For example, we might have a UI test like

Given user Bob is logged in
When Bob clicks the "My Account" link
Then Bob sees a balance amount of $110.00

And the test will fail until the code exists for a user to click the link and see the correct balance.
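In a real project these UI steps would drive a browser through a tool such as Selenium or Watir. Purely as an illustration of the shape of such a test, here is a sketch against a fake page object; every class and method name here is invented, and the fake stands in for the browser:

```python
class FakePage:
    """Stand-in for a browser page; a real test would drive Selenium or Watir."""

    def __init__(self):
        self.logged_in_user = None
        self.visible_text = ""
        # Until the "My Account" feature is implemented, clicking raises KeyError
        # and the acceptance test fails -- which is exactly what ATDD expects.
        self.links = {"My Account": self.show_account}

    def log_in(self, user):
        self.logged_in_user = user

    def click(self, link_text):
        self.links[link_text]()

    def show_account(self):
        # The feature under test: display the logged-in user's balance.
        self.visible_text = "balance: $110.00"


def test_bob_sees_balance():
    page = FakePage()
    page.log_in("Bob")                     # Given user Bob is logged in
    page.click("My Account")               # When Bob clicks the "My Account" link
    assert "$110.00" in page.visible_text  # Then Bob sees a balance of $110.00


test_bob_sees_balance()
```

If the requirement later changes so that "My Account" shows an account history instead, the Then step and the show_account implementation change together, which is the test-as-requirement dynamic described next.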

Just as when doing unit testing, designing and implementing the actual system code will sometimes require that the test be changed. In the course of creating the system, it might turn out that what should happen is that when Bob clicks the "My Account" link he should see an account history instead of his current balance.

The value of ATDD
In software projects that do not practice some sort of ATDD, the fact that Bob sees the wrong thing when he clicks "My Account" is typically discovered late in the development cycle. This often has unpleasant consequences for the budget and for the project schedule.

Just as TDD forces the programmer to consider the design of the code before writing it, ATDD forces the people working on the business aspects of the system to consider the design of the whole system. In practice, automated acceptance tests rarely remain static after they are created. In practice, automated acceptance tests reflect the current understanding of what the final state of the whole system is to be, and that understanding alters over the course of the project. In practice, automated acceptance tests, just like unit tests, are a design tool, but one that is business-facing rather than technology-facing.

When the system is completed, automated acceptance tests provide a warning if something later goes wrong. A number of people in the agile community maintain that agile projects require both automated acceptance tests and manual exploratory testing. Having a suite of historical automated acceptance tests run frequently allows exploratory testers to focus on the new and interesting aspects of the system, with some guarantee that the older parts of the system still function as they should.

Close enough is good enough
As the saying goes, "close enough only counts in horseshoes and hand grenades." In a software development project, the team's understanding of what the system should do changes over time. Automated acceptance tests reflect that understanding. Acceptance tests early in the project will be simple, like the interest calculation. Later acceptance tests will be more complex, like defining how a particular user will navigate the UI. Automated acceptance tests are always just close enough to being correct, until such time as the features being tested are complete. After that, a suite of automated acceptance tests allows the team to devote time and energy to new and interesting work, instead of always having to check that everything else still works the way it should.

About the author: Chris McMahon is a software tester and former professional bass player. His background in software testing is both deep and wide, having tested systems from mainframes to web apps, from the deepest telecom layers and life-critical software to the frothiest eye candy. Chris has been part of the greater public software testing community since about 2004, both writing about the industry and contributing to open source projects like Watir, Selenium, and FreeBSD. His recent work has been to start the process of prying software development from the cold, dead hands of manufacturing and engineering into the warm light of artistic performance. A dedicated agile telecommuter on distributed teams, Chris lives deep in the remote Four Corners area of the U.S. Luckily, he has email.
