
Acceptance test-driven development explained

Acceptance test-driven development brings developers, testers and business experts together to sort out defects before coding starts, according to a new book.

Known by a variety of names, acceptance test-driven development is often misunderstood. In his new book, ATDD by Example: A Practical Guide to Acceptance Test-Driven Development, Agile expert and author Markus Gärtner explains ATDD for the novice with easy-to-read case studies that demonstrate how ATDD is used to accurately reflect business requirements and catch defects before the first line of code has been written.

What is ATDD?

Acceptance test-driven development (ATDD), like TDD (test-driven development), is an approach in which tests -- in this case acceptance tests -- help drive the design of a product. By having the whole team involved in a discussion of acceptance criteria, requirements are better understood and clarified before the code is designed or written.

Perhaps part of the confusion with ATDD is that different terms have been used in the software industry to describe this approach, including, but not limited to, the following:

  • Acceptance test-driven development
  • Behavior-driven development
  • Specification by example
  • Agile acceptance testing
  • Story testing

The specification workshop

ATDD starts with the specification workshop -- a meeting held with development, test and the business expert -- to make sure requirements are well understood. Gärtner's book uses two case studies; the first is an online application that calculates airport parking rates. Gärtner walks us through the team's discussion during the specification workshop as the requirements of the parking price calculator are worked out.

The discussion starts with the business expert describing the business rules associated with pricing. The team asks clarifying questions and creates tables with different scenarios to ensure they have a thorough understanding of parking rates for various situations. During the discussion, boundary conditions are clarified and unusual situations are explored. Asking these questions and getting answers from the business expert early helps prevent defects that would otherwise result from ambiguity. By the end of the meeting, the team has five tables describing parking duration and associated costs for the various lots, which become the basis for their automated tests.
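Tables like these map naturally onto Cucumber's scenario-outline format, where each row of an Examples table becomes one test run. A hedged sketch of what one might look like (the lot name, durations and prices below are invented for illustration and are not the book's actual rates):

```gherkin
Feature: Airport parking cost calculator

  Scenario Outline: Calculate valet parking cost
    Given I park my car in the Valet lot
    When I park for <duration>
    Then the price is <price>

    Examples:
      | duration | price  |
      | 2 hours  | $10.00 |
      | 24 hours | $18.00 |
```

Each placeholder in angle brackets is filled in from a row of the table, so adding a new scenario from the workshop is just adding a row.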

Gärtner advises teams to use specification workshops to discuss stories that will be worked on in the upcoming iteration. He tells teams to be respectful of their business expert's time by avoiding talk of the latest technology and remaining focused on clarifying requirements.


In the next section of the book, Gärtner carries this example forward, describing how the tester uses the output from the specification workshop to create automated tests using Cucumber and Ruby in combination with Selenium.

Cucumber is a testing framework that ties the data from the examples to the application under test. "Glue code" provides the necessary step definitions, support code and third-party libraries such as Selenium. Selenium drives a browser and can interact with the actual Web pages of the online application being tested -- in this case the airport parking calculator. Gärtner's book includes actual code snippets from Cucumber in which features are described for each set of tests, as well as snippets that describe the support code for setting up the Selenium client.
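Conceptually, the glue code maps each plain-language step to a block of Ruby that drives the browser. Here is a minimal, self-contained sketch of that idea without the real Cucumber and Selenium dependencies -- the step wording, regex patterns and pricing rule are illustrative assumptions, not the book's code:

```ruby
# Sketch of Cucumber-style "glue": step patterns mapped to Ruby blocks.
# In a real suite the blocks would call Selenium to drive the browser;
# here they update a plain hash so the sketch runs standalone.

STEP_DEFINITIONS = {}

def define_step(pattern, &block)
  STEP_DEFINITIONS[pattern] = block
end

def run_step(text, world)
  STEP_DEFINITIONS.each do |pattern, block|
    if (match = pattern.match(text))
      return block.call(world, *match.captures)
    end
  end
  raise "No step definition matches: #{text}"
end

# Illustrative step definitions for the parking calculator example.
define_step(/^I park my car in the (\w+) lot$/) do |world, lot|
  world[:lot] = lot
end

define_step(/^I park for (\d+) hours$/) do |world, hours|
  # A real step would fill in start/end times on the web form via Selenium.
  world[:hours] = Integer(hours)
end

define_step(/^the price is \$(\d+\.\d{2})$/) do |world, expected|
  # Hypothetical $5/hour rate with an $18 daily cap -- not the book's rates.
  daily_cap = { "Valet" => 18.0 }.fetch(world[:lot])
  actual = [world[:hours] * 5.0, daily_cap].min
  raise "expected $#{expected}, got $#{actual}" unless actual == Float(expected)
end

world = {}
run_step("I park my car in the Valet lot", world)
run_step("I park for 2 hours", world)
run_step("the price is $10.00", world)
puts "scenario passed"
```

The real framework does the same pattern matching for you; the point is that each sentence in the feature file resolves to exactly one block of executable code.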

Next, the tester pairs with the developer to implement and automate the first test. They recognize that while the tests are based on the duration a car is parked, the GUI requires a start time and an end time, so the duration must be derived from those values. By collaborating, they get the code in place that connects the test case to the application under test. Once that first test is operational, it's an easy task to reuse and modify it for the other test scenarios.
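The gap the pair had to bridge -- specification tables keyed by duration versus a GUI that takes two timestamps -- can be sketched in a few lines of plain Ruby. The timestamp format and the rule of rounding partial hours up are assumptions for illustration:

```ruby
require "time"

# The spec tables are keyed by duration, but the GUI asks for entry and
# exit times, so the test layer derives duration from two timestamps.
# Rounding any partial hour up is an assumed billing rule for this sketch.
def parking_hours(start_str, end_str)
  start_time = Time.parse(start_str)
  end_time   = Time.parse(end_str)
  seconds = end_time - start_time
  (seconds / 3600.0).ceil
end

puts parking_hours("2012-06-01 09:00", "2012-06-01 14:30")  # 5.5h rounds up to 6
```

With a helper like this in the glue layer, the feature files can keep speaking the business's language (durations) while the automation speaks the GUI's language (start and end times).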

In the book's second case study about a traffic light software system, FitNesse is used as the test automation tool of choice, and again Gärtner provides code snippets so readers are able to get a detailed look at the automation implementation.
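FitNesse expresses its test data as wiki tables that a fixture class executes row by row. A sketch of what a decision table for a traffic light might look like (the column names and the red / red-yellow / green / yellow sequence here are assumptions based on the standard European light cycle, not the book's actual fixture):

```
|Traffic Light Transitions    |
|current state|next state?    |
|red          |red-yellow     |
|red-yellow   |green          |
|green        |yellow         |
|yellow       |red            |
```

Each row states an expected transition; the trailing question mark marks the column the fixture checks against the system under test.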


Gärtner points out the importance of collaboration, first at the specification workshop between all team members, and later between the tester and developer implementing an automated solution. Even if the tester has no knowledge of programming or the tests are not automated, there is benefit in using the data coming out of the specification workshop as a communication device to ensure all parties are clear on the conditions and business logic.


With ATDD, the entire team collaborates to produce testable requirements, leading to higher-quality software. Gärtner's book steps through two case studies with in-depth examples, describing how the discussion during the specification workshop clarifies requirements and provides data for testing. From there, tester and developer can automate using tools such as Cucumber or FitNesse to connect the data to the application under test.

Regardless of the tool or technology used for the project, all teams will benefit by collaborating with their business experts early, discussing the various scenarios of their application and viewing it from the perspective of testability. Doing so will undoubtedly result in a higher-quality application.

Want more? Gärtner has made the source code for the two case studies available.

Send questions or comments about the book to Gärtner via the agile testing mailing list.

ATDD by Example: A Practical Guide to Acceptance Test-Driven Development by Markus Gärtner is published by Pearson/Addison-Wesley Professional, June 2012, ISBN 0321784154, copyright 2013 Pearson Education Inc. For more info, please visit the publisher's website.
