Test-driven development (TDD) has shown that writing tests before coding can produce better-designed, higher-quality code, but it does not prevent customer requirements from being misunderstood. A technique called acceptance test-driven development (ATDD) remedies this by involving customers in the test design process earlier. Whether you’re a CIO, a stakeholder or a developer, ATDD is a technique worth understanding.
What is ATDD?
ATDD is a technique aimed at helping the development team collaborate with customers up front. Asking questions surfaces hidden assumptions and reveals examples of desired system behavior before the code is written. Using test frameworks such as FIT and FitNesse, teams can turn those examples into executable specifications that guide development.
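To illustrate the idea of an executable specification (this is a plain-Python sketch, not FIT or FitNesse syntax), imagine the customer supplies example rows for an order-discount rule. The rule and its thresholds here are invented for illustration:

```python
# Hypothetical example: customer-supplied examples for an order-discount rule,
# expressed as data the whole team can read and the computer can execute.
# The rule and its thresholds are invented for illustration.

def discount(order_total):
    """Return the discount percentage for a given order total."""
    if order_total >= 100:
        return 10
    if order_total >= 50:
        return 5
    return 0

# Each row is an example the customer agreed to: (order total, expected discount %).
examples = [
    (20, 0),
    (50, 5),
    (99, 5),
    (100, 10),
]

for total, expected in examples:
    actual = discount(total)
    assert actual == expected, f"total={total}: expected {expected}%, got {actual}%"
print("all examples pass")
```

The examples read like a requirements table, yet they run as tests, which is the essence of an executable specification.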
This technique has gone by several different names. Brian Marick blogged about “driving projects with examples.” Joshua Kerievsky described “story test-driven development.” I first heard the terms “acceptance test driven planning” and “acceptance test driven development” from Richard Watt and David Leigh-Fellowes.
Because Brian Marick’s Agile testing matrix proved so useful, I preferred the label “customer test-driven development” or CTDD. To me, this reflected the intention that we write business-facing tests that reflect customer needs. I’m not a good trend-setter, though – this term did NOT catch on!
Today, the term “acceptance test-driven development,” or ATDD, popularized by expert practitioners and coaches such as Elisabeth Hendrickson, is in wide use. Gojko Adzic offers an alternative description, “Specification by Example” or SBE. In his book of the same title, he presents case studies of over fifty teams successfully practicing this approach.
Whatever the label, many teams all around the globe are successfully collaborating with their customers to get examples and requirements, turning those into executable tests, and using those tests to produce the software that the customers really want. Many new test libraries and frameworks are available to support these efforts.
How ATDD works
Agile values direct us to avoid “analysis paralysis” and “big design up front.” However, we do need to know what customers expect for each user story before we start writing the code. Say we have the following user story:
As a retail website customer, I would like to be able to delete items out of my shopping cart so that I can check out with only the items I want.
We want the customer to tell us, via tests and examples, how she’ll know when this story is complete. We ask open-ended questions to help the customer think about the feature from multiple viewpoints. For the story to delete items out of the shopping cart, we might ask questions such as:
- Should there be a dialog to confirm a delete?
- Is there a need to save the deleted items for later?
- Can you draw a picture of how you want the delete function to look?
- What should happen if all items in the cart are deleted?
- What if the user has two sessions open with items in the cart, and deletes an item in only one of the sessions?
We draw on whiteboards (real or virtual) with our customers, and use brainstorming and visualization techniques such as mind mapping and story mapping. We express the resulting examples as executable tests, starting with the happy path, in an appropriate automated test framework. In the case of this example, we’d most likely find that this story is more of an epic, and slice it into smaller increments.
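To make this concrete, here is a minimal sketch of a happy-path acceptance test for the delete-item story. The `ShoppingCart` class and its methods are hypothetical, invented for illustration rather than taken from any particular framework:

```python
# Minimal sketch of a happy-path acceptance test for the delete-item story.
# ShoppingCart and its API are hypothetical, invented for illustration.

class ShoppingCart:
    def __init__(self):
        self._items = {}  # item name -> quantity

    def add(self, item, quantity=1):
        self._items[item] = self._items.get(item, 0) + quantity

    def delete(self, item):
        self._items.pop(item, None)

    def items(self):
        return dict(self._items)

# Happy path: a customer removes one item and keeps the rest.
cart = ShoppingCart()
cart.add("hiking boots")
cart.add("wool socks")
cart.delete("wool socks")
assert cart.items() == {"hiking boots": 1}
print("happy path passes")
```

A test this small is deliberately the starting point; the richer conditions come later, in tiny increments.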
As the programmers write the code, they automate the happy path test, and we start adding different conditions, exploring different aspects of behavior, including boundary tests and negative tests. As we develop the tests and code in tiny increments, once the automated tests pass, we’ll dive deeper with exploratory testing, which often uncovers additional desired and undesired behaviors that we then capture as tests.
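Continuing the sketch with the same hypothetical cart API, the boundary and negative cases the team adds incrementally might look like this:

```python
# Hypothetical sketch of boundary and negative tests for the delete-item story.
# ShoppingCart and its API are invented for illustration.

class ShoppingCart:
    def __init__(self):
        self._items = {}  # item name -> quantity

    def add(self, item, quantity=1):
        self._items[item] = self._items.get(item, 0) + quantity

    def delete(self, item):
        if item not in self._items:
            raise KeyError(f"{item!r} is not in the cart")
        del self._items[item]

    def is_empty(self):
        return not self._items

# Boundary test: deleting the last item leaves an empty cart, not an error.
cart = ShoppingCart()
cart.add("hiking boots")
cart.delete("hiking boots")
assert cart.is_empty()

# Negative test: deleting an item that isn't in the cart is reported clearly.
try:
    cart.delete("wool socks")
    assert False, "expected a KeyError"
except KeyError:
    pass
print("boundary and negative tests pass")
```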
Automated specifications are only part of our strategy to delight our customers. We also demo each increment of the new feature to the stakeholders, and tweak the functionality as needed.
When the tests are passing, they may form part of our automated regression test suites. Once “green,” they must stay that way. If a regression failure occurs, or a test needs updating to reflect a change to production code, we stop what we’re doing and get it passing again.
A real-life example
My team had to experiment to find the “sweet spot” for providing just the right level of detail to the coders. I started out by collaborating with our product owner to write dozens of detailed tests, weeks in advance. When coding began, the programmers found that the intricately detailed tests obscured the main purpose of the story. In addition, the test design was incompatible with the code design they had evolved using TDD.
We learned to write high level tests together as a team during our iteration pre-planning and planning meetings. This gives us a shared understanding of the “big picture” for each story. Once coding begins, a tester works with a programmer to specify a basic happy-path test. Once that test passes, the tester specifies more test cases. This process is a series of tiny test-code-test iterations, building up small slices of the user story.
Collaborating on these tests may reveal conflicting ideas about how the code should work. For example, the tester expects that when the shopping cart is empty, the user is returned to the main shopping page, but the programmer expects the user to remain in the empty shopping cart. They go talk to the product owner, and perhaps other stakeholders, to decide the desired behavior. Discussions like these flush out hidden assumptions and reduce the chance for missed or misunderstood requirements.
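Once the product owner decides (say, in this case, that the user is returned to the main shopping page when the cart empties), the decision can be locked in as a test so it isn’t re-litigated later. The function and page names here are hypothetical:

```python
# Hypothetical sketch: suppose the product owner decided that emptying the
# cart returns the user to the main shopping page. A test locks in that
# decision. The rule and names are invented for illustration.

def page_after_delete(remaining_item_count):
    """Which page the user sees after a delete (invented rule for illustration)."""
    return "main_shopping_page" if remaining_item_count == 0 else "shopping_cart"

assert page_after_delete(0) == "main_shopping_page"  # cart emptied
assert page_after_delete(2) == "shopping_cart"       # items remain
print("agreed behavior locked in")
```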
The biggest benefits of ATDD
ATDD helps us nail the business needs on the first try, and tests become part of our short feedback loop in the form of automated regression suites. But the most important benefit of these tests has already been delivered: writing the tests requires communication among testers, programmers and business experts.
Another major plus is that the tests become “living documentation.” When you run your automated regression tests with each code check-in, you’re forced to update the tests as code is changed or added. If a business person has questions about how some feature works, we can go right to the automated tests to demonstrate the actual behavior. This type of documentation, in my experience, is far more useful than a text document that never gets updated after the code is released.
It takes a big investment of time to learn to do ATDD effectively. While your team is experimenting to find the approach and tools that work best for your situation, you’ll deliver less new functionality with each iteration. However, each user story released will meet customer expectations, and you won’t waste time with re-work. By writing the right code, supported by automated regression tests that provide quick feedback, you’ll keep your technical debt low. You’ll build a library of living documentation. Like my team, yours will deliver business value frequently, while working at a sustainable pace, and realize a huge return on your investment in ATDD.
How would implementing acceptance test-driven development benefit your organization? Email comments to email@example.com.