How do you know when you're done testing a feature? In a time-boxed environment, such as Scrum, are you "done" when the iteration is over, even if you don't feel you've had time to fully test?
In Agile development, testing isn’t a separate phase; it’s tightly integrated with coding. In fact, we start development on each user story by writing business-facing tests that will tell us what to code and help us know when we’re done. Testers, business analysts and developers work with the business stakeholders to elicit examples of desired and undesired behavior for each user story and feature, and turn these into executable tests. This is called Acceptance Test-Driven Development (ATDD) or Specification by Example. The development team collaborates with the customers to decide which tests will prove a particular user story delivers what the customer expects. These will include automated functional tests, manual exploratory tests and non-functional testing such as performance or security testing. When those tests are all passing, you’re done with that user story.
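To make the idea concrete, here is a minimal sketch of what “turning a business example into an executable test” can look like. The business rule, the `discounted_total` function and the dollar amounts are all hypothetical illustrations, not taken from the article; in practice teams often express such examples in a tool like FitNesse or Cucumber, but plain test code works the same way.

```python
# Hypothetical business rule elicited from stakeholders:
# "Orders of $100 or more get a 10% discount."

def discounted_total(order_total: float) -> float:
    """Apply a 10% discount to orders of $100 or more (illustrative rule)."""
    if order_total >= 100:
        return round(order_total * 0.90, 2)
    return order_total

# The stakeholders' concrete examples become executable assertions.
# When all of them pass, this part of the story is "done".
assert discounted_total(100.00) == 90.00   # boundary case: discount applies
assert discounted_total(99.99) == 99.99    # just below the threshold: no discount
assert discounted_total(250.00) == 225.00  # well above the threshold
```

Written before the production code exists, these examples fail first and then guide the implementation, which is the ATDD workflow the answer describes.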
User story estimates must include time for all testing activities, including test automation and manual exploratory testing. When we plan our iteration, we plan only the user stories that can be completed, including all testing activities. New Scrum teams often over-commit, planning more work than they can possibly finish. They end up in a mini-waterfall, with testing pushed to the end, and their features are not done simply because the last day of the sprint has arrived. This leads to a death spiral of stories dragging from one iteration to the next, with the testers never able to catch up.
Agile teams have an advantage: they include all the roles necessary to understand the customer needs and deliver high-quality software. Diversity of skills and experience allows Agile teams to find ways to help business stakeholders define their needs with concrete examples, and to translate those examples into the tests that define “done” for each user story and each feature.
One way to avoid the “mini-waterfall” syndrome is to focus on finishing one story at a time. On my team, when testing tasks fall behind, programmers, DBAs and other team members pitch in to finish up an almost-done story rather than work on different ones. That’s one way to spread out the testing work over the whole iteration. The whole team takes responsibility for automating all regression tests, so that testers can focus on manual exploratory testing and other important activities.
We testers can always find more testing to do; we never feel “done.” However, working closely with business stakeholders, and planning adequate time for proving features are “done,” allows us to delight our customers. New Agile teams usually can’t deliver much new functionality in each iteration. But if they invest time in finding ways to understand customer requirements and translate those into tests that guide development, they’ll bake in quality that allows them to do more and go faster later on.
Specification by Example: How Successful Teams Deliver the Right Software by Gojko Adzic is one excellent resource for learning how to define and achieve “doneness.”
This was first published in December 2011