How user stories help to define software requirements

User stories play an important role in defining requirements, and they also contribute to living documentation during the software development process. In this expert response, Lisa Crispin offers key practices for ensuring that requirements align with user stories and that user stories are accurate.

Are user stories considered all that is necessary to define a requirement? It doesn’t seem as though the small amount of information in a user story is enough for accurate estimation.

Years ago, Ron Jeffries defined a user story as “a reminder to have a conversation.” We start with themes or features, break those down into increments that can each be completed in a few days, and those increments are our user stories. The process of writing user stories gives the team a chance to think about each part of the feature from several viewpoints: the business, the end user, and the development team. But the story itself is merely a placeholder for further conversations once the team starts working on it.
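A story card is often written in the familiar “role, goal, reason” format. For example (a hypothetical story, not one from the author’s team):

```
As a registered shopper,
I want to save items to a wish list,
so that I can come back and buy them later.
```

Everything beyond those few lines comes out of the conversations the story triggers.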

Experiment with different approaches to find what works best for your team. Over the years, my team has evolved a process that helps ensure we’ll deliver what our customers want. A couple of days before the start of the iteration, we hold a one-hour “pre-planning” meeting with the entire development team, the product owner, and sometimes other stakeholders who are “story owners.” The product owner (PO) goes through all the stories. For each story, he starts with the “why”: the purpose of the story. As he explains it and gives the specifications, we articulate the specs and high-level test cases, and someone writes those up on the whiteboard. This way, everyone on the team gains a shared understanding of the story. Often we have questions the PO can’t answer; we write those on the whiteboard with a different color marker.
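To make that concrete, here is what the whiteboard notes for a hypothetical “wish list” story might look like (the story, specs, and questions are illustrative, not taken from the author’s team):

```
Story: Save items to a wish list
Why:   Shoppers who aren't ready to buy leave and don't come back;
       a wish list gives them a reason to return.
High-level test cases:
  - A logged-in shopper can add an in-stock item to the wish list.
  - Adding an item that is already on the list does not duplicate it.
  - A logged-out shopper is prompted to log in first.
Open questions (in the other color):
  - Is there a maximum wish list size?
  - Can out-of-stock items be saved?
```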

When we do our sprint planning, we go through all the stories again. Having a couple of days to mull over the stories helps us think of more questions as well as design ideas for the technical implementation. After the planning meeting, we (usually the testers on the team, but sometimes programmers) write more detailed requirements and examples on the team wiki, which is organized by business functional area and cross-referenced by sprint. We go over those with the PO to make sure we’ve understood everything. Once coding starts on a story, we start writing executable tests, beginning with the “happy path.” Once the programmer has automated the fixtures and written enough code for that test to pass, we start adding more test cases, including boundary conditions and negative tests.
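As a sketch of how those executable tests might grow, here is a pytest-style example against a hypothetical WishList class. The article doesn’t name the team’s test framework or fixtures, so the tooling and names below are assumptions for illustration only:

```python
import pytest

# Hypothetical domain class standing in for the code under test.
class WishList:
    MAX_ITEMS = 100  # assumed boundary, answering a planning question

    def __init__(self):
        self.items = []

    def add(self, item):
        if not item:
            raise ValueError("item is required")      # negative case
        if len(self.items) >= self.MAX_ITEMS:
            raise OverflowError("wish list is full")  # boundary case
        if item not in self.items:                    # no duplicates
            self.items.append(item)

# The happy path is written first and drives the initial implementation.
def test_add_item_happy_path():
    wish_list = WishList()
    wish_list.add("red scarf")
    assert wish_list.items == ["red scarf"]

# Added once the happy path passes: a boundary condition.
def test_add_item_to_full_list_raises():
    wish_list = WishList()
    wish_list.items = [f"item-{i}" for i in range(WishList.MAX_ITEMS)]
    with pytest.raises(OverflowError):
        wish_list.add("one too many")

# A negative test: invalid input is rejected.
def test_add_empty_item_rejected():
    wish_list = WishList()
    with pytest.raises(ValueError):
        wish_list.add("")
```

The happy-path test exists before the code does; the boundary and negative tests are layered on as the implementation fills out.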

Once these tests pass, they become part of our automated regression suite, which turns them into living documentation. Any time a change in the code or the database alters the documented behavior, the affected tests fail and we have to update them, so our documentation is always up to date.

The most valuable outcomes of this process are the collaboration on tests, which improves communication within the team and with stakeholders, and the living documentation that results.
