Use cases have become a popular requirements development technique. Focusing on users and their goals, rather than on product features, improves the chances of developing a software package that truly meets customer needs. However, a mystique has grown up around use cases, and many organizations struggle to use them successfully. Here are several traps to avoid as your requirements analysts begin to apply the use case technique.
Trap #1: Use cases that users don't understand
Use cases are a way to represent user requirements, which describe what the user needs to be able to do with the product. Use cases should focus on tasks a user needs to accomplish with the help of the system, so they should relate to the user's business processes. Your users should be able to read and review use cases to find possible problems, such as missing alternative flows or incorrectly handled exceptions. If users cannot relate to use cases, there's a problem. Perhaps they're written too much from a technical, rather than business, perspective.
Trap #2: Too many use cases
When analysts are generating scores or hundreds of use cases, something's wrong. This normally means that the use cases are written at too low an abstraction level. Each use case should be general enough to cover several related scenarios on a common theme. Some of these will be success scenarios, and others represent conditions in which the use case might not succeed -- exceptions. If you are caught in a use case explosion, try moving up the abstraction level to group together similar use cases, treating them as alternative flows of a single, more abstract use case.
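To make the consolidation concrete, here is a minimal sketch (the use case names and grouping are invented for this illustration, not taken from any real project) of folding several too-specific use cases into one more abstract use case whose variations become alternative flows:

```python
# Hypothetical example: several low-level "use cases" that all serve one
# user goal are folded into a single, more abstract use case.

# A small "use case explosion": each variation written as its own use case.
low_level_use_cases = [
    "Search for a book by title",
    "Search for a book by author",
    "Search for a book by ISBN",
]

# Moving up one abstraction level: one use case, several alternative flows.
abstract_use_case = {
    "name": "Search for a book",
    "normal_flow": "Search for a book by title",
    "alternative_flows": [
        "Search for a book by author",
        "Search for a book by ISBN",
    ],
}

# The same three scenarios survive, but as one use case instead of three.
scenario_count = 1 + len(abstract_use_case["alternative_flows"])
print(scenario_count)  # 3
```

The point of the sketch is that no scenarios are lost by moving up a level; they are simply repackaged under a common theme.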
Trap #3: Overly complex use cases
The general guideline is that the number of steps in the normal flow of a use case should not exceed approximately a dozen. I once read a use case with nearly 50 steps in the normal flow. The problem was that the "normal flow" included many branching possibilities and error conditions that could arise, along with how to handle them. That is, the normal flow actually included alternative flows and exceptions. A better strategy is to pick a simple, default, well-behaved path through the use case and call that the normal flow. Then write separate alternative flows to cover variations on this and exception flows to describe the error conditions. This gives you a use case with multiple small packages of information, which is much easier to understand and manage than a huge use case that tries to handle every possibility in a single flow description.
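The structure described above can be sketched as follows (a hypothetical example; the use case name, steps, and flow names are invented for illustration). The normal flow is a short, well-behaved path, while variations and error conditions live in their own small flows, and the roughly-a-dozen-steps guideline becomes an easy check:

```python
# Hypothetical sketch: one use case as several small packages of
# information rather than one huge branching flow. All names are invented.

use_case = {
    "name": "Place an order",
    # A simple, default, well-behaved path -- deliberately kept short.
    "normal_flow": [
        "User selects items to order",
        "System displays the order total",
        "User confirms the order",
        "System records the order and reports success",
    ],
    # Variations on the normal flow get their own alternative flows.
    "alternative_flows": {
        "Change item quantity before confirming": [...],
    },
    # Error conditions are described separately as exceptions.
    "exceptions": {
        "Item is out of stock": [...],
        "Payment authorization fails": [...],
    },
}

MAX_NORMAL_FLOW_STEPS = 12  # the rough guideline from the article

# A 50-step "normal flow" would fail this check and signal that
# alternatives and exceptions are hiding inside it.
assert len(use_case["normal_flow"]) <= MAX_NORMAL_FLOW_STEPS
```

Each flow stays small enough to read, review, and test on its own, which is the practical payoff of splitting the use case this way.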
Trap #4: Describing specific user interface elements and actions
Write "essential" use cases that describe the interactions between the user and the system at an abstract level, without incorporating user interface specifics. The use case description should not include a screen design, although simple user interface prototypes can be valuable to facilitate the use case exploration. I don't even like to hear terminology in the use case that alludes to specific user interface controls. Saying "User clicks on OK" implies a GUI with a mouse and buttons. But what if it would make more sense to use a touch screen or speech recognition interface? Imposing premature design constraints in the use case can lead to a suboptimal design, unless you're adding a new capability to an existing application where the screens already exist.
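The contrast between a UI-bound step and its "essential" equivalent can be shown side by side (the wording of both steps is invented for this illustration):

```python
# Hypothetical contrast between a concrete, UI-specific use case step
# and the "essential" form of the same interaction.

ui_specific_step = "User clicks the OK button in the confirmation dialog"
essential_step = "User confirms the request"

# The essential form mentions no mice, buttons, or dialogs, so it still
# holds if the designers later choose a touch screen or speech interface.
for ui_term in ("click", "button", "dialog", "mouse"):
    assert ui_term not in essential_step.lower()
```

Reviewing use case text for words like "click," "button," or "screen" is a quick way to catch premature design constraints creeping in.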
Trap #5: Not using other requirement models
Analysts who begin employing use cases sometimes seem to forget everything else they know about requirements specification. Use cases are a great aid for exploring requirements for interactive systems, kiosks and Web sites. However, they do not work as well for event-driven real-time systems, data warehouses or batch processes.
Avoid the temptation to force fit all of your functional requirements into use cases. Supplement the use case descriptions with a detailed list of functional requirements, nonfunctional requirements, graphical analysis models, prototypes, a data dictionary and other representations of requirements information. Use cases are valuable in many situations, but add them to your analyst toolkit instead of replacing your current tools with them.
About the author: Karl E. Wiegers is Principal Consultant with Process Impact in Portland, Ore. He is also the author of More About Software Requirements: Thorny Issues and Practical Advice; Software Requirements, 2nd Edition; Peer Reviews in Software: A Practical Guide; and Creating a Software Engineering Culture.