Imagine a parallel universe where idealized development occurs. All features selected for a software development project are reasonable, stable, and free of both contradiction and risk. Each stage of development fully delivers all the features that were planned for it. Thus, estimates are perfect predictions, and there are no surprises.
Returning to our universe, reality plays out rather differently. Software development is a multivariable challenge. Estimates are estimates, not predictions, and there are many surprises. Iterative and incremental development is motivated by the need to elicit feedback, reduce uncertainty and offset the common tendency to underestimate large tasks.
Prioritizing business value
The focus of an iteration and the content of its corresponding increment are often described in terms of priority. Although one aim of an incremental approach is to reduce the time to value, priority is not normally a measure of urgency (although it is sometimes mistaken as that). Priority is considered to be a measure of value, so that the time to value in development describes when a piece of software reaches a state that can be said to be useful -- ideally before it is actually considered complete.
Even if stakeholders insist, "But they're all high-priority requirements!" not all requirements will have the same value. If everything is important, then nothing is important. It therefore makes sense to focus development effort on the higher-priority items ahead of the lower-priority ones. Priority is typically rated on a simple scale (e.g., high to low, 1 to 5, MoSCoW), but in principle it can be expressed as a global ordering of all requirements -- this is trickier, but such relative placement can unblock stakeholders who insist that everything is high priority.
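The difference between a coarse priority scale and a global ordering can be sketched in a few lines. The feature names and ranks below are invented for illustration; the point is that a total ordering forces a decision where a label-based scale cannot.

```python
# A minimal sketch (hypothetical backlog) of moving from a coarse priority
# scale to a global ordering that breaks the "everything is high" deadlock.

backlog = [
    {"feature": "export to PDF",  "priority": "high"},
    {"feature": "single sign-on", "priority": "high"},
    {"feature": "audit logging",  "priority": "high"},
]

# Coarse scale: sorting by label alone cannot distinguish the items.
by_label = sorted(backlog, key=lambda item: item["priority"])
assert all(item["priority"] == "high" for item in by_label)

# Global ordering: stakeholders assign each item a unique rank instead,
# so the sequence of development falls out directly.
ranks = {"single sign-on": 1, "audit logging": 2, "export to PDF": 3}
by_rank = sorted(backlog, key=lambda item: ranks[item["feature"]])
print([item["feature"] for item in by_rank])
# -> ['single sign-on', 'audit logging', 'export to PDF']
```

The discipline is not in the code, of course, but in the conversation that produces the ranks: no two items may share a position.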
But what is the role of iteration from this point of view? Isn't development now simply a matter of working through the backlog of requirements in order of descending priority? Why do we need iterations? Because there are surprises, because we need to be able to get feedback on progress and on substance, and because priorities are dynamic rather than fixed.
For an iterative lifecycle, time is quantized: It comes in boxes. To be precise, iterations should be treated as timeboxes to allow a regular development heartbeat against which feedback and learning become meaningful. Iterations should ideally all have the same duration, although there is obviously a case for changing the duration if a team reckons iterations are either too short or too long. Somewhat counterintuitively, when a team reckons iterations are not long enough, it probably needs to consider making them shorter rather than longer, so as to increase feedback rather than increase scope.
As for the ideal iteration length, advice varies, but the general consensus appears to be between one and six weeks, with two to four weeks (or one month) being the most popular. XP, in its second edition, describes a weekly cycle. Classic Scrum favors 30 calendar days per sprint. The Eclipse Foundation uses six-week iterations. Any longer than six weeks and it is likely that the sense of rhythm diminishes, focus wanders and feedback becomes less visible. In some cases, particularly for short projects, the question of iteration length should also take into account the number of iterations. A four-month project is probably better off with eight or nine two-week iterations than four monthly iterations.
Iterations with elastic rather than fixed deadlines are less likely to give useful feedback on how much progress a team is making. They are also likely to mess up the comfortable rhythm of demonstrations and kick-off meetings, which often involve arrangements with stakeholders whose diaries are already filled with meetings and uncertainty. Elastic end dates dilute the meaning and significance of iterations, which become just another abstract milestone in the diary that comes and goes.
Although it is popular for teams to claim to be practicing agile development, many fail to pass even the simplest test of iterative development. The first part of the Nokia Test for Scrum, for example, focuses on iterative development. It includes the stipulation that a team must be using iterations timeboxed to less than six weeks in duration.
Accounting for risk
Given the importance of business value, does this mean that the ideal sequence of development through iterations is defined solely in terms of business value? This may be the ideal approach, but it is not necessarily the most practical one. It suffers from the potential failure mode of having a team become "customer-driven," which is often a euphemism for a form of headless-chicken development where there is no clear vision of what needs to be built or how: The customer keeps demanding features, and the development team races to catch up. The resulting development is risky and messy, and with the possible exception of résumé writing by team members, there is very little about it that can be described as agile.
As a principal driver, business value is a sound one, but it should not be the only driver. It needs to be tempered with a sense of responsibility and appropriate wisdom. One aspect of appropriate wisdom is the understanding of business value. Although business-focused stakeholders are in a better position to assess this than technically focused developers, that does not make them instant experts. There is still scope for learning when it comes to understanding the relationship between business value and software features. I still find many business stakeholders who fail to distinguish between want and need. And it is all too easy to become distracted by the pursuit of detailed features to the exclusion of the bigger picture, the actual desired outcomes.
Another aspect of appropriate wisdom is development skill. The development team is in the best position to assess development implications and duration, but this still takes learning. The most relevant form of learning in this context is related to risk. Risk is related to uncertainty, not just hard work or complexity -- something can be known to involve a lot of work, but that it takes time and effort is unsurprising. Risk is about having something jeopardize the viability of delivery, product or company. Uncertainty can arise from any one of the question words: how (technology, architecture), what (requirements), why (business case, individual motivation), when (change, delivery), who (developers, stakeholder roles and responsibilities), and where (offshore, near-shore, in-house).
In this sense, much risk relates to things that we don't know, and one of the best ways of dealing responsibly with things we don't know is to learn about them. Much risk can be reduced by taking advantage of the feedback-supported learning cycle inherent in iterative development. Of course, there are risks that are beyond a team or a company's control, but being unable to distinguish these is perhaps an even greater risk.
Uncertainty should decrease with the passage of time (knowing something is done), but it will increase with displacement in time (detailed planning far in advance). It is this issue of displacement that helps to explain why a waterfall process tends to accumulate risk towards its tail end rather than reducing it near the beginning, in spite of its aim to do just the opposite. Risk needs to be followed through, not simply identified and talked about. Given this, it makes sense that an iteration should offer not only an increment or a clarification but also an identifiable reduction in risk and uncertainty.
However, using risk as a development driver is not about ignoring business value. It is complementary rather than contradictory. It can help to introduce a more meaningful partial ordering over features that are otherwise prioritized as equal. Risk can be pulled forward from the end of the development cycle. On the other hand, being risk-driven does not necessarily mean attacking all risk up front. It may make sense to look for some easy wins early on, giving a team an opportunity to gel and get in the swing of the project, before tackling some of the riskier items. There are other reasons to defer certain risks, but the take home message is that without making risk an explicit consideration in development it is likely that it will become an implicit problem.
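One way to picture risk as a complement to value is as a tiebreaker: among features of equal business value, tackle the riskier ones earlier, pulling risk forward from the tail of the development cycle. The items and scores below are invented purely to illustrate the ordering.

```python
# A hedged sketch (invented data) of a partial ordering tempered by risk:
# highest business value first; within equal value, highest risk first,
# so that uncertainty is confronted early rather than deferred.

items = [
    {"name": "report layout",   "value": 3, "risk": 1},
    {"name": "payment gateway", "value": 3, "risk": 5},
    {"name": "branding",        "value": 1, "risk": 1},
    {"name": "data migration",  "value": 3, "risk": 4},
]

# Negate both keys so that sorted() puts larger values and risks first.
schedule = sorted(items, key=lambda i: (-i["value"], -i["risk"]))
print([i["name"] for i in schedule])
# -> ['payment gateway', 'data migration', 'report layout', 'branding']
```

As the text notes, this need not be applied mechanically: a team may deliberately schedule an easy win or two before the riskiest items, but the risk dimension is at least explicit rather than an implicit problem.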
But beware of false risks. For example, big use cases are an artificial risk. You should be able to single out specific, nameable scenarios, or split a use case whose goals can be separated into distinct and disjoint ones. Use cases with a laundry list of variations and large bodies of description are worth rethinking -- go back and recast them using index cards. The risk being addressed is easily handled by restating the problem; it is not a risk related to the solution or the business.
About the author: Kevlin Henney is an independent consultant and trainer based in the UK. His work focuses on software architecture, patterns, development process and programming languages. He is a coauthor of A Pattern Language for Distributed Computing and On Patterns and Pattern Languages, two recent volumes in the Pattern-Oriented Software Architecture series. You may contact him at firstname.lastname@example.org.