When you’re starting a project from scratch, it’s easy to keep your development iterations and your release cycles in sync. For example, if you work in two-week sprints, then it’s easy to say, “We deploy to production every two weeks at the end of each sprint.” If you and your team can do that, then read no further! However, if your team also has to contend with things like documentation creation (for rollout, training, or support), customer communications, and complex production deployments, keeping iteration and release cycles in sync becomes increasingly difficult. In this article, we look at some strategies for dealing with these complexities.
The ever-growing number of operational-readiness tasks
When you first start a project, you typically have a small number of developers and business users, and it’s very likely the developers and users are working together to do all the necessary testing, documentation, and training. Things move quickly, and it’s very easy to get working software into production. In this well-oiled development machine, releases look like this:
Figure 1: Releases occurring at the end of each iteration.
In the iteration (represented above as sprints), you do all the work required to “complete” a story. At the end, you move that completed code into production. However, over time, you’ll find that certain tasks start to expand. Specifically, I often see the following operational support activities grow over time:
- Testing activities: As the code base grows and the software becomes more and more complex, your ability to rely solely on automated unit tests for test coverage will diminish. Typically, teams add professional testers into the mix when this occurs, or they designate someone from the development team to take on that role. And for a while, that’s sustainable. But again, as things continue to grow in complexity, often it becomes impractical to keep all of that testing work within the iteration. There are a number of strategies for dealing with this, and we explored a few of the tradeoffs in the article Agile software testing strategies for managers.
- Training activities: While not applicable to all teams and products, some teams have users they need to train. In some cases that’s just an online knowledge base and some Web-based training. In other cases, it can be full-blown classes with instructors. Regardless, if your team provides training, then some time has to be set aside to update documents, videos, and other materials. In some cases, instructors need to be briefed and trained on the new features. The more complex the system gets over time, the more time these activities can take.
- Support/documentation activities: Similar to training, for many teams there are steps that need to be taken to brief or train internal support staff on the changes moving to production. This might also include updating internal support documentation, support workflows, or other support materials. The more new features are implemented in a given release, the more time these activities can take.
- Customer communications: Depending on the formality of the customer contract, defined service-level agreements, and the level of communication clients are used to regarding upgrades and product updates, certain teams develop commitments around communications. These are sometimes simple emails or bullet-point release notes, and other times they are “baked” into the product via a release notification popup or other automated notification on login. In addition to crafting and staging these notifications, marketing teams often craft communications to potential customers highlighting the new features. These announcements might also include broader promotional or public relations campaigns, all of which need to be managed and coordinated.
- Release preparation activities: Finally, for some products, deployment isn’t as simple as pushing a set of binaries or files out to a production server. In some cases, it’s fairly involved. In these cases, a single deployment can occur over the course of days or weeks and can require regular care and feeding as the team monitors how the deployment is going. Preparing for these deployments can take time and energy as the team works to make sure the final “package” is ready.
Moving operational-readiness activities into a parallel track
The most common answer to all the above ballooning operational-readiness activities is to move them into a parallel track. In this model, the work completed in iteration N is made operationally ready in iteration N+1, at the same time new features are being developed. It commonly looks something like this:
Figure 2: Operational-readiness activities being worked concurrently in the next iteration.
In this model, it’s not uncommon for the team to split their attention, devoting some capacity to continuing to develop new features while also supporting the transition of the previous iteration’s code out the door to production. There’s a strong desire in this model to keep as much of the testing and documentation work in the originating iteration as possible. This lowers the switching cost on the development team, and often produces higher-quality results. Whatever can’t realistically be completed in the original iteration gets completed in the following release iteration as part of those operational-readiness activities.
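To make the one-iteration lag in this model concrete, here is a minimal sketch (the sprint names and the helper function are illustrative, not from the article) that maps each iteration’s feature work to the release in which it actually ships:

```python
# Sketch: in the parallel-track model, features built in iteration N are
# made operationally ready (testing, docs, training) during iteration N+1,
# and only then released. Names and the helper are illustrative.

def release_schedule(iterations):
    """Pair each iteration's feature work with the iteration at whose end
    it is released, assuming a one-iteration readiness lag."""
    schedule = []
    for i, name in enumerate(iterations):
        if i + 1 < len(iterations):
            # Built in iteration N, ships at the end of iteration N+1.
            schedule.append((name, iterations[i + 1]))
        else:
            # The most recent iteration is still in the readiness pipeline.
            schedule.append((name, "next iteration"))
    return schedule

for built, ships in release_schedule(["Sprint 1", "Sprint 2", "Sprint 3"]):
    print(f"Features from {built} release at the end of {ships}")
```

The point of the sketch is simply that the release cadence stays every-iteration, but each release carries the previous iteration’s features rather than the current one’s.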
Over time, as the code base continues to grow, as features become more complex, and as the established client base grows, you’ll find that teams sometimes evolve into the following pattern of merging multiple iterations into one release:
Figure 3: Deploying multiple iterations in a simple release cycle.
While this is an unfortunately common practice for larger teams with a more “involved” problem space, it should be avoided. Taking this step removes all the advantages of working in small batches – and working in small batches is what Agile development is all about. Over time, this starts to look more and more like traditional development processes, with large bloated releases. It won’t matter any longer that the development team is working in two-week iterations, because the rest of the organization is back to monthly, quarterly, or annual releases.
Keeping releases small and frequent while dealing with complexity
Instead of pushing multiple iterations into one release, find ways to add additional process swim lanes to manage the evolving complexity. For example, if I knew that every release would require additional testing and training activities, I might choose to structure the work in the following way:
In the above model, we are still releasing production code every two weeks, but we are adding an additional two weeks of work activities to account for the work related to testing, documentation updates, and training. By creating a beta, we might also have select customers perform testing alongside our internal testers to add even more value to the work stream than we could have with releases going to production at the end of each iteration.
You don’t need to add a beta program. You don’t even need to call this new two-week process insertion a beta release. That’s just a convenient example of one possible model for adding an additional swim lane. The important takeaway is that you’re preserving the small-batch deployment while making sure the team has enough time to get everything accomplished to make the release successful. With each new swim lane you’re adding coordination overhead and complexity, but if you really have a problem where this is the necessary solution, then you’re already dealing with that complexity. You might also choose to mix and match different activities across any new swim lanes you add.
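As a way of reasoning about what each lane is doing at any point in time, here is a small sketch. The lane names, the two-week cadence, and the single intermediate lane are all illustrative assumptions, not prescriptions from the article:

```python
# Sketch: one intermediate "beta/readiness" swim lane inserted between
# development and production. Given a calendar week, report which sprint's
# work each lane holds. Lane names and the two-week cadence are assumptions.

def lanes_for_week(week, sprint_len=2):
    """For a 1-indexed calendar week: development works sprint N while the
    beta lane hardens sprint N-1 and sprint N-2 is live in production."""
    sprint = (week - 1) // sprint_len + 1
    return {
        "development": f"Sprint {sprint}",
        "beta/readiness": f"Sprint {sprint - 1}" if sprint > 1 else "idle",
        "production": f"Sprint {sprint - 2}" if sprint > 2 else "not yet live",
    }

# In week 5 (mid-Sprint 3): dev builds Sprint 3, the beta lane hardens
# Sprint 2, and Sprint 1 is live in production.
print(lanes_for_week(5))
```

Adding a second readiness lane would simply shift the production column back one more sprint; the small-batch cadence itself is unchanged.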
Each product and company is unique, and likely solving a unique problem, so try a couple of different scenarios to see which structure works best for you and your team. Regardless of how you end up structuring it initially, make sure everyone involved is a part of the retrospective at the end of each iteration so you get their feedback on what worked and what didn’t before you make changes.
About the author:
Mike Kelly is a partner at DeveloperTown, a venture development firm. He regularly writes and speaks about topics in software testing and Agile development, but spends most of his time working with teams to deliver ridiculously high quality solutions faster than they could without rigorous testing practices. You can learn more about Mike and his other works on his website.