One way to think about the software delivery process is as a set of translation activities: from idea to requirements, from requirements to design, from design to code, and from requirements to tests. That time spent translating doesn't really add value. Instead, it says the same thing in a different way.
Translations are also a bit like the telephone game played in the schoolyard. Just like that game, they lead to information loss.
You may not have exactly that sort of translation activity in your software process, but you'll often see the same redundant, repeated work in other forms, even using modern methods.
The method I describe today will find the accidental practices (steps that seem to happen over and over again while adding little value) and then streamline, automate or eliminate them.
Step one: What are the technologists doing?
We'll start by looking at what the technical team is actually spending time doing. The time they spend figuring out what to build, building that thing, and then inspecting the thing they have built is essential to the work. The team might also be spending time on activities that don't add value to the end product, such as manually moving code to "refactor" it, building components to drive the software, doing plumbing or wiring tasks, or waiting for a build or test environment to "spool" up. All of these extraneous tasks can be streamlined, eliminated or automated.
You might start with a value stream map, formally defining the process, or you could simply look for waste. Three common places to streamline are: build/deploy, writing code and building test automation.
Most software teams today have automated the checkout, build and unit test processes. They may also have automated the deployment of the build. If your team hasn't already done that, and you find yourself spending time "doing" builds, you might want to look into Jenkins or CruiseControl. Likewise, programmers working in an older development environment may take several steps to actually achieve a build, all of which are candidates for automation.
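Before adopting a full build server, a team can wrap a legacy multi-step build in a single script. Here is a minimal sketch in Python; the step commands are placeholders you would swap for your environment's actual checkout, compile and test commands:

```python
import subprocess

# Hypothetical build steps for a legacy project; replace each command
# with the real checkout/compile/test invocations for your environment.
BUILD_STEPS = [
    ["echo", "checking out source"],
    ["echo", "compiling"],
    ["echo", "running unit tests"],
]

def run_build(steps):
    """Run each step in order; stop at the first one that fails."""
    for step in steps:
        result = subprocess.run(step, capture_output=True, text=True)
        if result.returncode != 0:
            return False, step  # report which step broke the build
    return True, None

ok, failed = run_build(BUILD_STEPS)
print("build passed" if ok else f"build failed at {failed}")
```

Once every step lives in one script like this, pointing a tool such as Jenkins at it is a small change rather than a project.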
Finally, there is the process of writing business-level tests, commonly known as Behavior Driven Development (BDD). The tools to do BDD are just starting to mature and gain widespread use. In most cases, human beings write something like "given that I am a customer when I enter my username and password and click submit..." They turn that structured English over to programmers, who then need to create functions called I_Am_A_Customer, or I_Enter_My_Username and so on; these sorts of things could be automatically generated from the "test." Some second-generation tools, like Raconteur, actually generate the stub programs automatically from the source at the touch of a button.
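The stub-generation idea is mechanical enough to sketch in a few lines. This is not how any particular tool works internally, just an illustration of turning a step's English text into a function stub:

```python
import re

def step_to_function_name(step_text):
    """Turn a BDD step like 'I am a customer' into a stub name
    like 'I_Am_A_Customer'."""
    words = re.findall(r"[A-Za-z]+", step_text)
    return "_".join(w.capitalize() for w in words)

def generate_stub(step_text):
    """Emit an empty function a programmer can fill in later."""
    name = step_to_function_name(step_text)
    return f"def {name}():\n    raise NotImplementedError\n"

print(step_to_function_name("I am a customer"))      # I_Am_A_Customer
print(step_to_function_name("I enter my username"))  # I_Enter_My_Username
```

Run over a whole feature file, a loop like this produces every stub the programmers would otherwise type by hand.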
Step two: How do we keep our systems in sync?
Think for a moment about all of your artifacts relating to the software other than the code: documents, test cases, requirements, plans, design documents, etc. The one thing they all have in common is that they do not change automatically as reality changes around them. This incurs one of two costs: the pain of a document that is misleading and out of date, or, alternatively, the cost of updating everything to keep it all current. For that matter, consider the humble software test that is up to date. Perhaps it explains what the software should do, but it may not explain why the software should do it.
Many experts and gurus have proposed ways to overcome this problem. You might, for example, keep everything tied together in one technology stack and force every check-in of code to be tagged with the requirement it addresses; there are certainly tools to help track that traceability. Personally, I find that if the tools are Web-based, it is easy enough to provide links in the tests back to the stories they cover, and vice versa.
That gives us the code in version control, stories in a Web-based application, documentation in a wiki, and business-level checks in another tool, something like SpecFlow, Cucumber or FitNesse. When people are modifying one system, they follow the links and modify the other systems as needed. (We let stories grow stale; the real specification is the executable tests, the documentation we decide to keep, and the knowledge held by the team.)
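The "artifacts pointing at each other" discipline can even be checked automatically. Here is a minimal sketch, assuming each artifact records the IDs it links to (the story and test IDs are hypothetical):

```python
# Hypothetical snapshot of two artifact systems, each recording
# the IDs of the artifacts it links to.
stories = {"STORY-42": {"links": ["TEST-7"]}}
tests   = {"TEST-7":  {"links": ["STORY-42"]}}

def broken_links(stories, tests):
    """Report story->test links whose test doesn't point back."""
    broken = []
    for sid, story in stories.items():
        for tid in story["links"]:
            if sid not in tests.get(tid, {}).get("links", []):
                broken.append((sid, tid))
    return broken

print(broken_links(stories, tests))  # empty when every link is reciprocal
```

A nightly check like this catches the drift early, instead of letting a dead link announce itself months later.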
By keeping the artifacts light and pointing at each other, we enable a sort of lightweight Application Lifecycle Management, which you can extend outward to other areas like operations and portfolio management.
Step three: What are people working on? Where does the work come from?
Here's a final idea: Create a single unified view of the work. This unified view covers every task that takes more than two hours, from bug fixes to maintenance and new development. Research, production support, tooling, you name it: it all goes on the board. Here is a Kanban board from one of my recent clients:
Consider the case of someone coming in with an emergency. We have to figure out what will get dropped or moved to allow the team to take on a new task. Making the work visible means you have to consciously pick what to stop working on and what deadlines to move. If you make the work visible and count the number of pieces in each step, you can also find all kinds of information about what is actually going on, like where the bottlenecks are in the process and how to shift responsibilities to improve flow. If this sounds appealing, you might want to look into using a Kanban board.
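Counting the pieces in each step is simple enough to do from a board snapshot. A minimal sketch, with a made-up set of cards and column names:

```python
from collections import Counter

# Hypothetical snapshot of a Kanban board: one entry per card,
# tagged with the column it currently sits in.
cards = ["To Do", "To Do",
         "Dev", "Dev", "Dev", "Dev",
         "Test", "Test", "Test", "Test", "Test",
         "Deploy"]

counts = Counter(cards)
# The column where work piles up is a likely bottleneck.
bottleneck = max(counts, key=counts.get)

print(counts)
print("likely bottleneck:", bottleneck)
```

In this made-up snapshot the cards pile up in Test, which would suggest shifting people toward testing to improve flow.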
From idea to improvement
This article was designed as an overview of ideas to improve the flow in your software delivery process. To recap: look at what the technologists are doing, especially the repeatable tasks that can be automated. Then look at the artifacts of the software process, and how to link them together to eliminate confusion and keep staff productive. Finally, find ways to make the entire flow of work visible.