Agile before agile was cool
Long before the Agile Manifesto existed, I was a tester on a mainframe system with a code base written in COBOL and C. One typically does not think of such systems as being particularly amenable to agile processes, but we had evolved a number of approaches that we would later recognize as "agile" in the sense of the Agile Manifesto.
One example of such an ancient agile approach was our build system. A typical mainframe commit/build/deploy cycle takes a long time. Not ours. We had a system in place such that every code commit to our (primitive) source code repository generated an email to the whole staff. We did have a dedicated build person, and that person had a lot of automation at his disposal. Upon notification that some feature or fix had been committed to the code base, our build person would kick off a build and watch the process. We had automation in place to check the compiler logs for errors and warnings, so the build person knew immediately if a compiler error occurred. And, unusual for the time, we had set up the compiler targets to be in the system test environment, so at the end of the build, we had a working test system in place with the new code.
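The log-checking step can be sketched as a small shell script. This is only an illustration, not the original automation: the file name and the demo log content are invented, since the real checks ran against mainframe compiler listings.

```shell
#!/bin/sh
# Hypothetical sketch of the post-build compiler-log check.
# A real build would produce the log; here we fake one so the
# sketch runs standalone.
LOG=compile.log
printf 'data division ok\nWARNING W1234: unused data item\n' > "$LOG"

# Flag the build immediately if the compiler emitted errors or warnings.
if grep -Eiq 'error|warning' "$LOG"; then
    status="NOT CLEAN"
    grep -Ein 'error|warning' "$LOG"
else
    status="CLEAN"
fi
echo "Build $status: checked $LOG"
```

The point is the timing: the check runs as part of the build itself, so the build person knows the moment a compile is dirty, instead of a tester discovering it later.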
In practice this meant that I could be testing a new feature within minutes, sometimes even within seconds, of having that feature committed, with full confidence that the compiler output had been checked for errors, and that any failures I found would be genuine code defects rather than build problems. We had made the handoff from development to test as quick and efficient as possible. Had I known then what I know now, I would probably have taken the last few steps and made that process a real Continuous Integration (CI) system, but this was about a decade before eXtreme Programming (XP) would become widely known.
At one point I was testing a system that required installation on very specific hardware. This system required a smoke test before being released to the test team for further examination. The build was a slow process with many manual steps. Also, failed installations on the dedicated test hardware were destructive and required significant manual attention before the hardware was ready for another installation attempt. Between the slow build and the need for manual intervention for failed installations, we could run only three smoke tests per week.
The build person and I changed that. The build person started automating the various steps of creating an installer for the system, so that it no longer required so much manual work, and the time between builds grew shorter and shorter. At the same time, I built a disk imaging system from open source components, so that when an installation failed, instead of manually cleaning up the test system, I simply re-imaged its disk from a clean pre-install image of the test hardware. Where we had once managed three smoke tests per week, we could now run thirty.
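The re-imaging idea can be sketched with `dd`. In this sketch an ordinary file stands in for the test machine's raw disk device, and all of the file names are invented; it is not the actual open source tooling we used.

```shell
#!/bin/sh
# Sketch of the capture/restore imaging idea. A plain file stands in
# for the test machine's disk device (e.g. /dev/sda); names are invented.
DISK=fake-disk.img        # stand-in for the raw disk
IMAGE=clean-preinstall.img

# One-time step: capture a clean pre-install image of the disk.
printf 'clean pre-install state' > "$DISK"
dd if="$DISK" of="$IMAGE" bs=4k 2>/dev/null

# A failed installation leaves the disk in a destroyed state...
printf 'half-installed garbage' > "$DISK"

# ...so instead of manual cleanup, restore the known-good image.
dd if="$IMAGE" of="$DISK" bs=4k 2>/dev/null
cat "$DISK"
```

On real hardware the source and target would be block devices and `dd` would need appropriate privileges; the point is that restoring a known-good image turns an unpredictable manual cleanup into one fast, repeatable step.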
Unfortunately, this was not welcome news. When the test team reported three times per week that the build was broken, it was possible to ignore the fact that the code base was of poor quality. When the test team reported thirty times per week that the build was broken, it was impossible to ignore the fact that the code was of very poor quality. Speeding up a handoff like a smoke test is a great way to expose code quality issues.
The ideal situation for an agile team is to have everyone on the team working together at the same time in the same room, surrounded by big visible charts and information radiators. Since everyone can see and hear everything that everyone else on the team is doing, handoffs become a fairly trivial matter. But in recent years I have been working on completely distributed remote teams, and distributed agile teams face unique challenges when it comes to handoffs.
I have seen two effective approaches to handling handoffs on remote agile teams. One approach is to use a wiki. In this situation, all of the team's stories, tasks, and other documentation are wiki pages on the same wiki. We built a custom "agile plug-in" for this wiki that allowed us to manage the status, progress, and visibility of all of the wiki pages using tags, links, and other metadata available on the wiki platform. Everyone used the same pages, tagged for the same iteration of the same release, and this was effective in reducing the overhead of handoffs. Before that we had managed each feature as a separate project, and we suffered from the lack of visibility and the expense of the handoffs as each feature was merged back into the main code base.
The other effective approach is to use a dedicated agile issue-tracking system; Atlassian, Rally, Thoughtworks and others sell such tools. In this situation all of the team's stories, tasks, tests, and so on are manifested as software "cards" on a software "board," and the team can manipulate those cards as they see fit. Every task, whether it is coding a story, making an estimate, or running a test, has an equal place on the board, and the whole team can watch each issue as it changes status.
In both of these situations, a handoff becomes as simple as changing a tag on a wiki page, or dragging a "card" to a new place on the board.
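As a toy illustration of the tag-flip style of handoff: here a "page" is just a file whose state lives in its tags, and the handoff from development to test is one tag swap. The tag names and file layout are invented for illustration, not the actual wiki plug-in.

```shell
#!/bin/sh
# Toy model of a tag-driven handoff. A wiki page's state lives in its
# tags; all tag names here are invented for illustration.
PAGE=story-142.tags
printf 'iteration-12\nrelease-3.1\nstatus-in-development\n' > "$PAGE"

# The handoff from development to test is just swapping one tag.
sed 's/^status-in-development$/status-in-test/' "$PAGE" > "$PAGE.tmp" \
    && mv "$PAGE.tmp" "$PAGE"

cat "$PAGE"
```

Because the page itself never moves, nothing is lost in the exchange: the whole team keeps watching the same artifact as its status tag changes.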
More information, better information
Every time a handoff happens, information is exchanged. The more information everyone has, the better decisions everyone can make about the work and about the process for that work. So the faster the handoffs, the more information everyone has. The more these handoffs become routine, even trivial, the more likely it is that everyone on the team shares the same information. And this, more than anything, is what makes us agile.
About the author: Chris McMahon is a software tester and former professional bass player. His background in software testing is both deep and wide, having tested systems from mainframes to web apps, from the deepest telecom layers and life-critical software to the frothiest eye candy. Chris has been part of the greater public software testing community since about 2004, both writing about the industry and contributing to open source projects like Watir, Selenium, and FreeBSD. His recent work has been to start the process of prying software development from the cold, dead hands of manufacturing and engineering into the warm light of artistic performance. A dedicated agile telecommuter on distributed teams, Chris lives deep in the remote Four Corners area of the U.S. Luckily, he has email: email@example.com.