Does your team ever suffer from these kinds of problems?
- Delivered systems are missing entire features.
- Debugging problems in the field is difficult, because the people in the field are running an older version of the software.
- Pushing fixes to older versions of the software is risky because there is only one "branch" of the code.
- There are multiple branches, but tracking the branching (and merging changes) is exceedingly painful.
- Building and deploying a release is a multi-step, perhaps buggy process involving lots of command-line commands.
- Testing upgrades and migrations of the software is challenging, or customers have bugs in production with upgrades.
- Projects have "scope creep" that does not result in new budget, time, or changes to the test plan.
If your answer is no, well, you're good. You have your configuration management well under control.
On the other hand, if your answer is yes, well, read on.
Configuration management in a nutshell
The simplest way I can find to explain configuration management is to call it the set of tools and practices designed to make sure all the components of the software integrate properly in the right versions at the right time. In other words, it's a solution for the problems outlined above. (For a more comprehensive explanation, consider my other article "Defining configuration management.")
Notice that it is a set of practices and tools. These new practices address very real risks, but they can also add expense, time and overhead into your process. In this article, I will focus on some concrete, specific things you can do to implement "Just enough configuration management."
In the examples below, I assume your software is "built" and "shipped" to customers. If you are developing internal software or, say, a website that is only shipped to one place, you likely won't have some of the problems above and won't need the solutions below. As with anything, let your good judgment prevail.
A fistful of CM techniques
Version control. Open-source tools like CVS, SVN, and Git offer high-quality ways to track versions of your software, while some commercial tools offer support, handle binary files well, or may be more accessible to less technical users. In any event, unless you want a full-time "code librarian" managing zip files with dates, you likely want some version control system. For that matter, get the analysts and testers to check in their work right alongside the code, making it versioned as well; that prevents the problem of running the wrong test script against the wrong build.
A branching strategy. If you want to "tag" a version of the software for release, or want to go do work on an older version of the code without checking in the change to the new version (or vice versa), you'll likely want a branching strategy. Most teams have version control and branching strategies down pat, but do your testers and analysts understand them? If they did, what power would it give them? (You may also want a tag-and-promote strategy to label release builds.)
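For teams using Git, the strategy above might look like the following minimal sketch. The repository, branch, and tag names here are invented for illustration, not a prescription:

```shell
set -e
repo=$(mktemp -d)                 # throwaway repo so the sketch is self-contained
cd "$repo"
git init -q .
git config user.email "dev@example.com"
git config user.name "Dev"
echo "v1 feature" > app.txt
git add app.txt
git commit -q -m "work for the 1.0 release"
# Cut a release branch, so fixes to the 1.x line don't pick up new trunk work:
git branch release-1.0
# Tag the exact code that shipped, so any field build can be reproduced:
git tag -a v1.0.0 -m "Release 1.0.0"
# Meanwhile, trunk development continues independently:
echo "v2 work" >> app.txt
git commit -qam "new feature for 2.0"
git tag --list                    # shows v1.0.0
git branch --list                 # shows release-1.0 plus the trunk branch
```

Bug fixes for the 1.x line get committed on release-1.0 (and merged back to trunk), while v1.0.0 permanently marks what went out the door.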
Push-button builds. If it takes one command to build the software and put the compiled code out somewhere, ready for install, then you can "set it and forget it." If it takes more than one command, well, then you've got somebody who needs to "baby-sit" the build, and you introduce the possibility of error. Ant is a popular open-source build tool.
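As a sketch of what "one command" means in practice: everything from checkout to installable artifact lives in one script. The src/dist layout, the script name, and the simulated "compile" step (a file copy standing in for Ant, make, or your compiler) are assumptions for illustration:

```shell
set -e
workdir=$(mktemp -d); cd "$workdir"   # throwaway project so the sketch runs anywhere
mkdir -p src
echo 'print("hello")' > src/app.py
# --- build.sh: the single command that takes a checkout to a shippable artifact ---
cat > build.sh <<'EOF'
#!/bin/sh
set -e                  # stop on the first error: no half-built artifacts
rm -rf dist && mkdir dist
cp -r src dist/         # "compile" step; a real script would invoke ant/make here
date > dist/BUILDSTAMP  # record when this build was produced
tar -cf release.tar dist
EOF
chmod +x build.sh
./build.sh              # the one command; nobody baby-sits it
ls release.tar
```

Because it is one command, anything (a scheduler, a CI tool, a new hire) can run it, and it runs the same way every time.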
Automatic integration. Once your team has push-button builds, consider building a tool that re-builds the software periodically. You might do a build, wait an hour, then, if a check-in has occurred, build again; you might do it more often. This software should notice if the build fails and tell someone about it, perhaps by email. Most continuous integration (CI) tools have a page you can visit to check build status. They generally also support a plugin architecture to do any automated testing after the build completes. The most common approach to CI is to run all unit tests for that version immediately after each build, but some teams also run customer-facing automated tests.
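One polling pass of such a tool can be sketched in a few lines of shell. The checksum standing in for a version-control revision, the simulated build, and the file names are all assumptions for illustration; a real tool wraps this in a loop with a sleep and calls your actual build script:

```shell
set -e
work=$(mktemp -d); cd "$work"              # throwaway "checkout" so the sketch runs anywhere
mkdir src; echo "code" > src/app.txt
last_built=""                              # revision of the last successful build
rev=$(cat src/* | cksum | cut -d' ' -f1)   # current "revision" of the sources
if [ "$rev" != "$last_built" ]; then       # a check-in happened since the last build
  status=pass                              # run ./build.sh and the unit tests here;
                                           # on failure, set status and email the team
  last_built=$rev
fi
echo "built revision $last_built: $status"
```

The whole trick is in the comparison: no new check-in, no new build, and a failed build is news somebody hears about immediately rather than at release time.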
Push-button deploy. I've worked in organizations where, when the build finished, we had to file a request for a database refresh and wait two days. I have also worked in organizations where, when the build finished, I ran a one-line command to create a new web server inside a virtual appliance server, then waited ten minutes. The second one was better.
Push-button update and restore. Getting a vanilla build is great, but we don't want the team to have to rebuild the same scenarios again and again. With save, restore, and update scripts, you can do a vanilla build and restore to a saved state, or update a "dirty" build at version 2.4 to 2.5. These hooks are also customer-facing features: suddenly you've implemented backup/restore and automatic update!
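Here is a minimal sketch of the save/restore idea, assuming the application keeps its state in a data/ directory; a real system would also dump and reload its database, but the shape is the same:

```shell
set -e
work=$(mktemp -d); cd "$work"     # throwaway environment so the sketch runs anywhere
mkdir data; echo "clean install" > data/state.txt
# save: capture the known-good "vanilla" state in one command
tar -cf vanilla.snap.tar data
# ...testing dirties the environment...
echo "test junk" >> data/state.txt
# restore: back to the saved state in one command
rm -rf data && tar -xf vanilla.snap.tar
cat data/state.txt
```

A tester who can type one command to get back to a known state stops wasting hours rebuilding scenarios by hand.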
Process, reviews and audits. Process is a wrapper around tools, and, sometimes, the defined process isn't followed. You may need someone to periodically check that the process is followed, noting deficiencies and taking corrective action. Audits do not have to be a formal, boring, expensive process; they can be as simple as Management By Walking Around.
How to get there
All of these fixes imply extra work; they will all require a sort of mini-project to get done. Worse, most of them are things that end customers can't see and have a hard time understanding. In my experience, when deadlines are tight, "side projects" rarely get time or attention.
I've seen three ways to get that time. You might assume the work is "just something professionals do," part of the natural process of sharpening the saw. Or, your department might dedicate some percentage of time to infrastructure work. Finally, you could work to convince the business of the value in configuration management and have them "pay" for the story like they might any other feature.
Of the three, I prefer to get the business to pay for CM. The other approaches "add in" additional costs the business does not understand. Yet being able to deploy and debug in one step, to restore to a known data point quickly, to have unit tests catch defects as soon as possible -- these things add concrete value to the business; they increase the amount of software the team can build and decrease the risk of delivering the wrong thing. That means they have direct value for whoever is cutting the paychecks to get the work done. If you can explain that value to the client, they can prioritize that work against features, bug fixes, and other work.
Pick the biggest pain points on the list; compare the cost of fixing them to the value of the fix. Pick one. Find a way to justify the work: as professionalism, as infrastructure work, or by convincing the business of its value. Get the fix scheduled and done; repeat until additional fixes cost more than they are worth.
Good luck. It may be tough out there, but if everything was perfect, we improvement folks wouldn't have a gig, now would we?