When I read articles about Application Lifecycle Management (ALM), I tend to feel a little bit like a character in Peanuts listening to an adult. You know:
"Wha wha wha, wha wha wha wha."
After a fair amount of digging and asking around, I am convinced the problem ain't just me. So I thought I'd take a stroll through the land of ALM and see what could be found, and yes, I ended up with this article.
Just what is ALM?
For our purposes, let's call ALM "Any tools, technologies, or techniques that attempt to connect and maintain connections between activities over the life of a piece of software – from the first glimmer in an executive's eye through system retirement." Notice I said attempt. It turns out that many of the important facts about a software project are never written down, and even those that are can be misinterpreted and misunderstood. The typical project is actually a collection of ideas held in the minds of the people on the project; compressing those ideas into unambiguous code works, but the English around the code is always open to interpretation.
So ALM tools attempt to rationalize the development process, to make it transparent, and to connect, say, a code change to the tests that must run, the deployment steps, and maybe even the trouble tickets associated with that change. Let me give an example:
Say we have an older application; say, a claims payment system for an insurance company. The payment system has grown over time; we use it to determine eligibility, we use it to create reports and write checks, we even extract data from it to populate our data warehouse and to provide eligibility information for our partner programs. Thus, when we make a change to the database, it might impact a dozen reports and a half-dozen other systems – if we are lucky. The design documents, if they exist, are likely out of date before the software ships and, as you probably know, referring to an outdated design document can cause more problems than it solves.
When every requirements document says "do everything the software did before plus this one tweak," it can be easy to lose track of what correct behavior is. Why, I remember one company where we were required to check every document into a "document repository" yet the search functionality was broken. We referred to the tool as "the roach motel" because "documents check in, but they don't check out."
If we are stuck without a history of features, then it's likely we don't really know what correct behavior is. And if you have no history, good luck in testing. Oh, and what about test artifacts, the documents that guide our testing? Are they all in one place? Are they up to date? Do we even know how to find all of them?
In many cases, testing an old, creaky application can be very expensive. So when a request comes in for a small tweak or "maintenance fix," the team retests only the change. Perhaps they do a quick inspection to confirm the other features "mostly appear to work," but that's likely it. It may be a conscious, risk-managed choice, but often it's not really a choice at all: The team wouldn't know how to test the entire application if they wanted to, or a single test cycle with real confidence would take months.
Application lifecycle management, then, is an umbrella term that promises to fix all these problems or, at least, make them less painful, allowing the organization to make rational choices and tying the entire application together, from requirements to design to code, test, and deployment.
It's a nice theory. Sadly, a project is not the sum of all of its documents or even its code. A great deal of the context of a project lives in its participants. So, at best, ALM can decrease the friction on projects by creating a more comprehensive picture. It turns out there are a few different ways to accomplish this; I describe them as cradle-to-grave, version control as the ALM, and the wiki way.
Cradle to grave ALM
When people talk about a five-figure ALM tool, they likely mean the cradle-to-grave approach. In this world, we look at all of the artifacts of the project: the 'what' to build (requirements or stories), any design 'artifacts', the code itself, any test artifacts (scripts, automation, guidelines, etc.) and perhaps some technical documentation. Historically, teams store each of these in different places; some might be in Word documents on a network drive, some are in code, some are in someone's email box, and a great many of them live only in the minds of the technical staff.
By cradle-to-grave ALM, I mean a suite of tools that sits on top of all of this data, organizing it into logical buckets. Ideally, when you want to make a change to a feature, the tool creates some sort of to-do list, and people can log in and check off each step. Requirements approved: Check. Design modified: Check. Code complete: Check. Story tested: FAIL. Code complete again: Check. You get the drift. This tool should also produce summary reports for management that describe what's been done, what needs to be done, and what the work-in-progress inventory is.
This kind of ALM software needs to either monitor and plug in to various other tools – version control, bug tracking systems, and requirements tracking systems – or else replace them with one unified platform. But how will these ALM tools work when the requirements are written on stories pasted on the wall? How will they differentiate a small change that requires minor regression testing from a major update requiring a complete retest? And will this toolset really integrate four different requirements systems, three bug trackers, and two different source code systems? Oh, it'll probably work fine for .Net and Java apps and anything large and popular, but what about older and more esoteric systems? Truth be told, you'll likely be doing dual entry, and that creates the possibility of error.
Version control is the ALM
A few years ago I worked with a company that organized every project into a tree structure, with a /code folder for the code, /test for the tests, and /docs for the requirements, design, and technical documentation. We branched for every release and dated the documents, so to find the changes for the 2/2/07 branch, I'd look for a document named requirements-2-2-07-(changename).doc.
That's the basic idea for the version control approach to ALM. This sounds easy: "Just check in everything. Save emails as documents and check them in if you have to." Yet with story cards you might not have requirement documents at all. Also, the 'diff' feature of version control systems tends to break down when using word processing documents (like Microsoft Word), so you may have to abandon that feature or force the business people to use plain text documents. My experience is that "knowing what changed" is a huge selling point of ALM, yet business people find plain text documents extremely painful.
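The layout and the "knowing what changed" payoff can be sketched with ordinary shell commands. The project name, file contents, and dates below are invented for illustration; the point is only that once requirements live as dated plain-text files next to the code, a standard diff answers "what changed?" in one command:

```shell
# Sketch of the "version control is the ALM" tree: one folder each
# for code, tests, and documents ("claims" is a made-up project name).
mkdir -p claims/code claims/test claims/docs

# Requirements are plain text, one dated file per change,
# following the requirements-<date>-<changename> convention.
cat > claims/docs/requirements-2-2-07-eligibility.txt <<'EOF'
Eligibility check: members are eligible on the first of the month
after 30 days of employment.
EOF

cat > claims/docs/requirements-3-1-07-eligibility.txt <<'EOF'
Eligibility check: members are eligible on the first of the month
after 60 days of employment.
EOF

# Because the documents are plain text, "what changed between
# releases?" is a single diff. (diff exits nonzero when files
# differ, so we add || true to keep the script going.)
diff claims/docs/requirements-2-2-07-eligibility.txt \
     claims/docs/requirements-3-1-07-eligibility.txt || true
```

Try the same diff on two revisions of a binary Word document and you get "files differ" at best, which is exactly the breakdown described above.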
If you have a highly technical staff that doesn't mind dealing with plain text documents, "version control is the ALM" can be an extremely lightweight and cheap way to reap the benefits of ALM. It is unlikely that the tool will be able to capture the project idea when it is a glimmer in the eye of an executive, or provide connections from trouble tickets to code changes, or integrate with project scheduling tools, but even heavyweight ALM tools have struggled to provide this kind of insight.
The Wiki way
A third alternative for ALM is to use a wiki, or editable web page, that is version controlled with a different "page" for each release. With a wiki you can upload files, or link to branches in version control or anything else that can be reached by a URL, including network drive locations. Wikis allow anyone to create and revise information about a project in a loosely structured, informal way, without a lot of heavyweight signoffs and steps. You can even express stories, requirements, and acceptance criteria directly, as wiki pages.
Now, if the project's artifacts continue to be stored in different places, you will either have to store a "pointer" to the live information or do dual entry; but at least it will be fast. Better still, wikis can be cheap. Many vendors offer a hosted option with no hardware requirements and no software to install, and some have free trials or are free for up to 50 users. If you've got hardware to use and staff available to administer it, several wikis have free, open source implementations.
Which model of ALM is right?
I've presented three different ways to do ALM: cradle-to-grave via orchestration software, checking everything into version control, or using a wiki as a lightweight tool.
Which one is right?
Well, I guess it depends upon what problem you are trying to solve.
If you are working in a large organization that's grown through acquisition and has systems strewn all over the place and no discipline, a cradle-to-grave ALM solution might be easiest to implement.
In my opinion, however, this situation calls for building more discipline in the organization, rather than having a do-it-for-you toolset paper over the lack of it. A strong, central technical team looking to connect other components into a main codebase might set up version control with folders named /code, /docs, /tests and so on, and check every document in; most version control systems even come with pretty graphical front ends for non-technical users. And a team that just needs a single place to go for up-to-date information about the project might find a wiki the easiest place to start.
The challenge is to find a solution the whole team will really try, one that causes a minimal change in work habits and, ideally, allows team members to work the way they want. I'm not saying it will be easy or free; but a hardworking, honest team putting in the effort might just be able to move past "the roach motel" and into a place where there's less pain and more fun in the work.
I don't think ALM has to mean a seven-figure application suite purchase that requires an entire administrative team to run – and that gets somebody a promotion for "managing" it. Good ALM might just take a strategy, some team buy-in, and a little elbow grease. And if your team regularly experiences confusion and pain on projects, it might just be worth it.
About the author: Matt Heusser is a member of the technical staff at SocialText, which he joined in 2008 after 11 years of developing, testing and/or managing software projects. He teaches information systems courses at Calvin College and is the original lead organizer of the Great Lakes Software Excellence Conference, now in its fourth year. He writes about the dynamics of testing and development on his blog, Creative Chaos.