After years of tools and ideas for managing the application lifecycle, we as an industry have learned a thing or two.
This tip will provide some advice for ALM implementations, including what ALM can do for your business, what it can't do and some common pitfalls.
Core components of ALM
Application lifecycle management (ALM) needs to achieve specific purposes. It needs to (1) track all the potential work, (2) define a process to approve the work, (3) provide workflow tools to automate processes, and (4) produce artifacts to track what was deployed when, where and how.
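Those four purposes can be sketched as a minimal data model. This is an illustrative assumption, not any vendor's actual schema; the names (`WorkItem`, `Status`) are made up for the example:

```python
from dataclasses import dataclass, field
from enum import Enum


class Status(Enum):
    PROPOSED = "proposed"        # purpose 1: tracked as potential work
    APPROVED = "approved"        # purpose 2: passed the approval process
    IN_PROGRESS = "in progress"  # purpose 3: moving through the workflow
    DEPLOYED = "deployed"        # purpose 4: shipped, with a record of where


@dataclass
class WorkItem:
    title: str
    status: Status = Status.PROPOSED
    # Each entry records what was deployed when, where and how:
    # (version, environment, date)
    deployments: list = field(default_factory=list)

    def approve(self):
        """Purpose 2: only proposed work can be approved."""
        if self.status is not Status.PROPOSED:
            raise ValueError("only proposed work can be approved")
        self.status = Status.APPROVED

    def record_deployment(self, version, environment, date):
        """Purpose 4: keep an artifact of every deployment."""
        self.deployments.append((version, environment, date))
        self.status = Status.DEPLOYED
```

Even a toy model like this makes the point: the four purposes are really one record, viewed at different stages of its life.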
To accomplish those four tasks and actually create the software, you will need tools, including version control, IDEs, and programming and test support. To manage the work at a higher level, the company will likely also want tools for project management, portfolio management and possibly business analytics, which I'll refer to as a "digital dashboard."
Yes, a dashboard. With all this data connected, an executive should be able to open up the dashboard and see progress on a project -- including what's late. If the software knows what needs to be done, and when it is getting done, the software should be able to predict the schedule, or at least predict when Joe Developer delivering feature X late will cause Bill the tester to be working on two things at once, thus causing additional delay.
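That kind of prediction can be approximated with a simple overlap check: shift each task's window by how late its upstream work is, then flag anyone scheduled on two tasks at once. A hypothetical sketch -- real dashboards use far richer scheduling data:

```python
from datetime import date, timedelta


def detect_overload(assignments, slip_days):
    """Report any person whose task windows overlap after slips are applied.

    assignments: list of (person, task, start_date, end_date) tuples
    slip_days:   dict mapping task name -> days the task has slipped
    """
    shifted = [
        (person, task,
         start + timedelta(days=slip_days.get(task, 0)),
         end + timedelta(days=slip_days.get(task, 0)))
        for person, task, start, end in assignments
    ]
    conflicts = []
    for i, (p1, t1, s1, e1) in enumerate(shifted):
        for p2, t2, s2, e2 in shifted[i + 1:]:
            # Same person, and the two date windows intersect
            if p1 == p2 and s1 <= e2 and s2 <= e1:
                conflicts.append((p1, t1, t2))
    return conflicts
```

With Bill's two test windows a few days apart, a five-day slip on feature X is enough to double-book him -- exactly the scenario above.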
Now imagine what this means as an individual contributor. Finding out what you need to work on is similar to checking your email; you open an application, go through your to-do list, and check things off as you do them. In the ideal case, you don't even need to check the box that the work is done; the software is integrated, so it can recognize when something is done and check the item off your list automatically.
Sounds a little bit like a fairy tale, doesn't it?
What ALM can actually do
ALM tools can help your team identify all the work that is going on within a codebase, and possibly within all applications under development.
This is important, and easy to skip. If you've ever felt the death-stare from senior management for failing to deliver "on time," when you know it's because of some other emergency that came from nowhere -- or felt the pain of a feature that was just plumb "forgotten" (maybe the developer remembered to write it, but the tester didn't know to test it), then you know about this pain. Getting management to see all the work allows them to actually manage it; it's hard to articulate how valuable this actually is.
If the work has a simple, prescriptive workflow, ALM tools can manage the workflow. This allows the technical staff to work from a list, track work in progress, identify when their piece of work is done, send it to the next person, and so on. If the tool tracks what is deployed where, it may be able to provide an inventory of deployed systems, software and versions, and possibly even generate an audit trail that assists with both security and compliance with a defined process.
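A prescriptive workflow like this is essentially a small state machine: a fixed set of states and the moves the process allows between them. The states and transitions below are illustrative assumptions, not any particular tool's workflow:

```python
# Allowed transitions for a simple, prescriptive workflow.
# Note that test is allowed to bounce work back to development.
TRANSITIONS = {
    "to do": {"in development"},
    "in development": {"in test"},
    "in test": {"in development", "done"},
    "done": set(),
}


def advance(ticket, new_state):
    """Move a ticket to new_state, rejecting moves the process forbids."""
    if new_state not in TRANSITIONS[ticket["state"]]:
        raise ValueError(
            f"cannot move from {ticket['state']!r} to {new_state!r}")
    ticket["history"].append(ticket["state"])
    ticket["state"] = new_state
```

The history list is what makes the audit trail possible: every ticket carries a record of the path it actually took.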
The limits of ALM
Any recipe that says "cook until done" or "flavor to taste" is not using a defined process; instead, the process itself uses feedback to know when things are done. We call this an empirical process. Research is certainly empirical, and a great deal of development can be empirical.
When ALM manages workflow, it tends to do so in a defined way, and that can conflict with an empirical process. For example, classic Computer Aided Software Engineering (CASE) tools used to require that every code change trace back to requirements, but that does not allow a developer to improve the design of existing code through refactoring. In this way, ALM tools can become a bit of a straitjacket -- yet a big red "exception" button can allow staff to ignore the process entirely and throw off any measurements you have in place.
Now let's consider the humble requirements template. ALM can require the template to be filled out before work begins, and even connect work in progress to a requirements template. What ALM cannot do is ensure the template is filled out well. Likewise, ALM can require a code review step to occur, but it can't ensure that a human being invests real time thinking critically about the code. As sometimes happens with pilot sign-offs and other renewable licenses, it's far too easy for a "you check off my work, and I'll check off yours" culture to develop.
Finally, while ALM tools have the potential to isolate problems and breakdowns in the work process, that is generally just what it is: potential. Ask whether your ALM supports analysis for classic signs of breakdown, such as tickets bouncing back and forth without resolution or a buildup of work-in-progress inventory, or whether it has a way to measure how often required documents are actually referred to over the life of the software. In lean terms, all of these things are waste, but few ALM solutions currently make them transparent.
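As an example of the kind of analysis worth asking for: a ticket's state history is all you need to count dev-and-test "bounces." The state names here are hypothetical:

```python
def bounce_count(history):
    """Count dev <-> test round trips in a ticket's state history.

    Each return to "in development" immediately after "in test"
    counts as one bounce -- a classic sign of work breakdown.
    """
    bounces = 0
    for prev, curr in zip(history, history[1:]):
        if prev == "in test" and curr == "in development":
            bounces += 1
    return bounces
```

Run over every closed ticket, a metric this simple can surface the handful of tickets that ping-ponged for weeks -- exactly the waste few tools make visible.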
A few pitfalls to avoid
Above I wrote, "Getting management to see all the work allows them to actually manage it." I meant all of it; the list needs to be comprehensive. If the teams have work coming in from multiple directions, and only some of the systems are counted, then metrics used to review progress will be invalid.
For that matter, does the ALM system track vacations? All-staff meetings? Team meetings? Conference calls? A dashboard can point out productivity problems, but without knowing everything in the box, it won't be able to point out the root cause of the problem.
For a complete solution, ALM tools should be integrated. In most cases, this means either adopting the tool from your current vendor, or switching your entire toolset, from requirements to test to deploy, over to a different vendor. This "switch" can create unanticipated thrashing, training and adoption issues. Worse, in some cases, the staff may be far less productive in the integrated tool set than they were in the old, "stovepipe" systems.
My advice here is to be careful. Don't just ask questions about integration; actually get a demo copy of the software and try to run a real project with it. In some cases, you might want to keep the old stovepipe systems, and track things manually or build a bridge between the systems.
Finally, if a great deal of your team's work is empirical work, work that does not follow a defined process, your ALM tools may feel more than a bit constraining. My advice here is just like the integration issue: Don't settle for an ALM tool that can work, but instead experiment, learn, and adopt an ALM tool that will work well.
To the team adopting ALM
The real promise of ALM is to automatically track all work in progress -- but first you must try to track it manually to understand your processes. And if you've struggled to track the work manually, well, you can't automate a broken process, can you?
Look at that as good news. Your ALM adoption will point out all the mess on the floor that nobody likes to talk about. In that case, the adoption will likely be more expensive and time-consuming than you'd hope, but spend the time. An expensive ALM adoption is bad, but not as bad as a cheap one that doesn't meet your needs -- you will have to live with this software for years.
Once it's done, though, getting all the work in the box out in the open so you can manage it?
That's something to look forward to.