Beyond burndowns: Metrics for enterprise Agile

Executive management and stakeholders want to know the status of a project and this is traditionally done with metrics. But as our approach to software development changes, so do our measurements. In this tip, find out which metrics consultant Howard Deiner recommends for enterprise Agile organizations.

Moving your enterprise into an Agile mindset is complicated. Decision makers want, and genuinely need, numbers. Understanding what to track, and how to track it, as you move your business from a plan-driven mentality to an Agile, business-value-driven mentality can be challenging. Here are some ideas to get you started.

The difference between traditional and Agile

Traditional waterfall thinking can be summarized as “carefully plan each and every detail before you start your project, right down to each and every task’s what, when, and who.” Then, once you have the grand plan, execute on it, tracking and reporting on completion against plan. In other words, “Plan the work, then work the plan.” 

The Agile process takes exception to this mindset, reminding us right in the Agile Manifesto that we should favor “responding to change over following a plan.” That seems like sound advice for at least two reasons: 1) business requirements churn constantly as the business chases the best value for its development expense; and 2) Dwight Eisenhower, the man responsible for planning and executing the largest amphibious landing in history, would approve (he is quoted as saying “In war, plans are nothing; planning is everything.”)

What’s wrong with the waterfall way of tracking progress against plan?

The trouble with tracking progress against a very detailed plan is twofold.

1. You just can’t know all of the requirements and tasks up front, because the devil is in the details. Things change. People change their minds. Deal with it.

2. Traditional tracking against plan measures costs, not benefits. But what we really need to measure is value delivered and the avoidance of future costs. The business needs features completed that make the best sense right now, not just a plan well executed.

Then, what should I be tracking?

You have to be careful about what you measure, because “you get what you measure.” Here are a few items that are fairly easy to measure but have an extremely anti-Agile smell to them:

  • KLOC (thousands of lines of code) and KLOC/developer. The thought is that the most productive developers get rewarded. But we end up rewarding the verbose coder who is gaming the system!
  • Tasks completed. Here, we reward those who game the system by making up endless tiny nubs of tasks and then doing them. But did we produce anything of business value from all that work?
  • Time worked on task. If we want someone to sit quietly at our side while we work on a project, we should probably employ golden retrievers; they are loyal and don’t complain much, either. But we need engagement, not just attendance, and tracking time worked rewards attendance.

Instead, we should look for Agile metrics that:

  • Affirm and reinforce Lean and Agile principles. “Working software is the primary measure of progress.”
  • Measure outcomes, not output. Would you rather be 90% done on everything, but have nothing to show in working software, or 70% done with working software delivered?
  • Follow trends, not numbers. When you learn to fly an airplane, your first tendency is to watch the dials constantly. You end up never being on course and always overcorrecting. When my instructor covered up the panel and made me look outside, suddenly straight and level flight was easy. We have to avoid overcorrecting on our software projects, too.
  • Belong to a small set of metrics and diagnostics. We need to concentrate on the production of software artifacts that really matter (the code!) and have as little overhead as possible to help us stay on course.
  • Are easy to collect.  A good friend once said to me “I can report status or I can change status, but I can’t do both at once.”  We need to spend the vast majority of our time producing software, not reporting on its production.
  • Reveal, rather than conceal, their context and significant variables. Agile is all about honesty and transparency. If we collect data to satisfy hidden agendas, we are not being true to these values.
  • Provide fuel for meaningful conversation. Metrics are merely data. We need to find trends and patterns in the data to turn it into information about what we are observing. We then find wisdom when we can relate that back to causation. When we apply that wisdom and improve our software development process, we have completed the desired Agile “inspect and adapt” feedback loop. But if we collect data just for the sake of collecting data, we are diverting time and attention away from the task of creating software.

What are some Agile tracking categories?

There are so many “things Agile” that can be tracked that it’s hard to know where to start. But one area that everyone knows, and that is good (though not the whole story), is:

Velocity and Burn-down

  • These are predictability measurements and should not be used for measuring productivity, lest we fuel a gaming effect. Velocity is the primary measure that we use to derive duration of a set of features in an Agile project.
  • Release burn-down charts that show both points completed and points added, iteration by iteration, are far more meaningful than the simple-minded ones that show completion alone.
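As a rough illustration of both points, here is a minimal sketch in Python of deriving a duration forecast from velocity and tracking a burn-down that accounts for scope added as well as points completed. All of the numbers are invented for illustration; a real team would pull them from its tracking tool.

```python
import math

# Invented example data: story points completed and scope added, sprint by sprint.
completed = [18, 22, 20, 21]
added     = [0, 5, 3, 8]

# Velocity: average points completed per sprint.
velocity = sum(completed) / len(completed)

# Burn-down trajectory: remaining scope shrinks by work done, grows by scope added.
initial_scope = 200
remaining = initial_scope
trajectory = []
for done, new in zip(completed, added):
    remaining += new - done
    trajectory.append(remaining)

# Naive duration forecast: remaining points divided by velocity, rounded up.
sprints_left = math.ceil(trajectory[-1] / velocity)

print(velocity)       # 20.25
print(trajectory)     # [182, 165, 148, 135]
print(sprints_left)   # 7
```

Notice that a simple-minded burn-down using only `completed` would look steeper than reality; folding in `added` is what makes the chart honest about scope growth.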

Other metrics to consider

Running Tested Features (RTF) - Ron Jeffries

  • The desired software is broken down into named features (requirements, stories), which are part of what it means to deliver the desired system.
  • For each named feature, there are one or more automated acceptance tests which, when they work, will show that the feature in question is implemented.
  • The RTF metric shows, at every moment in the project, how many features are passing all their acceptance tests.

Business Value Burn-up - tracked just like story point burn-up, but based on Product Owner assigned business value as delivered
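A business-value burn-up can be sketched the same way as a story-point burn-up, accumulating the Product Owner's assigned value only as stories are actually delivered. The per-sprint values here are invented:

```python
# Invented data: PO-assigned business value delivered in each sprint.
delivered_value_per_sprint = [30, 45, 25, 60]

# Burn-up: cumulative value delivered, sprint by sprint.
burn_up = []
total = 0
for value in delivered_value_per_sprint:
    total += value
    burn_up.append(total)

print(burn_up)  # [30, 75, 100, 160]
```

The shape of this curve, rather than any single point on it, is what fuels the conversation: a flattening burn-up suggests the team is completing work that the business values less.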

Automated unit and acceptance test results - a quality measure

Defect Count - a quality measurement

  • Post Sprint Defect Arrival (leading indicator)
  • Post Release Defect Arrival (lagging indicator)
  • Defect Resolution (root causes fixed)

Technical Debt - a quality measurement

  • This is “undone” work.
  • Usually occurs when the team is driven too hard to produce sprint point output.
  • Technical debt stories are added to the product backlog and prioritized by the Product Owner just like any other stories (though they are probably coupled with another story that is about to add more debt!)
  • You can track where technical debt stands and how it is trending (a leading indicator for quality.)

Work in process - a lean productivity metric

  • Tracks the number of items the team has in process at any time.
  • You want this to trend toward 1. If it gets too high, a Scrum Master may want to foster better collaboration.

Story Cycle Time - a lean productivity metric

  • Tracks how long a story takes to go from in work to done.
  • Helps keep the team focused on surrounding a story and driving it to done.
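Cycle time is just the elapsed time between those two board transitions. Here is a minimal sketch; the story identifiers and dates are invented, and in practice a board tool would supply the timestamps.

```python
from datetime import date

# Invented data: (story id, date work started, date story reached done).
stories = [
    ("story-101", date(2024, 3, 1), date(2024, 3, 4)),
    ("story-102", date(2024, 3, 2), date(2024, 3, 9)),
    ("story-103", date(2024, 3, 5), date(2024, 3, 7)),
]

# Cycle time per story, in days, and the team's average.
cycle_times = [(done - started).days for _, started, done in stories]
average = sum(cycle_times) / len(cycle_times)

print(cycle_times)  # [3, 7, 2]
print(average)      # 4.0
```

As with the other metrics, the trend matters more than the number: a rising average cycle time is an early signal that stories are too large or the team is too fragmented.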

Code metrics – to aid in making sure that you don’t accumulate technical debt over time

  • Cyclomatic complexity
  • Coding standards violations
  • Code duplication
  • Code coverage
  • Dead code
  • Code dependencies incoming/outgoing (coupling)
  • Abstractness (abstract and interface classes versus concrete classes)
  • WTFs/minute (my personal favorite)
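Most of these are collected by static analysis tools, but the first one is easy to approximate by hand. The sketch below counts decision points in a Python function (each branch adds one independent path) plus one, which is a rough reading of McCabe cyclomatic complexity; the sample function is invented for illustration.

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Roughly approximate McCabe complexity: one plus the number of
    decision points (branches, loops, boolean operators, handlers)."""
    tree = ast.parse(source)
    decision_nodes = (ast.If, ast.For, ast.While,
                      ast.And, ast.Or, ast.IfExp, ast.ExceptHandler)
    return 1 + sum(isinstance(node, decision_nodes) for node in ast.walk(tree))

sample = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    for _ in range(n):
        pass
    return "positive"
"""

# Two if branches (the elif parses as a nested if) plus one loop: 3 + 1 = 4.
print(cyclomatic_complexity(sample))  # 4
```

Watching this number trend upward on a module over several sprints is exactly the kind of leading indicator of accumulating technical debt the list above is after.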

How should I proceed?

Here are some takeaways:

  • Be smart about what you measure
  • Don’t measure just because you can
  • Make use of the metrics you gather or don’t bother collecting them
  • Look for process metrics that reinforce Lean and Agile
  • Look for code metrics that are useful for keeping technical debt down

If we get what we measure, we will become an Agile and Lean software making machine! 

See a comprehensive resource on measuring quality at Quality metrics: A guide to measuring software quality.
