
A focus on metrics says shift software quality focus left

Yesterday’s keynote speaker here at QUEST 2014 in Baltimore was Michael Mah. Mah is an interesting guy. He played a part in designing nuclear submarines for the U.S. government in the 1980s. More recently, he supported the Sea Shepherds in their efforts to curb whale and dolphin hunting in Asia and took up flying very small Cessna aircraft. He happens to be a brilliant software engineer, and he seems to be on a mission to fix the way we develop software. The two big sound bites I took away from his keynote are, “Without metrics you’re just someone with another opinion,” and “Shift left.” The focus on metrics is pretty straightforward. It’s also not surprising, given that Mah’s title at the Cutter Consortium is benchmark practice director. The need for software quality efforts to shift left is far more interesting, but if you’ll excuse me, I’d like to cover metrics first.
I think some Agile enthusiasts (definitely not all, but some) want to play Agile by feel. If they know things are working, why take the time to measure and prove it? If they know things aren’t working, why bother to measure it? They just stop doing it and look for something that will work. Either way, the metrics just get in the way.
But that approach has two problems. First, sometimes it’s not obvious whether a project is really working. A project manager might hope it’s going to work, or worry that it’s not going the way it should, and not know which is true. Second, people are fallible. Sometimes we’re wrong about things, even when we’re sure we’re right.
Mah’s stance is that project decisions should be made on tangible evidence, and project metrics give us the evidence we need. And, he said, we’re already gathering those metrics to begin with. The right metrics, according to Mah, are velocity, end-to-end project schedules, team size and bug counts. These are what his software management company, QSM, uses to produce its industry benchmarks. Measuring these four dimensions of a software development effort lets you make apples-to-apples comparisons about what works and what doesn’t.
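To make that concrete, here is a minimal sketch of what comparing projects on those four dimensions might look like. The project names and every number below are invented for illustration; normalizing bug counts by team size and duration is one plausible way to make the comparison apples to apples, not something Mah or QSM prescribed.

```python
# Hypothetical comparison of two projects on Mah's four dimensions:
# velocity, end-to-end schedule, team size and bug counts.
# All data here is invented for illustration.
projects = [
    {"name": "A", "velocity": 30, "schedule_months": 9,  "team_size": 6,  "bugs": 120},
    {"name": "B", "velocity": 28, "schedule_months": 14, "team_size": 12, "bugs": 310},
]

for p in projects:
    # Dividing raw bug counts by person-months lets you compare
    # differently sized efforts on an equal footing.
    p["bugs_per_person_month"] = p["bugs"] / (p["team_size"] * p["schedule_months"])
    print(p["name"], round(p["bugs_per_person_month"], 2))
```

Even a toy normalization like this shows why the raw bug count alone (310 vs. 120) can mislead: the bigger project actually produced fewer defects per person-month.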
Now that we have the metrics part out of the way, what’s all this business about shift left? I hear the phrase tossed around a bit, but I’ve never seen its meaning spelled out plainly. My intuitive understanding is that it comes from charts that tend to use the x-axis to represent time: the further to the left you go, the earlier in the process an event occurs. Mah was certainly advocating that software development teams focus on software quality and testing earlier in the process.
Mah mentioned current trends like the increased focus on test-driven development (TDD). He said that TDD wasn’t really anything new. Although he didn’t have that handy acronym, TDD is just how his team worked on the navigation systems for the Trident project. They started with what the crew of the submarine would have to do to pilot the vessel. Then they figured out what tests the software would have to pass in order to be successful. Then they started coding.
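The workflow Mah described, deciding what the software must do, writing the tests it has to pass, and only then writing the code, can be sketched in miniature. The navigation function and its tests below are purely hypothetical illustrations of the TDD rhythm, not anything from the Trident project:

```python
import unittest

# Step 1: write the tests first. They encode what "success" means
# before any production code exists. (Function name and behavior
# are invented for this example.)
class TestHeading(unittest.TestCase):
    def test_turn_wraps_around_the_compass(self):
        # Turning 90 degrees starboard from heading 300 should give 30.
        self.assertEqual(heading_after_turn(300, 90), 30)

    def test_port_turns_use_negative_degrees(self):
        self.assertEqual(heading_after_turn(10, -45), 325)

# Step 2: only then write the code that makes the tests pass.
def heading_after_turn(current, delta):
    """Return the new compass heading (0-359) after turning delta degrees."""
    return (current + delta) % 360

if __name__ == "__main__":
    unittest.main()
```

The point of the ordering is the one Mah made: the tests exist before the code, so a major bug never gets built in only to be discovered at the end of the cycle.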
He said they knew, based on the tight deadlines and military strictures around the project, that they would not have the time or resources to fix major bugs at the end of the development cycle. They had to ensure no major bugs got built in to begin with. By shifting that software quality focus left and using techniques similar to TDD and pair programming, the Trident software engineering team ensured that they built the right software the right way the first time. Those concepts are tried and true. The more prepared developers are to build it right the first time, the less work there is for software testers in the end.
Here’s looking forward to another insightful and inspiring day at QUEST. There’s much more to come.

Join the conversation

1 comment


Well said. Totally agree.
We pulled metrics on the number of defects found in ST and E2E testing per test case for a range of projects over the last year or so. We found a pattern: projects with limited or no unit testing and system testing ended up with more E2E defects.