Measuring software has always been complex, but in their new book, The Economics of Software Quality, Capers Jones and Olivier Bonsignour help explain the differences between how we measured software quality in the past and how we measure it today. In part one, we learned about structural quality. In part two, we explored more of the 121 quality attributes listed and their rankings. In this third and final part of the series, we look at changes in the software industry, such as geographically dispersed teams and defect tracking in Agile environments, and how these changes have affected our software quality measurements.
SSQ: It doesn’t look like there are line items that are based on the physical location of the members of the development team. Do you think there would be differences in quality if a team is co-located vs. geographically dispersed, for example? What about teams in which the testing is outsourced?
Capers Jones/Olivier Bonsignour: When Capers Jones first started collecting data in the 1970s, each additional location on a project reduced productivity by about 5%. For one large system in that era, which involved a dozen locations in Europe and the United States, the cost of airfare and travel actually exceeded the cost of writing the source code.
Today in 2011, with the internet, intranets, wiki groups, Skype and Google phone calls, webinars and other convenient ways of sharing data and communicating, there is no real difference between distributed teams and co-located teams, assuming state-of-the-art communication channels are deployed.
It would be technically possible to have three teams located eight hours apart, with each team handing off the day's work to the next at the end of its shift. This would provide around-the-clock development with zero overtime, because every team would work only during its own first shift.
SSQ: Though “Accurate defect measurements” and “Use of formal defect tracking” were both ranked as 10.00, there are some successful Agile teams that do not feel defect tracking is necessary, at least not pre-production. They argue that customer satisfaction is their quality metric. Why is formal defect tracking listed as so valuable to quality?
Jones/Bonsignour: For more than 40 years, customer satisfaction has correlated strongly with the volume of defects present in applications when they are released to customers. Released defect levels are a product of two factors: defect potentials (the total number of defects likely to be introduced) and defect removal efficiency (the percentage of those defects found and fixed before release).
The Agile community has not yet done a good job of measuring defect potentials, defect removal efficiency, delivered defects or customer satisfaction. Agile groups will not achieve good customer satisfaction if defect removal efficiency is below 85%, and it will stay that low unless it is measured.
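The relationship between these factors can be sketched as simple arithmetic. The figures below are illustrative only, not taken from the book:

```python
def delivered_defects(defect_potential: float, removal_efficiency: float) -> float:
    """Delivered defects = defect potential x (1 - removal efficiency).

    defect_potential:   total defects expected to be introduced
    removal_efficiency: fraction of defects found and fixed before release
    """
    return defect_potential * (1.0 - removal_efficiency)

# Hypothetical application with a potential of 1,000 defects:
print(delivered_defects(1000, 0.85))  # 85% efficiency: about 150 defects shipped
print(delivered_defects(1000, 0.99))  # 99% efficiency: about 10 defects shipped
```

The gap between 85% and 99% removal efficiency is a fifteen-fold difference in the number of defects customers actually encounter, which is why the authors treat measurement of both factors as essential.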
SSQ: Over the years, both the way we develop software and our quality metrics have changed. One example is that “Lines of code quality measures” is ranked at -5.00. We also see the trend toward Agile methodologies and away from the waterfall approach to software development. What do you see as the biggest changes in how we measure software quality today as opposed to how we’ve measured it historically?
Jones/Bonsignour: The general level of quality understanding in the software industry is roughly equivalent to the level of medical understanding before sterile surgical procedures were introduced. Achieving good quality requires a combination of defect measurement, defect prevention, pre-test defect removal such as static analysis, testing with scientifically designed test cases, and automated tools.
But many companies, even in 2011, have no knowledge of defect prevention, bypass pre-test removal activities, design test cases without proper methods, and depend on untrained developers as testers rather than certified test personnel. The inability to control structural quality issues also deprives management of visibility into the root causes of software failure and cost. This explains why the average percentage of bugs removed prior to release is only about 85% when it should be 99%.
Traditionally, metrics of structural software quality counted the structural elements of a component, such as the number of decisions in its control flow. However, these metrics only suggested the possibility of a problem. Today we base structural measures of software quality on detecting patterns in the code that represent known violations of good architectural or coding practice. These newer measures gauge quality directly rather than through correlation.
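A minimal sketch of what such pattern-based checking looks like in principle (this is a hypothetical illustration, not the authors' tooling; the rule shown flags bare `except:` clauses, a widely recognized violation of good Python error-handling practice):

```python
import re

# Hypothetical rule: a bare "except:" silently swallows every exception,
# a known violation of good coding practice.
BARE_EXCEPT = re.compile(r"^\s*except\s*:\s*$")

def find_violations(source: str) -> list[int]:
    """Return the 1-based line numbers where the pattern-based rule fires."""
    return [lineno
            for lineno, line in enumerate(source.splitlines(), start=1)
            if BARE_EXCEPT.search(line)]

sample = """try:
    risky()
except:
    pass
"""
print(find_violations(sample))  # -> [3]
```

Real static analyzers work on parsed syntax trees rather than regular expressions, but the principle is the same: each match is a direct instance of a known bad practice, not merely a statistic that correlates with defects.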
SSQ: What would be the biggest takeaway you would like readers to get from your book?
Jones/Bonsignour: If you approach software quality using state-of-the-art methods, you will achieve a synergistic combination of high defect removal efficiency, happier customers, better team morale, shorter development schedules, lower development and maintenance costs, and a total cost of ownership (TCO) less than 50% of that of comparable projects that botch quality.
You can’t manage what you don’t measure. By measuring the software product, the output of the development process, software executives can manage their organizations and the assets on which their businesses depend.
This Q&A is based on The Economics of Software Quality (Jones/Bonsignour), Addison-Wesley Professional, which can be purchased at http://www.informit.com/title/0132582201
For a comprehensive resource on measuring quality, see Quality metrics: A guide to measuring software quality.