
An expert suggests how to measure software quality

This expert says the trick to measuring software quality is focusing on real business requirements and established engineering standards.

How to measure software quality is one of the most heated debates in the world of software development. Learned and novice software quality specialists alike debate which factors are most important to software quality. Many say that software quality simply can't be measured in any meaningful and accurate way. I posit that software quality can be measured and that arguments over how to do so can be settled with a solid definition of what we mean when we say "software quality."

Software developers often consider software quality to mean the software fits system specs, runs efficiently, doesn't blow up, follows standards, uses current technology and techniques, and can be modified easily. Software developers frequently feel users and managers are not very attentive to these things and thus conclude that users do not care about quality.

Users and managers often consider software quality to mean the software does what needs to be done correctly, performs adequately, runs reliably and consistently, is easy to use, is supported quickly and correctly, and is delivered on time and within budget. Users and managers frequently feel software developers are not attentive to these things and thus conclude that developers do not care about quality.

In fact, both views are part of software quality; and both sides need to know it. However, the user and manager view is more important, because if that view is not satisfied, the software developer view is irrelevant. With this in mind, let's look at some popular, but flawed definitions of software quality.

  • Software quality is customer satisfaction. Indeed, customer satisfaction should be a result of delivering a quality product, but satisfaction can be influenced by many things and is not the same as quality. Moreover, customers are routinely satisfied with poor-quality products and dissatisfied with high-quality ones.
  • Software quality is conformance to requirements. This may be the most popular definition in the software quality community and traces to Philip Crosby. It seems so obvious and straightforward until you realize it leaves out the quality of the requirements. Conformance to wrong requirements is not quality.
  • Software quality is a function of the percentage of the product that is free from defects. This definition is not a direct quote from W. Edwards Deming, but it would seem to fit with his work. While the presence of defects affects one's assessment of a product's quality, we can only count the defects we detect. An apparent lack of defects may therefore say more about the quality of the specification or of the measurement than about the quality of the product itself.
  • Six Sigma defines quality as minimal variation within specification. This definition again neglects to consider the quality of the specifications and how those are measured.

Each of the above definitions offers something useful. However, each also has its own shortcomings. Creating a more workable definition of quality requires us to consider a few additional factors.

First, there are two types of requirements. What I call real business requirements (the "whats") are deliverable capabilities that, when met, solve problems, create opportunities or meet challenges. They are conceptual. They exist within the business environment, so they must be discovered. They take the viewpoint of the customer or stakeholder. There are usually many ways to fulfill the real business requirements.

There are also product or system requirements: features of a human-defined system that is presumably one of the possible ways to satisfy the real business requirements. Software developers write code to meet system requirements. Real business requirements do not decompose into system requirements; rather, system requirements are created in response to real business requirements. Real business requirements must still be driven down to lower levels of detail, so that system requirements can be mapped back to them and shown to provide value.
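As a rough sketch of that mapping (the requirement IDs and texts here are invented for illustration, not taken from the article), one could represent traceability in a few lines of Python and flag any real business requirement that no system requirement responds to:

    # Hypothetical traceability sketch; IDs, texts and structure are invented.
    business_requirements = {
        "BR-1": "Reduce invoice processing time",
        "BR-2": "Detect duplicate payments before disbursement",
    }

    # Each system requirement declares which business requirement it responds to.
    system_requirements = [
        {"id": "SR-10", "responds_to": "BR-1", "text": "Auto-route invoices by amount"},
        {"id": "SR-11", "responds_to": "BR-1", "text": "OCR scanned invoices"},
    ]

    covered = {sr["responds_to"] for sr in system_requirements}
    for br_id, text in business_requirements.items():
        status = "covered" if br_id in covered else "NO SYSTEM REQUIREMENT YET"
        print(f"{br_id} ({text}): {status}")

Here BR-2 prints as uncovered: a real business requirement with no system requirement mapped to it represents value the system will not deliver.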

Real business requirements need to address three quality dimensions:

Quality of design identifies what is needed: the required functions, capabilities and performance levels, appropriately and meaningfully understood, that address the needs of all stakeholders. The design must suitably meet the requirements, and trade-offs must be based on accurate and adequate costs, benefits and schedules.

Quality of conformance addresses how a product is produced. Products must conform to design, apply appropriate standards or conventions, and be delivered on time and within budget. Workers must use expected skill and care applying defined methods and tools. Management must use appropriate practices.

Quality of performance deals with how the product performs in use. The product must be available as needed, work reliably and accurately in its intended manner, handle the workload adequately, and be supported and maintained responsively.

In addition, requirements must address exterior functionality, interior engineering, and future quality factors (which are often mistakenly called "non-functional requirements" or "quality requirements"). Quality factors describe how well business capabilities will perform and often are captured in engineering standards.

Putting those underlying concepts together, we can answer the question of how to measure software quality. Software quality can be defined as the extent to which software meets the relevant, weighted, stated and implied, exterior, interior and future real business requirements of all affected internal and external stakeholders, consistent with standards of design, workmanship and performance. In short, software quality is how well software meets relevant real business requirements, consistent with standards.
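To make that definition concrete as arithmetic, here is a minimal sketch; the requirements, weights and satisfaction scores below are invented assumptions, not numbers from the article. Each relevant requirement gets a relative weight and a measured degree of satisfaction, and quality is the weighted average:

    # Hypothetical scoring sketch; requirements, weights and scores are invented.
    requirements = [
        # (requirement, relative weight, measured satisfaction 0.0 - 1.0)
        ("Correct month-end totals",      5, 0.95),  # exterior functionality
        ("Sub-second query response",     3, 0.80),  # quality factor
        ("Maintainable module structure", 2, 0.60),  # interior engineering
    ]

    total_weight = sum(w for _, w, _ in requirements)
    quality = sum(w * s for _, w, s in requirements) / total_weight
    print(f"Weighted quality score: {quality:.2f}")  # about 0.84 for these toy numbers

The weights, and the way each satisfaction score is measured, are exactly where the quality of the requirements themselves comes back into play.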

Quality is absolute. That is, software quality is an engineering concept that can be defined objectively and thus measured by relevant parameters. However, the amount of quality one receives is governed by available resources, priorities and other constraints. Value is the perceived benefit of the quality received relative to the costs of producing and receiving it.

Next Steps

Check out Gerie Owen's five rules for software quality metrics

And learn more about objective software quality metrics

Interested in measuring the value of data?

This was last published in April 2015


Join the conversation

7 comments


"This expert says the trick to measuring software quality is focusing on real business requirements and established engineering standards."

The article talks a lot about different ways to look at quality, but somehow I didn't see a way to measure it. How does one measure conformance to requirements, or conformance to design?
@Veretax, thank you for your question. First, one small clarification: I use the term "engineering standards" generically to mean the way classical engineering defines how to do something well. That is different from the handful of formal standards that you may have thought I was referring to.
Engineering standards define quality in terms of tolerances, or degrees of acceptable variance from requirements. Such tolerances are not generic or one size fits all. Rather, they must be specific to the particulars of the requirements. Degree of acceptability is a function of the extent to which the variance affects the satisfaction of the requirement. Different situations can tolerate different variances. For instance, one could measure objectively how tight a window is. We routinely live with house windows that leak a bit, whereas the same even minor leakage would not be tolerable on the space station.
In software, we do all kinds of testing to measure conformance, usually to design but sometimes also to requirements. Whereas engineering standards define in advance the criticality of various degrees of failure to satisfy tolerances, typical testing evaluates defect criticality after the fact, on a case-by-case basis. Both disciplines, but especially software, are weak at detecting requirements and design defects, especially omissions. In my experience, classical engineering is far more likely to relate such conformance measures to quality in terms of providing needed value by how well what I call REAL business requirements are met. I fear the software world too often views detected defects without that context, equating quality with the number of defects and generic criticality ratings without actually relating the conformance measures to how well REAL business requirements are met.
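To make the tolerance idea concrete, here is a minimal sketch in Python, echoing the window example above; the tolerance bands and measurements are invented for illustration:

    # Hypothetical tolerance check; targets, tolerances and readings are invented.
    def within_tolerance(measured, target, tolerance):
        """Conforms if the measured value varies from the target by no more
        than the tolerance allowed for this particular requirement."""
        return abs(measured - target) <= tolerance

    # Same target (no leakage), very different acceptable variances.
    checks = [
        ("house window leakage",   0.0, 0.300, 0.12),  # name, target, tolerance, measured
        ("station window leakage", 0.0, 0.001, 0.12),
    ]
    for name, target, tol, measured in checks:
        verdict = "conforms" if within_tolerance(measured, target, tol) else "DEFECT"
        print(f"{name}: measured {measured} -> {verdict}")

The same measured variance conforms in one context and is a defect in the other; the tolerance, not the raw measurement, carries the requirement's context.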
I think this article might have benefited from a concrete example. There's a lot in the three-part definition of quality that feels vague, perhaps due to the nature of software being such a diverse field. Seeing this applied to provide a specific measurement would have increased my understanding.
Thank you, I completely agree with you, in particular your first point: 
"Software quality is customer satisfaction". 

In my organization, we have so far managed to avoid reporting quality metrics to upper IT management. I'm glad, because in my opinion, that can only go poorly. They're managers, and they want to manage things. They want nice numbers so that they can make graphs and reports. They want to compare teams. 

We have tried to explain that the best measurement of quality is how happy our customers are. I work in QA, and consider it my responsibility to advocate for the user experience. Luckily, we have great developers who follow good development practices, too. We do need to focus a bit more on nailing down system requirements, though. That's one area where our quality could use some improvement.
Whenever an organization starts looking to "measure quality," I start to get nervous. Not because the endeavor in and of itself is a bad one, but because the implementation of those measurements always (and yes, I do mean always) leaves much to be desired. Having said that, yes, there are real aspects of quality that can be measured; in general, though, how positive customers are, and whether they feel their needs are being met in a timely manner, tell you more than any number of measured metrics do.
Good explanation. Software quality is foremost a performance measure. A bug-free software product that meets at least 97% of the business requirements is considered a standard software product. Scalability does matter, but in the later stages.

There is a point that says software must follow "appropriate standards," which is somewhat conflicting. Standards are globally accepted practices, but depending only on them can confuse a product being developed. One must treat user requirements as the finish line and walk straight toward it.
Thank you for your comments.
