How to measure software quality is one of the most heated debates in the world of software development. Learned and novice software quality specialists alike debate which factors are most important to software quality. Many say that software quality simply can't be measured in any meaningful and accurate way. I posit that software quality can be measured and that arguments over how to do so can be settled with a solid definition of what we mean when we say "software quality."
Software developers often consider software quality to mean that the software fits system specs, runs efficiently, doesn't blow up, follows standards, uses current technology and techniques, and can be modified easily. Developers frequently feel users and managers are not very attentive to these things and thus conclude that users and managers do not care about quality.
Users and managers often consider software quality to mean the software does what needs to be done correctly, performs adequately, runs reliably and consistently, is easy to use, is supported quickly and correctly, and is delivered on time and within budget. Users and managers frequently feel software developers are not attentive to these things and thus conclude that developers do not care about quality.
In fact, both views are part of software quality, and both sides need to know it. However, the user and manager view is more important, because if that view is not satisfied, the software developer view is irrelevant. With this in mind, let's look at some popular but flawed definitions of software quality.
- Software quality is customer satisfaction. Indeed, customer satisfaction should be a result of delivering a quality product, but satisfaction can be influenced by many things and is not the same as quality. Moreover, customers are routinely satisfied with poor-quality products and dissatisfied with high-quality ones.
- Software quality is conformance to requirements. This may be the most popular definition in the software quality community and traces to Philip Crosby. It seems so obvious and straightforward until you realize it leaves out the quality of the requirements. Conformance to wrong requirements is not quality.
- Software quality is a function of the percentage of the product that is free from defects. This definition is not necessarily quoted from W. Edwards Deming, but it would seem to fit with his work. While the presence of defects affects one's assessment of a product's quality, we can only count the defects we detect. An apparent lack of defects may therefore say more about the quality of the specification or of the measurement than about the quality of the product itself.
- Six Sigma defines quality as minimal variation within specification. This definition again neglects the quality of the specifications themselves and how conformance to them is measured.
Each of the above definitions offers something useful. However, each also has its own shortcomings. Creating a more workable definition of quality requires us to consider a few additional factors.
First, there are two types of requirements. What I call real business requirements (or the whats) are deliverable capabilities that, when met, solve problems, create opportunities or meet challenges. They are conceptual. They exist within the business environment, so they must be discovered. They take the viewpoint of the customer or stakeholder. There are usually many ways to fulfill the real business requirements.
There are also product or system requirements: features of a human-defined system that presumably is one of the possible ways to satisfy the real business requirements. Software developers write code to meet system requirements. Real business requirements do not decompose into system requirements; rather, system requirements are defined in response to real business requirements. Driving real business requirements down to lower levels of detail is needed so that system requirements can be mapped back to the real business requirements they serve, and thus provide value.
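That mapping can be made concrete with a simple traceability check. The following sketch is illustrative only; all requirement names and identifiers are hypothetical. It records which real business requirement each system requirement responds to, then flags any business requirement left with no response:

```python
# Hypothetical traceability sketch: every system requirement records which
# real business requirement it responds to; business requirements with no
# corresponding system requirement are flagged as coverage gaps.

business_reqs = {
    "BR-1": "Reduce invoice-processing turnaround",
    "BR-2": "Let customers check order status themselves",
}

system_reqs = {
    "SR-10": {"text": "Batch-import invoices nightly", "responds_to": "BR-1"},
    "SR-11": {"text": "Auto-match invoices to purchase orders", "responds_to": "BR-1"},
}

# Which business requirements have at least one system requirement responding?
covered = {sr["responds_to"] for sr in system_reqs.values()}
uncovered = sorted(set(business_reqs) - covered)
print(uncovered)  # BR-2 has no system requirement responding to it
```

Real traceability tools are far richer, but even a check this small makes the point: the value of a system requirement lies in the business requirement it answers, so an unanswered business requirement is a visible quality gap.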
Real business requirements need to address three quality dimensions:
Quality of design identifies what is needed: required functions, capabilities and performance levels, appropriately and meaningfully understood, that address the needs of all stakeholders. The design must suitably meet those requirements, and trade-offs must be based on accurate and adequate costs, benefits and schedules.
Quality of conformance addresses how a product is produced. Products must conform to design, apply appropriate standards or conventions, and be delivered on time and within budget. Workers must use expected skill and care applying defined methods and tools. Management must use appropriate practices.
Quality of performance deals with how the product is delivered. The product must be available as needed for use, work reliably and accurately in its intended manner, handle workload adequately, and be supported and maintained responsively.
In addition, requirements must address exterior functionality, interior engineering, and future quality factors (which are often mistakenly called "non-functional requirements" or "quality requirements"). Quality factors describe how well business capabilities will perform and often are captured in engineering standards.
Putting those underlying concepts together, we can answer the question of how to measure software quality. Software quality can be defined as the extent to which software meets the relevant, weighted (stated and implied; exterior, interior and future) real business requirements of all affected internal and external stakeholders, consistent with standards of design, workmanship and performance. Or, in short: software quality is how well software meets relevant real business requirements, consistent with standards.
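The "extent to which" and "weighted" parts of that definition are what make it measurable. As a minimal sketch, assuming someone has already assigned each real business requirement a relative weight and scored how fully it is met (both invented numbers here), quality reduces to a weighted average:

```python
# Hypothetical worked example of the definition above: quality as the
# weighted extent to which real business requirements are met.
# All requirements, weights and scores below are invented for illustration.

requirements = [
    # (requirement, relative weight, extent met on a 0.0-1.0 scale)
    ("Process refunds correctly",     5, 1.0),  # exterior
    ("Respond within 2 seconds",      3, 0.8),  # exterior
    ("Code follows team conventions", 2, 0.9),  # interior
    ("New tax rules addable cheaply", 4, 0.5),  # future
]

total_weight = sum(w for _, w, _ in requirements)
quality = sum(w * met for _, w, met in requirements) / total_weight
print(round(quality, 2))  # 0.8
```

The hard work, of course, is discovering the right requirements and justifying the weights and scores; the arithmetic itself is trivial once those are in hand.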
Quality is absolute. That is, software quality is an engineering concept that can be defined objectively and thus measured by relevant parameters. However, the amount of quality one receives is governed by available resources, priorities and other constraints. Value is the perceived benefit of the quality received relative to the costs of producing and receiving it.