Let me suggest there are at least two possible ways of interpreting this question, and I'll briefly respond to each.
What are some possible metrics requirements that would lead to continued improved reliability in the finished software?
A product/system/service/software is reliable when it repeatedly performs in the same manner, and that reliability must be coupled with the performance being suitable. To assure and improve continued suitable performance, the performance itself and the factors that reasonably influence it need to be measured regularly.
Suitable performance includes things typically thought of as "performance," such as handling the required range of load capacities and achieving sufficient response and/or throughput speeds under each given load (typically a set of ranges or quantified definitions of high, medium, and low) and operational environment. For instance, a web page of given size and complexity might have to download fully to a user's PC in less than two seconds over a high-speed connection and in less than ten seconds over a 56K dial-up connection. Such measures usually are defined as averages over multiple usages and/or durations.
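A requirement like the one above translates directly into a measurable pass/fail check. The following is a minimal sketch, assuming invented threshold values and connection-type names matching the example (two seconds for broadband, ten for dial-up); it averages repeated measurements per connection type, as the article suggests.

```python
# Hypothetical performance check: average page-load time per connection
# type must stay within its specified threshold. Names and numbers are
# illustrative assumptions, not a standard.

# Maximum acceptable average download time in seconds, per connection type.
THRESHOLDS = {"broadband": 2.0, "dialup": 10.0}

def meets_performance(samples: dict[str, list[float]]) -> dict[str, bool]:
    """Return, per connection type, whether the average load time is acceptable."""
    results = {}
    for conn, times in samples.items():
        avg = sum(times) / len(times)  # average over multiple usages
        results[conn] = avg <= THRESHOLDS[conn]
    return results

measured = {
    "broadband": [1.4, 1.9, 2.2],   # average 1.83 s, within the 2 s limit
    "dialup":    [8.0, 11.5, 12.0], # average 10.5 s, exceeds the 10 s limit
}
print(meets_performance(measured))
```

The same shape works for throughput or any other quantified performance requirement; only the metric and thresholds change.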
Suitable performance also includes providing required functionality, including suitable usage under identified usage situations, at specified levels of accuracy and adequacy. For example, the downloaded web page must display in the user's browser in a readable manner, with regard to both visibility and placement. Visibility in turn could be defined with respect to brightness, contrast, and color-blindness metrics. Text and images also should be interpreted in a consistent and correct manner, and functionality also means that the content is correct.
Defects are failures to meet the specified requirements. The software is deemed to provide suitable functionality when it has no more than the acceptable number of defects in each severity category for a given degree of usage of the product. For instance, software running a life-critical medical device probably cannot tolerate any high-severity defects that cause the product not to function at all, and perhaps no more than ten low-severity defects such as cosmetic issues with packaging, recognizing that a seemingly cosmetic packaging defect could be high-severity if it causes the device to be misused.
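The acceptance rule described above can be expressed as a simple check: the release passes only if every severity category is within its allowed defect count. This is a sketch with assumed limits loosely based on the medical-device example; real limits would come from the product's specified requirements.

```python
# Illustrative defect-acceptance rule: no severity category may exceed its
# acceptable defect count. The limits below are invented for illustration.
ACCEPTABLE_DEFECTS = {"high": 0, "medium": 3, "low": 10}

def release_acceptable(found: dict[str, int]) -> bool:
    """True only if every severity category is within its acceptable limit."""
    return all(found.get(severity, 0) <= limit
               for severity, limit in ACCEPTABLE_DEFECTS.items())

print(release_acceptable({"high": 0, "medium": 2, "low": 7}))  # within limits
print(release_acceptable({"high": 1, "medium": 0, "low": 0}))  # one high-severity defect fails
```

Note that a single high-severity defect fails the release even when every other category is clean, which matches the zero-tolerance example for life-critical devices.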
Realize also that battery failure would be considered a high-severity defect if it occurs within, say, five years of device installation, but perhaps acceptable functionality thereafter, provided the device has suitable means of alerting users to, and enabling replacement of, a dying battery. Metrics for the alerting and replacement also could be defined.
What are some requirements-related metrics which would improve the software process' ability to produce more reliable software?
Many of the relevant requirements-related metrics actually are collected later in the software process and include the product-results measures described above regarding reliability.
In addition, it's important to measure the activities involved in defining the requirements and the indicators of requirements quality available during the development process prior to the product results themselves.
Requirements-definition measures include the effort and duration of requirements discovery activities per skill type and level, including users/stakeholders. Such activities include the various data-gathering techniques, analysis methods, drafting, review, and revision. If tools are employed, their use needs to be measured too.
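Collecting such effort measures amounts to tallying hours by activity and by skill type. Here is a minimal sketch under assumed activity and role names; a real process would pull these records from a time-tracking tool rather than a hard-coded list.

```python
# Sketch of aggregating requirements-definition effort per activity and
# skill type/role. Activity and role names are illustrative assumptions.
from collections import defaultdict

def effort_by_activity_and_role(log: list[tuple[str, str, float]]) -> dict:
    """Aggregate hours from (activity, role, hours) effort records."""
    totals: dict[tuple[str, str], float] = defaultdict(float)
    for activity, role, hours in log:
        totals[(activity, role)] += hours
    return dict(totals)

effort_log = [
    ("interviewing", "analyst", 6.0),
    ("interviewing", "user",    4.0),   # user/stakeholder time counts too
    ("review",       "analyst", 2.5),
    ("interviewing", "analyst", 3.0),
]
print(effort_by_activity_and_role(effort_log))
```

The resulting breakdown supports the comparisons the article calls for, such as how much review effort and user/stakeholder time each requirements activity actually received.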
Requirements quality should be measured at each point where requirements issues can be detected: especially requirements reviews, but also the creation and review of designs and code, and each level and type of testing.
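One common way to operationalize this is to classify each detected issue by root cause and tally the requirements-caused ones by the phase in which they surfaced. The sketch below assumes illustrative phase and root-cause labels; issues caught late (for example, in system test) are the expensive ones this metric is meant to expose.

```python
# Sketch: count requirements-caused issues by detection phase, so late
# detection becomes visible. Phase/root-cause labels are assumptions.
from collections import Counter

def requirements_issues_by_phase(issues: list[dict]) -> Counter:
    """Count issues with root cause 'requirements', grouped by detection phase."""
    return Counter(issue["detected_in"] for issue in issues
                   if issue["root_cause"] == "requirements")

issues = [
    {"root_cause": "requirements", "detected_in": "requirements review"},
    {"root_cause": "requirements", "detected_in": "system test"},
    {"root_cause": "coding",       "detected_in": "unit test"},
    {"root_cause": "requirements", "detected_in": "system test"},
]
print(requirements_issues_by_phase(issues))
```

A high count in late phases relative to requirements reviews suggests the reviews themselves are not catching what they should.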
One of the big challenges with measures of effort and duration is that they don't reflect skill and appropriate methodology, which undoubtedly are the far greater determinants of requirements adequacy, which in turn manifests in the resulting product's reliability.
For instance, the conventional wisdom is that lack of user involvement reduces requirements quality, which is true, but it too often leads to the overly simplistic conclusion that merely increasing user involvement is sufficient to produce suitable requirements. The fallacy of this seemingly logical conclusion is that just spending more time doing the same ineffective things in the same ineffective ways won't improve your requirements appreciably.
For 15 years, the Standish Group's CHAOS report repeatedly has attributed poor project outcomes mainly to inadequate user involvement. Everyone knows user involvement is important. What they don't know, because conventional analyses such as the Standish Group's don't know it either, is that conventional requirements definition is destined to inadequacy regardless of how much time users are involved.
So long as requirements definers focus mainly on the requirements of the product/system/software they expect to create, they will continue to encounter requirements creep and product inadequacy, including unreliability. Such issues diminish dramatically when one learns first to discover effectively the REAL business requirements: deliverable "whats" that provide value when satisfied by the product/system/software "how."