My initial impulse is to respond flippantly with a scorecard of: Developers 2, Requirements 0. That's probably not what the questioner had in mind, and of course I'd never say something so inappropriate. Seriously, though, here are several ideas about what this (to me) unfamiliar use of "scorecard" may mean.
It may relate to Kaplan and Norton's balanced scorecard, used to relate several key financial and non-financial performance measures to strategy. Thus, learning and growth measures might pertain to an increased ability to discover the right requirements. Business process measures might show how business value changes when requirements are satisfied. Customer measures might describe effects on customer satisfaction by met and unmet requirements. Financial measures might contrast the cost and revenue impacts of meeting and not meeting requirements.
I applaud anyone with such valuable measures, because they get to the essential effectiveness of the requirements process. Unfortunately, few organizations have the awareness or wherewithal to measure them or to recognize that while they roll up from cross-project detail, they are not likely to help get any particular project's requirements right.
"Scorecard" could be another name for a traceability matrix. The matrix cross-references each requirement to the various places it is addressed, such as particular parts of design, code and tests. This helps because tracing forward from the requirements highlights those that are not addressed, and tracing backward from downstream artifacts reveals things that either are extra or need to have an associated requirement defined. By adding the date of each entry, and possibly even graphing the flow, one can better follow the progress of requirements being addressed. Be careful, though; the fact that a requirement has been addressed tells us nothing about how well or thoroughly it has been addressed.
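A minimal sketch of those forward and backward traces, using hypothetical requirement IDs and artifact names for illustration:

```python
# Hypothetical traceability matrix: requirement IDs mapped to the
# downstream artifacts (design, code, tests) that address them.
trace = {
    "REQ-1": ["design/login.md", "src/login.py", "tests/test_login.py"],
    "REQ-2": ["design/report.md"],
    "REQ-3": [],  # nothing addresses this requirement yet
}

# All artifacts that exist in the project, cited by a requirement or not.
artifacts = {"design/login.md", "src/login.py", "tests/test_login.py",
             "design/report.md", "src/export.py"}

def forward_trace(trace):
    """Requirements with no downstream artifact addressing them."""
    return sorted(req for req, hits in trace.items() if not hits)

def backward_trace(trace, artifacts):
    """Artifacts no requirement points at: extras, or signs of a
    requirement that still needs to be defined."""
    cited = {a for hits in trace.values() for a in hits}
    return sorted(artifacts - cited)
```

Here `forward_trace` flags REQ-3 as unaddressed, and `backward_trace` flags `src/export.py` as an artifact with no associated requirement.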
Ratings and rankings
The scorecard could be describing the relative importance or priority of several requirements. Two approaches are common. Rating, the most common, categorizes each requirement individually. Rating categories typically are some form of high-medium-low. Numeric weights, such as one to three or one to 10, give a degree of quantification to the same categories. Some methods use more qualitative categories, such as mandatory, useful, desirable. Similarly, MoSCoW analysis uses must, should, could or won't.
Ratings are popular because they seem intuitive and reflect normal ways of evaluating important requirements, but they tend to be unreliable and fail to meaningfully differentiate relative importance. Everything ends up being rated as high importance, which can also mean nothing is really of high importance. As an alternative, ranking tends to be a somewhat more difficult but ultimately more effective prioritization technique because it forces distinctions.
This helps because the most important requirement clearly stands out, then the next most important and so forth. The trap is that rankings don't tell us how much more important each requirement is compared to the next lower-ranked one. Therefore, rankings need to be supplemented with weighting that can guide proportionate allocation of effort and resources to respective requirements.
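A short sketch of how ranking supplemented with weights can drive proportionate allocation; the requirement names, weights and hour budget are all hypothetical:

```python
# Ranked requirements, with hypothetical weights expressing how much more
# important each one is than the next lower-ranked one.
weights = {"REQ-A": 50, "REQ-B": 30, "REQ-C": 15, "REQ-D": 5}

def allocate_effort(weights, total_hours):
    """Split a budget of effort across requirements in proportion to
    each requirement's weight."""
    total_weight = sum(weights.values())
    return {req: total_hours * w / total_weight for req, w in weights.items()}

# Allocate a hypothetical 200-hour budget: the ranking alone says only
# that REQ-A comes first; the weights say it deserves half the effort.
budget = allocate_effort(weights, total_hours=200)
```

Plain rankings would treat the gap between first and second place the same as the gap between third and fourth; the weights make those gaps explicit.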
A scorecard could capture "scores" for each requirement with regard to one or more presumably relevant variables. For example, how much each requirement addresses various quality factors such as safety might be rated. Other variables could indicate the extent each requirement affects particular organizational areas or compliance regulations.
In a slightly different context, I've found in acquisitions that risks can be assessed by having proposing vendors self-score against each buyer requirement: whether their product already addresses the requirement, could be customized at no extra charge to address it, could be customized at a specified added charge to address it, or cannot address it. This makes the vendor evaluation much more reliable, because the vendor is promising legally binding, identifiable degrees of performance. The typical technique, in which the buyer guesses what the vendor can do with regard to each requirement, is not legally binding on the vendor.
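One way to tabulate such self-scores is to assign a numeric value to each answer category and total them per vendor. The category names, point values and vendor responses below are all hypothetical:

```python
# Hypothetical scoring of vendor self-assessment categories,
# higher scores for lower-risk answers.
SCORES = {
    "already_addresses": 3,
    "customize_free": 2,
    "customize_for_fee": 1,
    "cannot_address": 0,
}

# Each vendor's self-scored answer per buyer requirement (illustrative).
responses = {
    "VendorA": {"REQ-1": "already_addresses", "REQ-2": "customize_for_fee"},
    "VendorB": {"REQ-1": "customize_free", "REQ-2": "cannot_address"},
}

def vendor_score(answers):
    """Total a vendor's self-scores across all requirements."""
    return sum(SCORES[a] for a in answers.values())

totals = {vendor: vendor_score(a) for vendor, a in responses.items()}
```

The per-requirement answers, not just the totals, are what make the comparison reliable: each answer is an identifiable commitment the buyer can hold the vendor to.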
For test planning, I use a tool called a functionality matrix that you could consider a requirements scorecard. It captures requirements in the form of use cases. For each user view step, one or more technical view factors are involved, such as the create, retrieve, update and delete (CRUD) categories. This enables more thorough testing because each entry on the matrix is something that needs to be demonstrated, and many of the matrix entries reveal requirements that have been overlooked. Additionally, a scorecard could reflect evaluation of the adequacy of each requirement. Categories could include clear, testable, complete, correct, necessary, feasible and missing.
Such a systematic way of capturing evaluations improves manageability and increases the likelihood that issues will be addressed and not forgotten. Be careful, though, that the categories are not applied mindlessly, which can easily happen with any kind of template.