What does it mean to test a requirement definition? What are the benefits of doing this?
Funny you should ask. I don't recall anyone but me using a term similar to "test a requirement definition," so my interpretation may not be what was meant by whomever you heard use the phrase. Moreover, the phrase seems to cause confusion.
Thus, I've had to rename several times a seminar I present that originally was titled, "21 Ways to Test that Requirements Are Right." Prospective attendees often got confused about course content and thought it was a class on writing requirements-based tests to demonstrate that developed programs conform to the requirements. Such confusion continues even with the latest course title, "Evaluating Business Requirements Adequacy."
I think part of the confusion stems from coupling the words "test" and "requirements." People don't seem to realize that tests can be dynamic or static. Dynamic tests execute the thing being tested, such as running a developed program to demonstrate that it meets requirements. Static tests are anything short of actually executing the thing being tested and typically take the form of reviews, walkthroughs, or inspections.
I interpret the phrase "test a requirement definition" to mean a static test, such as a review of a requirements definition to determine if the requirements as defined are clear, complete, and correct. If requirements do not meet these criteria, then the products/systems/software developed to satisfy the requirements will not be suitable. The user/customer/stakeholder's needs will not be met in a timely manner, and often considerable additional time and expense will be needed to revise the developed product/system so that it in fact does provide desired value. This is often called "creep" and is mistakenly attributed to changing requirements rather than changing awareness of what the REAL requirements have been all along. Creep is a major cause of project budget and schedule overruns.
High costs for errors not caught during requirements
My consulting clients and seminar attendees repeatedly confirm the accuracy of oft-cited statistics that the cost of fixing a requirements error increases by orders of magnitude with each successive life cycle phase. That is, it will cost 10 times as much to fix a requirements error after it's been programmed as it would to fix the error in the requirements before it's programmed. Similarly, it will cost 100 to 1,000 or more times as much to fix a requirements error in a program that has gone into production as it would have cost to fix the error in the requirements before the program was written to satisfy them.
These figures may well be conservative, at least partly because they're missing a critical initial life cycle phase. What these studies call "requirements" are product/system/software requirements, which actually are a form of high-level design -- developers program from designs rather than from REAL business requirements. The REAL business requirements need to be defined before the product/system/software that is presumed to satisfy them.
Consequently, the downstream costs of fixing problems caused by errors in REAL business requirements will be even greater than the figures cited for fixing problems stemming from errors in product/system/software requirements.
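The escalation described above can be illustrated with a toy calculation. This is only a sketch of the oft-cited rough orders of magnitude from the preceding paragraphs; the phase names and the base dollar figure are assumptions for illustration, not data from any study.

```python
# Illustrative only: relative cost multipliers for fixing a requirements
# error, keyed by the phase in which the error is found. The 1x/10x/100x
# values are the rough orders of magnitude cited in the text; the text
# notes the production figure can reach 1,000x or more.
relative_fix_cost = {
    "in requirements (before coding)": 1,
    "after coding, before production": 10,
    "in production": 100,
}

base_cost = 500  # hypothetical dollars to fix the error during requirements

for phase, multiplier in relative_fix_cost.items():
    print(f"{phase}: ${base_cost * multiplier:,}")
```

Run against a hypothetical $500 requirements-phase fix, the same error costs $5,000 to fix after coding and $50,000 once in production, which is why the early reviews discussed below pay off.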
In my experience, very few organizations adequately test/review their requirements to improve accuracy and completeness. Many organizations don't review their requirements at all, either because they don't have requirements, their requirements are actually dictates from above that cannot be challenged for political reasons, or they don't know how to review them.
Those that do review requirements typically are far less effective than they presume because they use only one or two relatively weak review techniques and don't realize how weak the methods really are. Although they don't recognize it and may argue vehemently to the contrary, ordinarily their reviews concentrate almost entirely on issues of form rather than substance. Such reviews mainly emphasize clarity and testability, which is primarily a function of clarity. Both clarity and testability are form issues -- since a requirement can be clear, testable, and wrong -- and clarity and testability are irrelevant for overlooked requirements.
The seminar I've mentioned, and my book Discovering REAL Business Requirements for Software Project Success, describe more than 21 ways to review requirements. Some of those are the familiar ways to review formats, which often are described in the context of how to write "good" requirements. In addition, though, a number of the methods use special techniques to reveal content errors: overlooked and incorrect requirements.
It's not the amount of time one spends reviewing requirements that makes the payoff difference. Rather, it's understanding the need to define detailed REAL business requirements and to use more of the 21-plus more powerful ways to review whether they are right. Finding requirements errors early, before they turn into more expensive errors, is the single most effective way to improve delivering quality on time and on budget.
Related Q&A from Robin F. Goldsmith
How do you engage high-level business executives in the process of writing business requirements?
Why don't users seem to appreciate typical software QA testing status reports?
What is the value of online discussion forums? This expert sees the good and the bad in online forums.