Software testers often complain that software requirements specifications are too vague to be tested. How do you determine whether a requirement is fully developed?
One of the biggest requirements issues is "lack of testability," which is largely due to unclear or ambiguous requirements. When a software requirements specification is unclear, the software built to meet it is likely to be designed and developed incorrectly, and the tests meant to confirm that the code meets those unclear requirements are likely to be wrong as well.
That said, I think testers protest too much about software requirements specifications being too ambiguous. Testers who get worked up about ambiguity often alienate the people they work with. They may demand that the analyst rewrite the requirements before they will start testing the code against them. However, the analyst has neither the time nor the desire to redo his or her work, and the project is already late, so testing can't wait any longer. Others come to perceive the tester as a trivial nitpicker who is wasting precious time with a broken-record refrain. The tester aggravates the situation by complaining that "nobody else cares about quality."
Not only is this tactic ill-advised, but the lack-of-testability premise itself is wrong. Eliminating ambiguity (as testers often demand) is not practical. For example, millions of words have been written trying to remove ambiguity from the Internal Revenue Code, only to make the tax law virtually unintelligible.
More importantly, clarity is a form issue, not a content issue. A software requirements specification can be perfectly clear and perfectly wrong, and clarity and testability are irrelevant for a requirement that was overlooked entirely. Focusing exclusively on testability actually interferes with finding the more important content issues: wrong and overlooked requirements.
Stop wasting time and good will yammering about testability. Instead, try this approach: Write positive and negative tests that demonstrate whether the code works the way you think it should to satisfy the requirements as you interpret them. If your interpretation is correct, your tests are all set. If your interpretation differs from the developer's, the concrete nature of the failed test makes it easier for everyone to understand the requirements issue and determine what the code or test should be.
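To make this concrete, here is a minimal sketch of the approach in Python. The requirement, the `validate_password` function, and both test names are hypothetical stand-ins invented for illustration; the point is that each test encodes the tester's interpretation of the requirement, so a failure surfaces a requirements disagreement as a concrete, discussable artifact rather than an abstract complaint about wording.

```python
# Hypothetical requirement: "Passwords must be at least 8 characters long."
# validate_password is a stand-in for the code under test; in practice you
# would import the developer's implementation instead.

def validate_password(password: str) -> bool:
    """Stand-in implementation reflecting one interpretation of the requirement."""
    return len(password) >= 8

# Positive test: input that should be accepted under this interpretation.
def test_accepts_minimum_length_password():
    assert validate_password("abcdefgh") is True

# Negative test: input that should be rejected under this interpretation.
# If the developer read the requirement as "more than 8 characters,"
# this test fails and exposes the disagreement immediately.
def test_rejects_short_password():
    assert validate_password("abcdefg") is False
```

If the developer's code and these tests disagree, the failed assertion pinpoints exactly which boundary the two parties interpreted differently, which is far easier to resolve than arguing over the prose of the specification.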
Instead of rewriting the requirements, which often won't happen anyway, use the tests as a supplemental form of requirements. This simple, straightforward approach existed long before the Agile folks who think they invented it. Regardless, it works, and you can take advantage of it with any methodology and without buzzwords.
Related Q&A from Robin F. Goldsmith