Software testers often complain that software requirements specifications are too vague to be tested. How do you determine whether a requirement is fully developed?
There are a few rules in writing a software requirements specification that apply in this case.
Requirements must be measurable, atomic (that is, focused on a single goal) and unambiguous, and both the testers and the developers must confirm their understanding of the software requirements specification.
Untestable requirements result when the author of the requirement doesn't appreciate the need to test requirements or the level of specificity required for effective testing.
The best way to remove ambiguity when writing a software requirements specification (SRS) is to apply active-listening techniques. To do this, you have the testers (and developers, but we'll focus on testers) communicate back to you what they believe the requirement entails. This can be done verbally, but that means there's a possibility that the shared understanding (at the time of discussion) will be lost before the tests are created. A very effective technique to use in this situation is to have the testers document their test design. The design of the tests embodies the tester's understanding of what needs to be tested, just as the developer's design documents detail what the developer intends to build.
As the author of the requirements, you will review the test design and confirm that there is a test for every acceptance criterion of the requirement. Any acceptance criterion that lacks an associated test is the result of a communications failure, regardless of where the blame lies. There may also be unnecessary tests in the test design. That, too, is a sign of a communications failure -- the tester is assuming an acceptance criterion that the author did not intend. Reviewing this test design requires the author of the requirements to be analytical enough to appreciate all of the nuances, permutations and scenarios by which the solution will be exercised within the context of the atomic requirement.
An atomic requirement is one that can only be measured as complete or not complete. For example, a requirement that states that the user will provide authentication and be granted access to the application is not atomic. You would need two separate requirements -- one that reflects the need for users to be authenticated and one that provides access to the application only to authenticated users. Each can be developed separately, and each can be tested separately.
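Splitting the requirement this way yields two independently testable pieces. A minimal sketch in Python of what that separation might look like; the function names, the in-memory user store and the tests are all illustrative assumptions, not part of the original article:

```python
# Hypothetical stand-ins for the two atomic requirements.
USERS = {"alice": "s3cret!pw"}   # username -> password (illustrative store)
AUTHENTICATED = set()            # users who have successfully authenticated

def authenticate(user, password):
    """Atomic requirement 1: users can be authenticated."""
    if USERS.get(user) == password:
        AUTHENTICATED.add(user)
        return True
    return False

def grant_access(user):
    """Atomic requirement 2: only authenticated users are granted access."""
    return user in AUTHENTICATED

# Each atomic requirement gets its own tests:
assert authenticate("alice", "s3cret!pw") is True   # requirement 1: valid login
assert authenticate("alice", "wrong") is False      # requirement 1: invalid login
assert grant_access("alice") is True                # requirement 2: authenticated user admitted
assert grant_access("bob") is False                 # requirement 2: unauthenticated user denied
```

Because each assertion maps to exactly one requirement, a failing test points directly at the requirement that is not met.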
A benefit of this approach is that it tends to lead developers to write more modular code. More modular code allows for adaptation and requirements reuse in the future with less effort. For example, as part of expanding your software into new markets where security requirements are more stringent, you may introduce requirements for two-factor authentication. If the software was written so that the authentication module is distinct from the code that grants access to authenticated users, it will be much easier to develop and test changes to the authentication mechanism.
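One way the modularity described above could be realized is behind a small authentication interface, so that swapping password-only authentication for two-factor authentication never touches the access-granting code. This is a hedged sketch; the class and method names are assumptions for illustration:

```python
from abc import ABC, abstractmethod

class Authenticator(ABC):
    """Authentication module, kept distinct from access-granting code."""
    @abstractmethod
    def verify(self, user: str, credentials: dict) -> bool: ...

class PasswordAuthenticator(Authenticator):
    def __init__(self, passwords):
        self.passwords = passwords
    def verify(self, user, credentials):
        return self.passwords.get(user) == credentials.get("password")

class TwoFactorAuthenticator(Authenticator):
    """A later, stricter requirement: password plus one-time code."""
    def __init__(self, passwords, otp_codes):
        self.inner = PasswordAuthenticator(passwords)
        self.otp_codes = otp_codes
    def verify(self, user, credentials):
        return (self.inner.verify(user, credentials)
                and self.otp_codes.get(user) == credentials.get("otp"))

def grant_access(user, authenticator, credentials):
    """Access code is unchanged when the authenticator is swapped."""
    return authenticator.verify(user, credentials)

pw = PasswordAuthenticator({"alice": "pw1"})
tfa = TwoFactorAuthenticator({"alice": "pw1"}, {"alice": "123456"})
assert grant_access("alice", pw, {"password": "pw1"}) is True
assert grant_access("alice", tfa, {"password": "pw1"}) is False   # second factor missing
assert grant_access("alice", tfa, {"password": "pw1", "otp": "123456"}) is True
```

Only the `Authenticator` implementation and its tests change when the two-factor requirement arrives; the access-granting requirement and its tests are untouched.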
Measurable requirements are sometimes straightforward. For example, "passwords must have at least eight characters" is very clear, whereas "passwords must be secure" is not, because the meaning of the word "secure" is not clear. In situations like this, reviewing the goal of the requirement is valuable. In this case, the goal is preventing unauthorized access, so acceptance standards that actually reflect a level of security are better than acceptance standards that are simply easy to measure.
You might say, for example, that the system must not allow an unauthorized user to gain access via a brute-force attack in under one hour. The eight-character password "password" would not survive this attack, so an eight-character minimum alone is a poor requirement.
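An acceptance check tied to the goal (resisting brute-force attacks) can still be made measurable. A minimal sketch, assuming a hypothetical common-password list; the list contents and threshold are illustrative, not a real policy:

```python
# Acceptance check tied to the goal, not just to an easy-to-measure proxy.
# The common-password list here is a tiny illustrative stand-in.
COMMON_PASSWORDS = {"password", "12345678", "qwertyui", "letmein1"}

def password_acceptable(pw: str) -> bool:
    if len(pw) < 8:                       # length still matters...
        return False
    if pw.lower() in COMMON_PASSWORDS:    # ...but length alone is not sufficient
        return False
    return True

assert password_acceptable("password") is False   # eight characters, yet trivially guessable
assert password_acceptable("tr0ub4dor&3") is True
```

The first assertion captures exactly the failure described above: "password" satisfies the naive eight-character measure while defeating the requirement's actual goal.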
It's also important to give your developers some flexibility. They may, for example, choose to "lock out" an account after three successive unsuccessful authentication attempts. When defining the measurements, identify the rationale behind them -- doing so simultaneously makes the requirements testable and fundamentally better.
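The lockout behavior described above is itself directly testable. A small sketch, assuming a hypothetical `Account` class; the threshold of three attempts mirrors the example, and everything else is illustrative:

```python
MAX_ATTEMPTS = 3  # lockout threshold from the example above

class Account:
    def __init__(self, password):
        self.password = password
        self.failed_attempts = 0
        self.locked = False

    def authenticate(self, attempt):
        if self.locked:
            return False
        if attempt == self.password:
            self.failed_attempts = 0
            return True
        self.failed_attempts += 1
        if self.failed_attempts >= MAX_ATTEMPTS:
            self.locked = True
        return False

acct = Account("s3cret!pw")
for _ in range(MAX_ATTEMPTS):
    assert acct.authenticate("guess") is False
assert acct.locked is True
assert acct.authenticate("s3cret!pw") is False  # even the correct password is refused
```

The final assertion is the measurable acceptance criterion: after three failures, the account refuses access regardless of credentials.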
The active-listening exercise, particularly when done through the creation and review of a test-design document, is very effective at discovering requirements that cannot be tested. Ambiguity is revealed when the author of the requirements discovers that the test design does not address important acceptance criteria. Lack of atomicity is discovered during the analysis of the test design, as opportunities for decomposition of the requirement into multiple requirements become apparent. Unclear measurement objectives become apparent when the test design fails to account for precise measures -- typically by combining a collection of tests that will pass (acceptable measures) and tests that will fail (unacceptable measures). When either failing tests or passing tests are absent or the threshold between passing and failing is not what the author intended, the misinterpretation becomes apparent.
As a bonus, the test-design document can be shared with the developers, providing even greater clarity about what is expected of them, allowing them to "test" their solution prior to submitting it to the testers as part of their personal development process. This results in more efficient software development.
Once requirements authors are comfortable with this process, they will begin to mentally walk through the same exercises when documenting the requirements and when gathering market information needed to reach clarity. As they accumulate experience, less time will be required during the active-listening cycle, as the requirements will require less revision in order to be more testable.
What are you doing to avoid vague requirements that can't be tested? Let us know and follow us on Twitter @SoftwareTestTT.