How important is it to the testing process that requirements be managed properly? In what ways can team members gather and manage requirements that could facilitate testing?
Properly managing requirements is important to both development and testing, but it's not as important as some sources -- primarily tool vendors -- proclaim. Like so many buzzwords, "requirements management" is used by various folks to mean widely different things, usually without recognizing that those differences exist. Some use the term to refer to the requirements process as a whole; in my humble opinion, that's not accurate.
I find it more appropriate to restrict requirements management to the mechanical administrative activities of capturing, categorizing, tracking and controlling changes to requirements. The rest of the requirements process involves the far more difficult and important steps of identifying and evaluating the requirements content.
Lest you think I'm alone in making these distinctions, be aware that most automated requirements management tools perform only the former functions. Even those that do address requirements definition generally do so by structuring the requirements into use cases and confirming that various structural elements are present. It's also instructive to note that the original Capability Maturity Model had a level-2 key process area called "requirements management," whereas its successor, Capability Maturity Model Integration, added a separate "requirements development" key process area at the higher maturity level 3.
Probably the most common tool for the administrative area of requirements management is a traceability matrix. A traceability matrix shows where a requirement comes from and all the places where it is used, such as in various design components and tests. Capturing each cross-reference by hand is laborious. Automated tools all help maintain cross-references, but no tool can identify which cross-references must be captured in the first place. The tools do save IT from some grunt work by generating current traceability matrix versions once cross-reference data has been manually updated in the tool, such as when a test has passed or failed.
Tool vendors are especially prone to overstating the value of traceability matrices. Tracing forward from requirements can reveal ones that are not tested at all, and tracing backward from tests or components can reveal ones that seem not to relate to any requirement. To be practical, though, teams trace only high-level requirements, and a single test linked to a requirement is enough to generate a checkmark noting it has been tested. That in no way indicates the thoroughness or adequacy of those tests.
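To make the forward and backward tracing concrete, here is a minimal sketch of a traceability matrix as a simple data structure. All requirement and test IDs are hypothetical, and real tools store far richer cross-reference data; this only illustrates the two gap checks described above.

```python
# Forward links: each requirement ID maps to the tests that exercise it.
# IDs below are made-up examples, not from any real project.
trace = {
    "REQ-1": ["TEST-A", "TEST-B"],
    "REQ-2": ["TEST-C"],
    "REQ-3": [],          # captured, but no test references it
}

# The full test suite, which may include tests linked to no requirement.
all_tests = {"TEST-A", "TEST-B", "TEST-C", "TEST-D"}

def untested_requirements(matrix):
    """Forward trace: requirements with no associated tests."""
    return sorted(req for req, tests in matrix.items() if not tests)

def orphan_tests(matrix, tests):
    """Backward trace: tests that map back to no requirement."""
    linked = {t for linked_tests in matrix.values() for t in linked_tests}
    return sorted(tests - linked)

print(untested_requirements(trace))   # ['REQ-3']
print(orphan_tests(trace, all_tests)) # ['TEST-D']
```

Note that REQ-1 and REQ-2 each get a "checkmark" here simply because at least one test references them -- exactly the limitation described above, since nothing in the matrix says whether those tests are thorough or adequate.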
The key to more adequate requirements and tests is primarily in the definition, and only secondarily in the managing. Of course, knowing when things change is essential. This is another aspect of requirements management, as is limiting change. But the best way to control change is to make the requirements content more accurate initially.
Requirements provide value when a design of a product, system or software is implemented to meet them. Effective tests demonstrate that the designed and implemented product in fact satisfies the requirements. However, many tests merely demonstrate that the product was implemented as designed, which by itself provides no value.
Prioritizing requirements and defining them to the point where they are clear and testable increases the chance that the right product will be built correctly. It also provides more test points to confirm the product has been constructed correctly. Tracing tests to more detailed requirements can add confidence in coverage, but it still won't tell how adequate the tests are. Few trace in such detail, because the greater the granularity, the more time-consuming it is to identify and capture cross-references.