The trace matrix is a well-established test coverage tool. A quick definition: a trace matrix, usually formatted as a table, maps one or more test cases to each system requirement. The fundamental premise is that if at least one test case has been mapped to each requirement, then every requirement of the system must have been tested, and therefore the trace matrix proves testing is complete.
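In code terms, a trace matrix is just such a requirement-to-test-case mapping. The sketch below, using hypothetical requirement and test-case IDs, shows the completeness check a trace matrix embodies: it can report only whether a test is mapped to each requirement, not whether that test is any good.

```python
# A minimal sketch of a trace matrix as a mapping from requirement IDs
# to the test cases that cover them. All IDs are hypothetical,
# for illustration only.
trace_matrix = {
    "REQ-001": ["TC-101", "TC-102"],  # login requirement
    "REQ-002": ["TC-201"],            # password-reset requirement
    "REQ-003": [],                    # reporting requirement: no tests mapped yet
}

def uncovered(matrix):
    """Return the requirement IDs with no mapped test case."""
    return [req for req, tests in matrix.items() if not tests]

gaps = uncovered(trace_matrix)
if gaps:
    print("Requirements without a mapped test case:", gaps)
else:
    print("Every requirement has at least one mapped test case.")
```

Once `REQ-003` gets a test case mapped, this check passes and the matrix is "complete" -- which is exactly the point of the reservations that follow: completeness of the mapping says nothing about the quality of the requirements or of the tests.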
I see flaws in this line of reasoning, and here are my primary reservations about over-reliance on the trace matrix:
- A completed trace matrix is only as valuable as its contents. If the requirements are incomplete or unclear, then the test cases designed and executed might fulfill the requirements, but the testing won't have provided what was needed. Conversely, if the requirements are clear but the test cases are insufficient, then a completed trace matrix still doesn't deliver the coverage and confidence that a filled-in table seems to promise.
- The trace matrix relies too stringently on system requirements -- ensuring all system requirements have been tested is, after all, its primary design. But all sorts of defects relevant to how the application serves the customer can be found outside the system requirements. By looking only at the system requirements, and potentially not considering customers' needs and real-life product usage, essential testing could be overlooked. Testing only against specified requirements may be too narrowly focused to be effective in real-life usage -- unless the requirements are exceptionally robust.
So how do you call the end of testing? And how can you assure test coverage?
- To assure coverage at the end, I'd start by reviewing the beginning -- look at the test planning. Did your test planning include a risk analysis? A risk analysis at the start of a project can provide solid information for your test plan. Host a risk analysis, either formally or informally, and gather ideas by talking with multiple people. Get different points of view from your project stakeholders: talk to your DBAs, your developers, your network staff, and your business analysts. Plan testing based on your risk analysis.
- As a project continues, shift testing based on the defects found and on the product and project as they evolve. Focus on high-risk areas. Adapt testing based on your and your testing team's experience with the product. Be willing to adjust your test plan throughout the project.
- Throughout testing, watch the defects reported. Keep having conversations and debriefs with hands-on testers to understand not just what they've tested but how they feel about the application. Do they have defects they've seen but haven't been able to reproduce? What is their perception of the current state of the application?