What features are available in a test management tool?
This is an interesting question. The major commercial options share similar basic characteristics and features, whose efficacy will vary with the organization's needs. Many of the open-source options offer much the same feature set as well.
The greatest considerations should be flexibility and ease of use for the technical staff who will work with the tool on a regular basis.
A tool that requires significant rework of test cases before they can be reused across projects does nothing to speed testing or reduce frustration among testers. Both directly affect morale and team efficiency, and ultimately the cost of testing.
A tool that allows test data entry and tracking within the cases (where test data must be controlled that rigidly) is keenly important and a huge time- and cost-saving feature. Likewise, the ability to rearrange test cases as needed for different projects, without recreating or heavily editing them, will speed the effort. Also important is the ability to relate defects to specific test cases, along with the tests for the subsequent fixes; this can help with analysis of the project and lessons learned on completion.
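To make those relationships concrete, here is a minimal sketch of the kind of data model such a tool maintains. All of the names (TestCase, Defect, TestPlan) and IDs are illustrative assumptions, not any particular tool's schema:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: TestCase, Defect, and TestPlan are illustrative
# names, not taken from any real test management tool.

@dataclass
class TestCase:
    case_id: str
    title: str
    test_data: dict = field(default_factory=dict)  # data tracked within the case

@dataclass
class Defect:
    defect_id: str
    found_by_case: str  # the test case that exposed the defect
    verified_by_cases: list = field(default_factory=list)  # tests for the fix

@dataclass
class TestPlan:
    """A project's plan is an ordered list of case IDs, so the same
    cases can be rearranged per project without recreating them."""
    project: str
    case_ids: list = field(default_factory=list)

login = TestCase("TC-1", "Login succeeds", {"user": "alice"})
bug = Defect("D-42", found_by_case="TC-1", verified_by_cases=["TC-1", "TC-7"])

plan_a = TestPlan("Project A", ["TC-1", "TC-2"])
plan_b = TestPlan("Project B", ["TC-2", "TC-1"])  # same cases, different order
```

The point of the sketch is the separation: cases hold their own data, plans only reference case IDs, and defects link back to the cases that found and verified them, so nothing needs to be duplicated per project.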
Being able to associate test cases, or even individual steps, with specific documented requirements may be helpful in some environments. However, this tends to be limited to environments with truly rigorous requirement definition and discovery practices in place. Far too often, skilled testers working on test scenarios spot an inconsistency in the requirements that was not noticed earlier in the process (an argument for involving testers early, as well as a warning against expecting 1:1 relationships between documented requirements and test cases).
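A requirements-to-tests traceability view can be thought of as a simple mapping, where one requirement may need several cases (so not 1:1) and gaps in coverage become visible. This is a hedged illustration with made-up requirement and case IDs, not a real tool's report:

```python
# Hypothetical traceability mapping: requirement IDs and test case IDs
# are invented for illustration.

coverage = {
    "REQ-1": ["TC-1", "TC-3"],  # one requirement covered by several cases
    "REQ-2": ["TC-2"],
    "REQ-3": [],                # a gap: no covering test case yet
}

# Requirements with no associated test cases stand out immediately.
uncovered = [req for req, cases in coverage.items() if not cases]
print(uncovered)  # the coverage gaps a traceability view would flag
```

Even this trivial view shows why a strict 1:1 expectation breaks down: REQ-1 legitimately maps to two cases, while REQ-3 maps to none until someone notices.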
Some features are simply nice to have, for example, quick graphing tools that give a dashboard view of defects discovered or test cases executed. While these may matter to management at the team, organizational, and project levels, care should be taken that these "cool" features do not carry more weight in the tool acquisition decision than the features technical staff will use day to day.
If that happens, I suspect the staff will use the tool just enough to show they are using it, following the "letter of the law" for process, but the information in the dashboard views may not be as realistic as one might want to believe. People's behavior changes based on the measures placed around it (testing metrics, for example), and how they report their activity may change as well.
This was first published in February 2012