Software QA testers are often surprised when their regular quality status reports fail to resonate with business users. Frequently this is a case of the tester not understanding how to shape communications to the audience.
In my experience, software QA testing status reports typically present information that is mainly of interest to those managing and conducting the QA testing. Because a QA tester's work largely involves executing tests and reporting defects, QA testing managers keep measures of these activities and results so they can assign staff accordingly to get the work done on time.
Activity measures include numbers of tests planned, prepared, run, passed, failed and blocked. However, managers and users outside of software QA testing are unlikely to have a context for understanding such measures. They have no way of telling from counts alone what the significance of the testing is, and whether it's too much, too little or just right.
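To make the activity measures concrete, here is a minimal sketch of how such counts are typically tallied from a test log. The record structure and field names are made up for illustration; real test-management tools export richer data, but the counts reported are essentially this.

```python
from collections import Counter

# Hypothetical test-case records; "status" values mirror the measures
# named above: planned (not yet run), passed, failed, blocked.
tests = [
    {"id": "TC-01", "status": "passed"},
    {"id": "TC-02", "status": "failed"},
    {"id": "TC-03", "status": "blocked"},
    {"id": "TC-04", "status": "passed"},
    {"id": "TC-05", "status": "planned"},
]

counts = Counter(t["status"] for t in tests)

print(f"{len(tests)} test cases in total")
for status in ("passed", "failed", "blocked", "planned"):
    print(f"  {status}: {counts.get(status, 0)}")
```

Note that nothing in this output tells an outside reader whether five tests is adequate coverage, which is exactly the problem described above.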
Results measures usually list the number of defects detected, ordinarily by severity level. Again, users and managers may have only the vaguest idea of how to interpret these measures. The presence of high-severity defects should certainly raise concerns, even if it's unclear exactly what those concerns should be; yet the meaning of their absence may be equally unclear, since few defects could signal either solid software or shallow testing.
Instead of reporting the same old generic measures, consider tailoring separate reports for users and managers. Users want to know whether they can confidently use a particular piece of software to help them do their work. Therefore, relate the tests run and defects found to the work they need to do. Explain how broadly and deeply each requirement or business function has been tested and the significance of detected defects. Be sure to distinguish defects that have been corrected and retested successfully from those that have not.
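A user-oriented report of this kind can be sketched by grouping the same raw test and defect data by requirement rather than reporting bare totals. The requirement IDs, field names, and severity labels below are illustrative assumptions, not a standard schema:

```python
# Hypothetical records linking test runs and defects to business
# requirements; in practice these would come from a test-management
# and defect-tracking tool.
tests = [
    {"req": "REQ-1", "status": "passed"},
    {"req": "REQ-1", "status": "failed"},
    {"req": "REQ-2", "status": "passed"},
]
defects = [
    {"req": "REQ-1", "severity": "high", "fixed_and_retested": False},
    {"req": "REQ-2", "severity": "low", "fixed_and_retested": True},
]

# Build one entry per requirement: tests run, tests passed, and
# defects still outstanding (not yet corrected and retested).
report = {}
for t in tests:
    entry = report.setdefault(t["req"], {"run": 0, "passed": 0, "open_defects": []})
    entry["run"] += 1
    if t["status"] == "passed":
        entry["passed"] += 1
for d in defects:
    if not d["fixed_and_retested"]:
        entry = report.setdefault(d["req"], {"run": 0, "passed": 0, "open_defects": []})
        entry["open_defects"].append(d["severity"])

for req, e in sorted(report.items()):
    open_list = ", ".join(e["open_defects"]) or "none"
    print(f"{req}: {e['passed']}/{e['run']} tests passed; open defects: {open_list}")
```

The same grouping logic, keyed on component instead of requirement, would produce the manager-oriented view described below.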
Managers tend to be concerned with delivery of the various components being developed. Thus, report to managers the same type of coverage and confidence information, but with respect to the components rather than to the requirements or business functions the components address.
Although shaping reports to your audience is necessary, it may not be sufficient for communicating quality status to users. There's another aspect of quality that developers and software QA testers like us tend not to be aware of: we think of software quality mainly as the absence of defects, whereas business users may have a much different perspective.
Think of two well-known red-bearded restaurateurs: the Burger King and Chef Mario Batali. Most people would say that Batali's restaurants serve far higher quality meals than Burger King. Yet Burger King has a more fine-tuned and consistent production process that probably permits far fewer defects -- such as burned or undercooked food -- than Batali's fine restaurants. Although the presence of defects certainly would affect Batali's quality, our assessment of Batali's quality vs. Burger King's is, for the most part, based on positive rather than negative factors. I contend that software, too, has positive quality factors -- real business requirements that developers and QA seldom take into account, but that can be very important to users.