
Tips for making software QA testing reports acceptable

Why don't users seem to appreciate typical software QA testing status reports?

Software QA testers often are surprised when their regular quality status reports fail to resonate with business users. Frequently this is an example of the tester not understanding how to shape his communications to his audience.

In my experience, software QA testing status reports typically present information that is mainly of interest to those managing and conducting the QA testing. Because a QA tester's work largely involves executing tests and reporting defects, QA testing managers keep measures of these activities and results so they can assign staff accordingly to get the work done on time.

Activity measures include numbers of tests planned, prepared, run, passed, failed and blocked. However, managers and users outside of software QA testing are unlikely to have a context for understanding such measures. They have no way of telling from counts alone what the significance of the testing is, and whether it's too much, too little or just right.

Results measures usually list the number of defects detected, ordinarily by severity level. Again, users and managers may have only the vaguest idea of how to interpret these measures. Certainly the presence of high severity defects should raise concerns, although it may be unclear what those concerns should be; but on the other hand, the meaning of their absence may be equally unclear.
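To make the contrast concrete, here is a minimal sketch in Python of the generic activity and results measures described above. The record formats are invented for illustration; notice that nothing in these counts tells an outsider what was covered or why it matters.

    from collections import Counter

    # Invented, minimal test and defect records -- illustration only.
    tests = [
        {"id": "T1", "status": "passed"},
        {"id": "T2", "status": "failed"},
        {"id": "T3", "status": "blocked"},
        {"id": "T4", "status": "passed"},
    ]
    defects = [
        {"id": "D1", "severity": "high"},
        {"id": "D2", "severity": "low"},
    ]

    # Activity measures: tests counted by execution status.
    activity = Counter(t["status"] for t in tests)
    # Results measures: defects counted by severity level.
    results = Counter(d["severity"] for d in defects)

    print(dict(activity))  # {'passed': 2, 'failed': 1, 'blocked': 1}
    print(dict(results))   # {'high': 1, 'low': 1}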

Instead of reporting the same old generic measures, consider tailoring separate reports for users and managers. Users want to know whether they can confidently use a particular piece of software to help them do their work. Therefore, relate the tests run and defects found to the work they need to do. Explain how broadly and deeply each requirement or business function has been tested and the significance of detected defects. Be sure to distinguish defects that have and have not been corrected and retested successfully.

Managers tend to be concerned with delivery of the various components being developed. Thus, report to managers the same type of coverage and confidence information, but with respect to the components rather than to the requirements or business functions the components address.
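As a rough illustration of this tailoring, the sketch below rolls the same underlying records up by business requirement for the user-facing report and by component for the manager-facing one, separating defects that have and have not been corrected and retested. All field names here are my own invention; the point is the grouping, not any particular format.

    from collections import defaultdict

    def rollup(tests, defects, key):
        """Summarize coverage and open defects grouped by `key`:
        "requirement" for a user-facing report, "component" for managers."""
        report = defaultdict(lambda: {"run": 0, "passed": 0, "open_defects": []})
        for t in tests:
            entry = report[t[key]]
            entry["run"] += 1
            entry["passed"] += (t["status"] == "passed")
        for d in defects:
            # Only defects not yet corrected and successfully retested
            # count against the requirement or component.
            if not d["fixed_and_retested"]:
                report[d[key]]["open_defects"].append((d["id"], d["severity"]))
        return dict(report)

    tests = [
        {"requirement": "Process refunds", "component": "billing", "status": "passed"},
        {"requirement": "Process refunds", "component": "billing", "status": "failed"},
        {"requirement": "Search orders", "component": "search", "status": "passed"},
    ]
    defects = [
        {"id": "D1", "requirement": "Process refunds", "component": "billing",
         "severity": "high", "fixed_and_retested": False},
    ]

    print(rollup(tests, defects, "requirement"))  # user-facing view
    print(rollup(tests, defects, "component"))    # manager-facing view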

Although shaping reports to your audience is necessary, it may not be sufficient for communicating quality status to users. Quality has another aspect that developers and software QA testers like ourselves tend not to be aware of, because we think of software quality mainly as a lack of defects, whereas business users may have a much different perspective.

Think of two well-known red-bearded restaurateurs: the Burger King and Chef Mario Batali. Most people would say that Batali's restaurants serve far higher quality meals than Burger King. Yet, Burger King has a more fine-tuned and consistent production process that probably permits far fewer defects -- such as burned or undercooked food -- than Batali's fine restaurants. Although the presence of defects certainly would affect Batali's quality, our assessment of Batali's quality vs. Burger King's is, for the most part, based on positive rather than negative factors. I contend that software also has positive quality factors representing real business requirements seldom taken into account by developers and QA but which can be very important to users.


This was last published in May 2015


Join the conversation

12 comments

How effective have your QA testing reports been?
I'm in software QA, but I'm not subjected to the torture of writing testing reports. Is it still common to have to do that? As a team that tries to be agile, we put "working products" over "comprehensive documentation". Does anyone actually enjoy creating and maintaining documentation? I sure don't.

I create testing charters, but they are more for my own purposes than anything. Some amount of documentation helps me remember things. Typically no one else reads these charters, though. I'm lucky to work in an environment where I'm trusted to do a good job, and not required to spend my time writing reports.
For a project that I work on, where many business units such as Marketing, Sales, and Customer Service are impacted, it makes a lot of sense to tweak my status report. I am going to pilot this idea in the next release of my project and see the feedback.

The effectiveness of QA testing reports absolutely depends on what you write in them. My experience with typical template-based reports is not that good, as the numbers usually don't tell the people who matter much about anything (other than a fake feeling of security). The business problem you are solving with your testing (and its status) is what matters. If you can explain that in your test report, I'm sure people will start seeing value in it. You may want to read "Test Framing" by Michael Bolton to get an idea of the points I'm trying to make here.
Lalit is right; a lot of readability gets sacrificed for one-size-fits-all reports tailored to templates. A lot of the metrics are around things that don't really help managers, and they don't help communicate testing's story. Testers need to learn how to tell their stories, and find ways to engage beyond the meaningless metrics that the Factory school wants to throw at you.
Let's be careful not to fall into the trap of assuming metrics, templates, reports, documentation, etc. are bad because some examples of them may be ill-conceived or poorly executed. Templates can be helpful for reducing oversights and aiding communication with standard content/formats; but templates are not a substitute for understanding. @abuell, you may be thinking of testing reports that are different from the type of status reports the article discussed, which are intended to inform various audiences other than just the QA/testers.
That may be true, Robin, but in my experience, these templates become rote routine, copy-and-paste buffets that end up being written for the sake of being written, and often do not serve any tangible purpose.
@Veretax, I agree, alas, that ill-conceived or poorly executed templates are exceedingly common, but that does not mean all templates must be so.
We don't have written QA reports. Our reports tend to take the form of a discussion with the developers and program manager to determine the status of the program holistically. We examine specific defects together, and talk about what features have been tested and the defects found within those features, but rarely are we asked to discuss the specifics of how a feature was tested. Since they are discussions, they are exactly as effective as required. Development and PMs trust our judgment in testing and our input in these discussions.
Somewhat good tips. Essentially, the article suggests telling the story and framing it for the context. What's important to add is the relationship factor: earning trust.
The author is right. If all you report on is the 'test plan', 'performance of plan' and 'defects found or verified', you are probably wasting your time.

How about framing your story as follows:

1. What work was just done but hasn't completed testing yet (your testing backlog)
2. What work is past the first quality gate with the dev team and ready for tester evaluation (pending in-sprint work)
3. What work is in progress or planned, but not ready to test

Then consider the following factors (a rough sketch of this framing appears after the list):

What features have had serious defects found that need to be addressed before release? (Minor defects may be important too, but are often easy to cull if time constraints require it.)

What testing is blocked, and why?

What parts of the code have passed testing and are ready for production deployment? (Note that parts that need to be deployed at the same time should all be in the same state.)

That's it. Anything more might include para-functional reports on performance or security concerns, or usability and accessibility.
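Here is one way that framing could look in code; the statuses and field names are invented purely to make the buckets concrete:

    # Invented statuses/fields, purely to make the buckets concrete.
    work_items = [
        {"feature": "checkout", "status": "awaiting_test", "blocked": None},
        {"feature": "coupons", "status": "passed_testing", "blocked": None},
        {"feature": "wishlist", "status": "in_dev", "blocked": None},
        {"feature": "payments", "status": "awaiting_test",
         "blocked": "test environment down"},
    ]

    buckets = {
        "Testing backlog (done, not fully tested)": "awaiting_test",
        "In progress or planned, not ready to test": "in_dev",
        "Passed testing, ready for production deploy": "passed_testing",
    }
    for label, status in buckets.items():
        features = [w["feature"] for w in work_items if w["status"] == status]
        print(f"{label}: {features}")

    # Blocked testing, and why.
    print([(w["feature"], w["blocked"]) for w in work_items if w["blocked"]])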



@Veretax, thank you for your elaboration; good points. I'll just caution that attention to work products may be more meaningful to the project manager, which may be an improvement, but still may not be all that meaningful to higher-ups and business folks.
