Does test reporting in software testing have to be done with a written document at the end of the project, or how else could you report the test results? What's your opinion on this?
When I think of traditional test reporting in the software testing industry, I think of a document produced by a group that is somehow at arm's length from both application development and management. It might be an external company, which is common in government projects, or at least a team with a manager other than the development manager, one that considers itself the "test team," holds "test team meetings" and interacts with development through a formal process.
The reason I say that is because if the groups are sitting and working together, they can simply talk about how things are going and show each other bugs. The "test reporting" is done via verbal conversation, which is usually my preferred way of working.
My preference is for an engaged customer, constant collaboration, frequent deploys to production, a shared understanding of what to build, and high-quality code from the first build deployed to independent test. Sometimes, that just isn't possible.
When communication happens at arm's length, when the project is developed "integrate last," or when a test report makes sense for some other reason, well, then, a test report makes sense. In that case, I'd like to do it well.
We want to provide good information. The information should be valuable and actionable. It should be easily understood as we intend it. There should not be so much information that the reader is intimidated. And we should not put the reader to sleep. My preferred way to do this is a combination of prose English, diagrams, screen captures and, yes, relevant numbers. Not "metrics," but relevant pieces of data.
When I structure the document, I put the most powerful information first, so that an executive who only has five minutes to read it can get enough value to make a reasonable decision.
Here's one potential software test results template:
Executive summary
In one paragraph or less, tell the executive what you think of the software in a way that will enable him to make a decision. This can include progress made on functionality as well as user experience, security status or anything else important to the project.
Showstopper bugs
Elaborate on the "showstopper" bugs and serious issues here in plain text. For each bug, give a quick summary first, then a little more detail. You might consider a numbered list, with the worst bugs first. Carefully balance length with information; I typically list the top five to eight bugs. The executive can look at this list and decide whether to move forward or go back and fix these issues.
Medium- and low-priority bugs
Characterize the medium- and low-priority bugs. If the bugs are roughly equal in impact, you might simply put a number to them, for example: "We know of three other moderate bugs and five other minor bugs."
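One way to get those numbers is straight from a bug-tracker export. Here is a minimal sketch that tallies open bugs by severity; the CSV layout and the "severity" column name are invented for illustration, so adjust the field names to match whatever your tracker actually exports.

```python
# Minimal sketch: summarize open bugs by severity for the report.
# The CSV layout below is a made-up stand-in for a real bug-tracker
# export, e.g. open("bugs.csv"); field names are assumptions.
import csv
from collections import Counter
from io import StringIO

sample_export = StringIO(
    "id,summary,severity\n"
    "101,Crash on login,showstopper\n"
    "102,Slow search,moderate\n"
    "103,Typo on About page,minor\n"
    "104,Broken sort order,moderate\n"
    "105,Misaligned icon,minor\n"
)

# Count how many bugs fall into each severity bucket.
counts = Counter(row["severity"] for row in csv.DictReader(sample_export))

print(f"We know of {counts['moderate']} other moderate bugs "
      f"and {counts['minor']} other minor bugs.")
```

The point is not the script itself but that the numbers in the report trace back to the tracker, so the executive can drill in if she wants to.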
Test coverage
Find some way to articulate how much testing time each part of the application received. James Bach's Low-Tech Testing Dashboard is one place to look. The dashboard can be surprisingly easy to implement in a Google spreadsheet, and screen captures of it can be added to the report.
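To make the idea concrete, here is a minimal sketch of such a dashboard rendered as plain text, in the spirit of Bach's one row per product area with effort, coverage and a rough quality call. Every area name and rating below is invented for illustration; in practice these come from the test team's own judgment.

```python
# Minimal sketch of a low-tech testing dashboard, after James Bach's
# idea: one row per product area, showing testing effort, rough
# coverage (0-3), and a quality assessment. All data is invented.
areas = [
    # (area, effort, coverage 0-3, quality)
    ("Login / accounts", "high",   3, "OK"),
    ("Search",           "medium", 2, "concerns"),
    ("Reporting",        "low",    1, "unknown"),
    ("Admin screens",    "none",   0, "unknown"),
]

header = f"{'Area':<18}{'Effort':<10}{'Coverage':<10}{'Quality'}"
print(header)
print("-" * len(header))
for area, effort, coverage, quality in areas:
    print(f"{area:<18}{effort:<10}{coverage:<10}{quality}")
```

The same table drops naturally into a spreadsheet, where color-coding the quality column makes the "unknown" areas jump out at a glance.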
What we could do with more time
Let the executive know what you would look into next if you had more time to test. Give them your thoughts on what value that might offer. This will help the executive decide if she should give you more time and resources to work with.
Our test strategy
Let the executive know what you did with the time you had. This could include both coverage and methods. Let her know if you used scripted or exploratory testing and in what proportion. Let her know what tools you used and why. If the document is getting long and this has low value, I might leave it off.
Why we picked our test strategy
Let the executive know why you think that use of time was appropriate. If the document is getting long and this has low value, I might leave this section out.
Where to go for more
Provide links to the bug tracker, session notes and other materials in case the reader has time and interest.
Do you have a question for Matt Heusser or any of our other experts? Let us know, and we'll post the answer in a future response. Email Editor@SearchSoftwareQuality.com or leave a question in the comments.