
Software testing is improved by good bug reporting

Bug reports can be an excellent means of assessing how well you've tested. Testing expert Scott Barber explains how good bug reports indicate good testing.

Scott Barber, software tester

I recently completed (successfully, I might add) the second of the Association for Software Testing's all-online, free-to-members Black Box Software Testing courses. Each course is four weeks long. I've been involved with this program since years before it became a program, and I am an instructor for the first course in the series, called Foundations. For this course, called Bug Advocacy, I was a student.

Bug Advocacy focuses on the skills and concepts needed to compose high-quality, easily understood, appropriately compelling and well-organized defect reports. I know, it sounds pretty boring to me too, but it was anything but boring. These classes are designed so that you watch recorded lectures (in this class the lecturer is Cem Kaner), answer quiz questions (to make sure you watched the lectures), participate in class discussions, do both individual and group projects (in this class the project centered on evaluating and enhancing unconfirmed OpenOffice bug reports), peer review one another's assignments, and take a far-from-trivial closed-book essay exam. All in all, I spent about 40 hours participating in the class over the four-week period.


There was one idea in particular from the class that I found absolutely brilliant and wanted to share with you. What follows is a lightly edited version of my answer to one of the exam questions, which asked us to describe a six-factor approach to bug reporting that Cem remembers with the mnemonic "RIMGEA." If you are a regular reader of mine, you know that I have a fondness for mnemonic devices, but that's not what I thought was so great about the approach. What I think is brilliant is that this approach isn't just about writing a good bug report; it's also about making sure you do the right testing after you find a bug, so that you are able to write a good bug report. Take a look -- you'll see what I mean.

  • Replicate -- Ideally the consumer of the bug report should be able to re-create the bug from the information contained in the report. Almost as good is the tester being able to re-create the bug on command. Less good, but acceptable, is admitting that you can't re-create the bug and describing how you tried to re-create it. Bad is not trying to re-create the bug, not admitting when you can't, or failing to state in the report whether you could replicate it at all.

  • Isolate -- Bug reports are more powerful and are generally taken more seriously when the steps to re-create the bug are simple. Isolating the actual bug -- separating it from any incidental or unrelated steps you took the first time you observed it -- frequently enables you to document the most direct (or at least a very direct) path to replicate the bug. It also minimizes opportunities for confusion in the report.

  • Maximize -- We can't test everything, so we sample. When our sampling turns up a bug, odds are that we didn't randomly stumble upon the most severe incarnation of that bug. Try to re-create the bug while varying your test along various axes (data, configuration, navigation path, among others). Focus your report on the "worst" version of the failure you manage to create.

  • Generalize -- Generalizing is the answer to "No user we like would ever do that on purpose." Demonstrating that a bug will be encountered by normal users performing normal activities will get more attention than demonstrating that an obscure bug may be encountered following an improbable path to accomplish a minimally important system function.

  • Externalize -- Nobody cares if a bug annoys a tester. People care if a bug will annoy someone who pays to use the software, or writes reviews about the software, or who is likely to pay you for the work you did to develop the software in the first place. Focusing your bug reporting on the impact to people who matter makes your bug report more potent.

  • And Bland-ize -- (In the video lecture this factor is "And say it clearly and dispassionately," but that was harder for me to remember.) Bug reports aren't personal. Neither are bugs. We are reporters, not accusers. Pointing fingers does nothing but exercise our fingers. I'm generally not a fan of emotional suppression, but during bug reporting, it's critical.
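Nothing in the course turns these factors into code, but purely as an illustration, the six factors can be sketched as a pre-submission checklist. The `BugReport` class and its field names below are hypothetical, invented for this sketch:

```python
from dataclasses import dataclass

@dataclass
class BugReport:
    """Hypothetical bug report with one flag per RIMGEA factor."""
    title: str
    replicated: bool = False        # Replicate: can the bug be re-created on command?
    steps_isolated: bool = False    # Isolate: are the repro steps minimal and direct?
    worst_case_found: bool = False  # Maximize: is this the most severe variant found?
    generalized: bool = False       # Generalize: will normal users hit it?
    user_impact: str = ""           # Externalize: impact on people who matter
    dispassionate: bool = False     # Bland-ize: factual wording, no finger-pointing

    def missing_factors(self) -> list:
        """Return the RIMGEA factors this report has not yet addressed."""
        checks = {
            "Replicate": self.replicated,
            "Isolate": self.steps_isolated,
            "Maximize": self.worst_case_found,
            "Generalize": self.generalized,
            "Externalize": bool(self.user_impact),
            "Bland-ize": self.dispassionate,
        }
        return [name for name, done in checks.items() if not done]

# A report that has only been replicated and isolated still owes
# testing work on the remaining four factors before it is submitted.
report = BugReport(title="Crash on save", replicated=True, steps_isolated=True)
print(report.missing_factors())
```

The point of the sketch is the same as the point of the mnemonic: each unchecked factor represents testing you haven't done yet, not just a field you haven't filled in.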


You simply aren't going to be able to apply these factors to your bug reporting without having done some good testing first. For whatever reason, I'd never thought of using bug reporting as a method of self-assessing the quality of my testing, but after taking this course I'm pretty confident that I'll be doing exactly that from now on.

(Footnote: For more information about the Association for Software Testing (AST) and the training courses it offers, visit the AST website.)

About the author: Scott Barber is the chief technologist of PerfTestPlus, vice president of operations and executive director of the Association for Software Testing and co-founder of the Workshop on Performance and Reliability.
