What's the exact difference between an error, a defect and a bug?
In their Black Box Software Testing course, Kaner and Bach define the following in their session on Bug Advocacy:
"An error (or fault) is a design flaw or a deviation from a desired or intended state. An error won't yield a failure without the conditions that trigger it. Example, if the program yields 2+2=5 on the 10th time you use it, you won't see the error before or after the 10th use. The failure is the program's actual incorrect or missing behavior under the error-triggering conditions. A symptom might be a characteristic of a failure that helps you recognize that the program has failed. Defect might refer to the failure or to the underlying error." (Text formatting is the authors', not mine.)
That seems about as useful a definition as I've seen for an error and a defect. It's what came to mind when I read the question. However, I find I don't use the terms error and defect much when I talk about testing. When I do use the term defect, I'm normally referring to a record in a defect-tracking tool like ClearQuest or Bugzilla. I normally just call everything a bug.
I like James Bach's and Michael Bolton's definition of a bug:
"A bug is something that bugs somebody who matters."
I find I use this definition for many reasons:
- It's easy to remember: I don't have to go look it up in a slide I saw six months ago (as I did with the definitions above). I can remember it and explain it to someone off the top of my head, with little to no effort and with no appeal to authority. I don't even need to attribute it to James and Michael if it's a hallway conversation with a programmer or manager.
- It's consistent with my experience: I find that this definition has applied to every project I've worked on. I've logged deviations from requirements that were closed as functions as designed. Those weren't bugs. I've logged inconsistencies in implementation that were closed as functions as designed. Those weren't bugs. I've even logged a security issue that allowed me to log into the production environment of a very large company without a user id or password. But that wasn't a bug either. None of those bugged the people who mattered. They only bugged me.
- It's simple to explain: When I tell someone a bug is something that bugs somebody who matters, about the only follow-up question I get is "Well, who matters?" Everyone seems to intuitively understand that this definition has a ring of truth. I find that it keeps me out of debates over word definitions and spares me from appealing to authorities that no one agrees on.
Of course, this question might also be one of the most universally asked questions on software testing Web sites. A Google search returns tons of answers, citing everything from Wikipedia to British standard BS 7925-1. My advice is to talk to the people you work with to understand how they use the terms. If they use them differently than I do (or than BS 7925-1 does), theirs is probably the opinion that matters.