Don't mistake user acceptance testing for acceptance testing

Despite the many references that concur on the definition of acceptance testing, people still get confused. Scott Barber clarifies things in this month's Peak Performance column.

If you think software testing in general is badly misunderstood, acceptance testing (a subset of software testing) is even more wildly misunderstood. This misunderstanding is most prevalent with commercially driven software as opposed to open source software and software being developed for academic or research and development reasons.

This misunderstanding baffles me because acceptance testing is one of the most consistently defined testing concepts I've encountered over my career both inside and outside of the software field.

First, let's look at what Wikipedia has to say about acceptance testing:

"In engineering and its various subdisciplines, acceptance testing is black-box testing performed on a system (e.g. software, lots of manufactured mechanical parts, or batches of chemical products) prior to its delivery… "In most environments, acceptance testing by the system provider is distinguished from acceptance testing by the customer (the user or client) prior to accepting transfer of ownership… "A principal purpose of acceptance testing is that, once completed successfully, and provided certain additional (contractually agreed) acceptance criteria are met, the sponsors will then sign off on the system as satisfying the contract (previously agreed between sponsor and manufacturer), and deliver final payment."

This is consistent with the definition Cem Kaner uses throughout his books, courses, articles and talks, which collectively are some of the most highly referenced software testing material in the industry. The following definition is from his Black Box Software Testing course:

"Acceptance testing is done to check whether the customer should pay for the product. Usually acceptance testing is done as black box testing."

Another extremely well-referenced source of software testing terms is the International Software Testing Qualifications Board (ISTQB) Standard glossary of terms. Below is the definition from Version 1.3 (dated May 31, 2007):

"Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system."

I've chosen those three references because I've found that if Wikipedia, Cem Kaner and the ISTQB are on the same page related to a term or the definition of a concept, then the testing community at large will tend to use those terms in a manner that is consistent with these resources. Acceptance testing, however, is an exception.

There are several key points on which these definitions/descriptions agree:

  1. In each case, acceptance testing is done to determine whether the application or product is acceptable to someone or some organization and/or if the person or organization should pay for the application or product AS IS.
  2. "Finding defects" is not so much as mentioned in any of those definitions/descriptions, but each implies that defects jeopardize whether the application or product becomes "accepted."
  3. Pre-determined, explicitly stated, mutually agreed-upon criteria (between the creator of the application or product and the person or group that is paying for or otherwise commissioned the software) are the basis for non-acceptance. Wikipedia refers to this agreement as a contract and identifies non-compliance with the terms of the contract as a reason for non-payment. Kaner references the contract through the question of whether the customer should pay for the product. The ISTQB does not directly refer to either contract or payment but certainly implies the existence of a contract (an agreement between two or more parties for the doing or not doing of something specified). A sketch of what such agreed criteria might look like as automated checks follows this list.
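
To make that third point concrete, here is a minimal sketch (mine, not from any of the sources quoted above) of how pre-determined, mutually agreed-upon acceptance criteria might be captured as automated black-box checks. The endpoint, response fields and two-second threshold are hypothetical assumptions chosen purely for illustration; in practice every check would trace back to a criterion the sponsor and the development organization agreed to in advance.

```python
# A minimal, purely illustrative sketch: pre-agreed acceptance criteria expressed
# as automated black-box checks. The endpoint, fields, and thresholds below are
# hypothetical assumptions, not criteria from any real contract.
import json
import time
import urllib.request

BASE_URL = "http://localhost:8080"  # hypothetical system under test


def check_order_lookup_meets_agreed_criteria():
    """Assumed contractual criterion: order lookup answers with HTTP 200
    in under 2 seconds and includes the order_id field."""
    start = time.time()
    with urllib.request.urlopen(f"{BASE_URL}/orders/12345") as resp:
        status = resp.status
        body = json.load(resp)
    elapsed = time.time() - start

    assert status == 200, f"expected HTTP 200, got {status}"
    assert elapsed < 2.0, f"lookup took {elapsed:.2f}s; agreed criterion is < 2s"
    assert "order_id" in body, "response must include the order_id field"


if __name__ == "__main__":
    check_order_lookup_meets_agreed_criteria()
    print("All agreed acceptance criteria passed -- a basis for sign-off and payment.")
```

The tooling is beside the point: what matters is that each check encodes a criterion both parties agreed to before testing began, so a passing run gives the sponsor an explicit, contract-based reason to accept the system and release payment.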

With that kind of consistency, you'd think there wouldn't be any confusion.

The only explanation I can come up with is that many people involved with software development have experience only with "user acceptance testing," and as a result they develop the mistaken impression that "user acceptance testing" is synonymous with "acceptance testing." This impression is reinforced by the fact that over the past several months, SearchSoftwareQuality.com experts have been asked a shocking (to me) number of questions related to acceptance testing.

If my experiences with software user acceptance testing are common, I can understand where the confusion comes from. All of the software user acceptance testing I have firsthand experience with involves representative users being given a script to follow and then being asked, by the person who wrote (or at least tested) the script, whether they were able to successfully complete the task they were assigned.

Since the software has invariably been rather strenuously tested against that script beforehand, the virtually inevitable feedback that all of the "users" found the software "acceptable" is given to the person or group paying for the software. That person or group then accepts the software -- and pays for it.

There are several flaws with that practice, at least as it relates to the definitions above.

  1. The "users" doing the acceptance testing are not the people who are paying for the development of the software.
  2. The person or group paying for the development of the software is not present during the user acceptance testing.
  3. The "users" are provided with all of the information, support, instructions, guidance and assistance they need to ultimately provide the desired "yes" response. Frequently this assistance is provided by a senior tester who has been charged with the task of coordinating and conducting user acceptance testing and believes he is doing the right thing by going out of his way to provide a positive experience for the "users."
  4. By the time user acceptance testing is conducted, the developers of the software are ready to be done with the project and the person or group paying for the development of the software is anxious to ship the software.

If that is the only experience one has with acceptance testing, I can see why one may not realize that the goal of user acceptance testing is to answer whether the end users will be satisfied enough with the software -- which obviously ships without the scripts and the well-meaning senior tester to help out -- to want to use and/or purchase it.

I have no idea how many dissatisfied end users, unhappy commissioners of software and unacceptable software products this flawed process is responsible for, but I suspect that it is no small number.

In an attempt to avoid dissatisfied end users, unhappy commissioners of software and unacceptable software products, whenever someone asks me to be a part of any kind of acceptance testing -- whether qualified by additional terms like "user," "build," "system," "automated," "agile," "security," "continuous," "package," "customer," "business process," "market," or something else -- I pause to ask the following:

"For what purpose is who supposed to be deciding whether or not to accept what, on behalf of whom?"

My question often confuses people at first, but so far it has always led to some kind of acceptance testing that enables decisions about the software and its related contracts. And that is what acceptance testing is all about.

About the author: Scott Barber is the chief technologist of PerfTestPlus, vice president of operations and executive director of the Association for Software Testing and co-founder of the Workshop on Performance and Reliability.
