When I work with customers, I explain that User Acceptance Testing (UAT) is the nexus of product functionality, customer data, and customer workflow. Generally, the first time these elements come together for observation is during UAT.
It's critical to understand the primary difference between functional testing, which examines the individual parts of an application, and UAT, which looks at the whole picture.
During functional testing, data is contrived. Each piece of data injected into the application is created for a specific test case. It can't be any other way, because a key element of risk-based testing is controlling, in a scientific manner, the environment in which testing is carried out. To make debugging and root cause analysis possible, the data set is kept small -- often minuscule -- in comparison to the real application dataset.
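To make the contrast concrete, here is a minimal sketch of what contrived data looks like in a functional test. The `calculate_discount` function and its tier rules are hypothetical, not from the article; the point is that every input value is fabricated to exercise exactly one rule, so a failure points straight at its cause.

```python
def calculate_discount(order_total, customer_tier):
    """Hypothetical function under test: applies a tiered discount rate."""
    rates = {"standard": 0.0, "silver": 0.05, "gold": 0.10}
    return round(order_total * (1 - rates[customer_tier]), 2)

# Contrived data: each value exists solely to prove one behavior.
def test_gold_tier_discount():
    assert calculate_discount(100.00, "gold") == 90.00

def test_standard_tier_no_discount():
    assert calculate_discount(100.00, "standard") == 100.00

test_gold_tier_discount()
test_standard_tier_no_discount()
```

Notice there are only a handful of records, each chosen by the tester -- nothing like the messy, sprawling dataset the application will actually face.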
In the UAT phase, data is no longer contrived. At the very least it's a scrubbed or de-identified version of production data, if not a carbon copy of it. The dataset is huge, and it's typically riddled with integrity failures, legacy records, and data that has been updated or upgraded across releases. It's the difference between a practice session and a real run of the application. Expect plenty of artifacts.
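A rough sketch of what "scrubbing" can mean is below. The record fields are hypothetical, and real de-identification (for example, under HIPAA rules in healthcare) requires far more rigor than this; the idea shown is simply replacing direct identifiers with stable pseudonyms while leaving the data's structure, volume, and messiness intact.

```python
import hashlib

def scrub_record(record, salt="fixed-salt"):
    """Replace direct identifiers with stable pseudonyms.

    Same input + same salt -> same pseudonym, so relationships
    between records survive scrubbing. NULLs and legacy oddities
    are deliberately left alone -- that messiness is the point of UAT.
    """
    scrubbed = dict(record)
    for field in ("name", "email", "ssn"):
        if scrubbed.get(field):
            digest = hashlib.sha256((salt + scrubbed[field]).encode()).hexdigest()
            scrubbed[field] = digest[:12]
    return scrubbed

# A production-like row, integrity warts and all (hypothetical fields).
production_row = {"id": 4821, "name": "Jane Doe", "email": "jane@example.com",
                  "ssn": None, "last_login": "2003-07-14"}
print(scrub_record(production_row))
```

Because the pseudonyms are deterministic, the scrubbed dataset still behaves like production data: duplicate customers stay duplicated, orphaned records stay orphaned, and ancient timestamps stay ancient.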
In the UAT environment, the focus turns to workflow. The question asked continually is either "Can we continue our current workflow with this new application?" or "How will we need to change our workflow to benefit from this new application?" Testing is carried out with a much deeper understanding of day-to-day business processes, which results in a better understanding of the fit and impact of the new release.
When I explain UAT to users or customers, I use a theater metaphor, likening an application's functional testing phase to the various rehearsals which precede the performance. These rehearsals are focused, generally concentrating on a given scene. Frequent interruptions happen during early rehearsals, as the concentration and focus are on individual lines, expressions and the interaction between a few actors. Rarely is rehearsal approached in an end-to-end manner. As the performance date draws near, however, practice metamorphoses. Soon the entire theatrical troupe enters the dress rehearsal. Sets, costumes, makeup and scenery -- not necessarily critical to the earlier rehearsals -- are used. Then the play is performed over and over, end to end. This is the first time a character can be traced throughout the entire performance, from the curtain's first rise through the climax and conclusion.
In application testing, we constantly tear the application down into distinct parts. Our risk-based approach requires an emphasis on what has changed, and we drill into that. We often test in isolation, setting up scenarios with contrived input and exercising the application in ways which are a microcosm of its true lifecycle. In this mode, we do our work in ways similar to early theatrical practices, picking things apart and working on individual interactions. Our user acceptance testing is much like a dress rehearsal, where everything comes together. The application is run end to end. Data flows through the entire system.
In user acceptance testing, the application experiences a real life cycle, with users validating the experience. The focus isn't on individual components, but on what results when the whole application is exercised. The goal is to ensure workflow fits with the new metaphors introduced in the release. Whereas in the functional test phase the focus is to prove adherence to user stories or requirements, the focus in UAT is to prove out the application's ability to fit into the business workflow.
One of the key lessons that comes out of the UAT phase is the picture that develops of how well the application was designed and implemented. There is a challenge here, because some user acceptance testing results in serious 'show-stopper' defects. Often, these flaws really aren't defects at all but miscommunicated or misunderstood requirements. Often, the changes which arise from these discoveries are painful and significant. This, by the way, happens in theater also, which is why Broadway-bound shows are performed out of town first -- theater's UAT phase, if you will.
In software development, Agile testing's approach of involving the customer early in the project can relieve development teams of those painful flaw discoveries late in the process. As soon as a feature is available, the goal is to have it reviewed by the customer. This not only shortens the feedback loop, but it enables the team to change course well before the final release date.
About the author: John Overbaugh, Director of Quality Assurance for Medicity, Inc., is a test leader with 13 years of experience in product and project IT, focusing on quality and defect prevention. John's background covers pretty much everything from consumer applications to high-availability enterprise server applications and highly scalable Web services. John's strengths and key experiences include test strategy, recruiting and hiring (especially for test), outsourcing/offshoring and the test process. His emphasis is effective and efficient software engineering.
John lives near Salt Lake City with his wife, Holly, and his three sons. When he isn't working, John enjoys the outdoors and is an avid photographer. In addition to providing expert answers on searchsoftwarequality.com, John blogs about testing, QA, and engineering at Thoughtsonga.com.