User acceptance testing (UAT) frequently encounters difficulties. Users often "don't have time" to participate in UAT, and when they do, UAT tends to involve considerable effort that still misses many bugs. In turn, there's often plenty of finger-pointing, which no doubt further contributes to the unpleasant taste associated with UAT and further diminishes users' willingness to participate. The way to break this vicious cycle is to look at UAT from a different, more productive perspective. First we must recognize why UAT has problems.
Problem 1: How UAT is presented
First, it's important to realize how UAT generally is presented to users. Right off the bat, the term "user" invites trouble. The folks we call users have real jobs that tie them up full-time doing whatever their organization actually is in business for. IT is there to support them, not the other way around. Being a "user" is not how they think of themselves, nor how anyone other than us in IT thinks of them.
When business folks (for the sake of simplicity, I'll continue to call them "users," even though it's not really right) get called upon to participate in UAT, it can represent a huge demand for time in addition to their already-demanding full-time jobs. Moreover, not only is UAT not part of a user's typical job description, but users can legitimately wonder whether we are asking them to do something -- testing software we created -- that ought to be IT's job.
Consider also how the request to perform UAT commonly is presented: "Try this out. Go play with it." Users understandably may be apprehensive about participating in UAT because they may not know what to do or how to do it. How would they know how to try out a new system or enhanced component they aren't familiar with? And what do people play with? Is this system a toy? That's certainly not the implication of the demeaning "Don't do anything stupid" messages that often come between the lines when users are charged with performing UAT.
Problem 2: How QA/testing perceives UAT
Mainstream QA/testing literature says relatively little about UAT, which actually may be a blessing, because so much of what is said is wrong. The testing establishment's common practices and view of UAT have several problems. Testing in general too often doesn't come into play until the very end of the development life cycle, with UAT efforts not beginning until right after completion of system testing. Such late attention to UAT often is a by-product of the widely held practice whereby UAT consists mainly of users executing a subset of the system tests. If those tests already have been executed by the development side (which includes QA/testing), users are unlikely to find appreciably more defects.
Alternatively, many organizations' UAT involves users executing tests written by QA/testing that were not included in system testing. Frankly, such tests could and probably should have been part of the system test, so it's hard to see what users' executing them adds beyond simply a larger set of system tests. In addition, QA/testing often tries to turn the users into little versions of themselves; but users generally think in terms of the work they need to accomplish, not in terms of how to be testers.
However, the most insidious but seldom-recognized weakness of traditional UAT is the frequently articulated perception that UAT should be mainly a rubber-stamp, proof-of-concept positive test of functionality. I've heard this expressed as "users need only run one test for each functional requirement" or "users need to run just the use cases on which the development was based." The implication is that users need not worry about negative testing, since presumably QA/testing already has covered it. How's that working in your organization?
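To make the positive/negative distinction concrete, here is a minimal Python sketch. The discount rule, function name, and limits are hypothetical illustrations, not taken from any particular system; the point is only that the happy-path check a rubber-stamp UAT runs is the one test least likely to expose trouble.

```python
# Hypothetical business rule (illustrative only): order discounts
# must fall between 0% and 25%.

def apply_discount(price: float, discount_pct: float) -> float:
    """Return the discounted price, rejecting out-of-range discounts."""
    if not 0 <= discount_pct <= 25:
        raise ValueError(f"discount {discount_pct}% outside allowed 0-25% range")
    return round(price * (1 - discount_pct / 100), 2)

# Positive test: the happy path the functional requirement describes.
# This is typically all a rubber-stamp UAT ever exercises.
assert apply_discount(100.0, 10) == 90.0

# Negative tests: the inputs a proof-of-concept UAT never tries,
# yet the ones most likely to bite the business in production.
for bad_pct in (-5, 30, 100):
    try:
        apply_discount(100.0, bad_pct)
    except ValueError:
        pass  # correctly rejected
    else:
        raise AssertionError(f"{bad_pct}% should have been rejected")
```

If QA/testing has not in fact covered the negative cases, a positive-only UAT simply confirms the gap rather than closing it.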
A related issue arises when the organization does something else and calls it "acceptance testing." For instance, in Extreme Programming (XP), acceptance testing is more like typical integration testing: possibly framed by the resident user, but focused, from more of a programming view, on the integrated functioning of the pieces being built. True UAT indeed can be done increment by increment, but many incremental development projects don't think to include UAT as part of each increment, if at all.
Possibly in the name of "concurrency," or more likely out of mindless rush, some organizations even may think they can do UAT and system testing simultaneously. True UAT should be a final confirmation that the version of code that goes into production is acceptable. Concurrent UAT instead bogs down the users in catching many of the same bugs that system testing probably is detecting anyway, while guaranteeing that the users actually are executing a code version one or two debug cycles removed from what presumably will go into production.
Tips for successful UAT
Overcoming UAT's difficulties involves both procedural and attitudinal changes. As can be seen from the Proactive Testing Life Cycle in Figure 1, UAT and development/technical testing should represent two independent paths. Although often not recognized, typical unit, integration, and system tests on the development/technical path demonstrate that the developed system conforms to design. (Don't be misled when what is actually high-level design gets called "requirements," which is almost always the case with use cases: they describe not the user's requirements but the usage requirements of an expected system design.)
In contrast, UAT should demonstrate that what the development process thought should be created is in fact what the business needs. Simply having users execute tests written from the development perspective (which includes QA/testing) won't assure that the development perspective is right. Although not a mere proof-of-concept rubber stamp, UAT also should not be the place the organization relies upon to catch most defects. Rather, UAT is a form of self-defense: like the bottom of a funnel, it double-checks already-checked code from a different perspective to catch remaining defects that development/technical testing missed.
Proactive UAT includes two types of tests. First, UAT should include requirements-based tests that demonstrate the delivered system satisfies the business requirements, which can be discovered much more fully than many organizations realize. Difficulty creating such tests reveals business requirements that are not sufficiently testable, which is mainly a clarity issue that can be addressed. However, requirements-based tests, including those based on use cases created to guide development, are unlikely to reveal requirements that are wrong or overlooked.
Therefore, UAT also should include a second type of test, based upon proactive user acceptance criteria. The professional tester's role is not to define these criteria but to facilitate their definition by the business/user/customer/stakeholders: the criteria are what those stakeholders must see demonstrated before they are willing to rely on, and bet their jobs on, the developed system.
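The two types of UAT tests might be sketched as follows in Python. The payroll rule, function names, and figures are hypothetical illustrations assumed for this example; the structure, not the domain, is the point.

```python
# Hypothetical payroll example (illustrative only).

def net_pay(gross: float, tax_rate: float) -> float:
    """Compute net pay: gross minus withheld tax, rounded to cents."""
    return round(gross * (1 - tax_rate), 2)

# Type 1 -- requirements-based test: demonstrates a stated business
# requirement ("net pay equals gross less withholding").
def test_requirement_net_pay():
    assert net_pay(1000.00, 0.20) == 800.00

# Type 2 -- proactive user acceptance criterion, defined by the users,
# not the testers: "Before I bet my job on this system, show me that an
# employee with zero gross pay never produces a nonzero check."
def test_acceptance_criterion_zero_gross():
    assert net_pay(0.00, 0.20) == 0.00

test_requirement_net_pay()
test_acceptance_criterion_zero_gross()
```

Note that the second test traces to a condition the business volunteered, which a use-case-driven requirements test might never have surfaced.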
Moreover, instead of the seldom-recognized but nonetheless disempowering ways in which UAT often is presented to users, the approach here empowers users to assess whether the development is acceptable. Together, these attitudinal changes dramatically increase the likelihood that users will want to participate in UAT.
Proactive user acceptance criteria provide three procedural advantages. First, they serve as a form of prioritization, identifying the most important things the executable user acceptance tests should demonstrate. (The executable tests are defined later, by coupling the criteria with the system design to identify how to demonstrate that the requirements-based tests and proactive user acceptance criteria have been satisfied.)
Second, proactive user acceptance criteria powerfully identify wrong and overlooked business requirements that need to be included in the requirements definition.
Third, along with the business requirements and requirements-based tests, proactive user acceptance criteria serve as input to designing both the system and the development/technical tests of the system as designed. While it is not appropriate for UAT to be based upon the development/technical view, smart QA engineers and testers will assure that their development/technical tests cover what the users are looking for.
Thus, planning and designing proactive user acceptance tests early not only creates much more thorough positive and negative user acceptance tests for execution later, but also helps the development process produce better systems, so that we're not relying on UAT to catch so many errors. And the good news is that achieving such superior results takes no longer and costs no more than what we're accustomed to.
When users actively identify what UAT should address -- and then see how the tests address those concerns -- and when they truly are empowered, users become much more competent, confident, and thus committed to participating in UAT.
About the author:
Robin F. Goldsmith has been president of Go Pro Management Inc. consultancy since 1982. He works directly with and trains business and systems professionals in requirements analysis, quality and testing, software acquisition, project management and leadership, metrics, and process improvement. Robin is also the author of Discovering REAL Business Requirements for Software Project Success.