What is the standard for user acceptance testing? Are there industry standards for UAT vs. SIT testing? What are the major differences? How do you determine where to draw the line or prevent too much overlap?
There are no industry standards for user acceptance testing (UAT) or system integration testing (SIT). User acceptance testing is testing conducted by the users of the system, and it can vary greatly based on each user's experience with the application and with testing itself. Time is nearly always a constraint in testing, and UAT is no exception.
In my experience with UAT, the build or release is given to the users of the system late in the software development lifecycle (SDLC), under the theory that the users will test to confirm or accept the system. The flaw I see in this process is that since this cycle is typically a final step in an overall project, the users often have little time to test. More important, if the users do find defects in the software, they are sometimes pressured to accept it as is. In some cases, users have been waiting months for the software and, for business reasons, are anxious to receive the application. These dynamics can add to the pressure to accept the software as is, even when defects prevent or inhibit the very functionality they have waited for.
Another drawback of UAT I've experienced is that users often don't know how to test and haven't been trained to think like testers. They receive the software and are given time to test, but they often "arrive at the keyboard" with no idea what to do. Under pressure, they tend to execute happy-path tests and skip harder test cases or interesting test conditions, not for lack of ideas but because they are not testers: they are unprepared, and the time pressure can be great.
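The difference between happy-path testing and a tester's mindset can be sketched with a small, hypothetical example. The `apply_discount` function and every value below are invented for illustration; they are not from any real application under test.

```python
def apply_discount(price, percent):
    """Apply a percentage discount to a price; reject invalid input."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    if price < 0:
        raise ValueError("price must be non-negative")
    return round(price * (100 - percent) / 100, 2)

# Happy-path check: the kind of test an untrained user typically tries.
assert apply_discount(100.0, 10) == 90.0

# Tester-style checks: boundaries and invalid input, the cases UAT often skips.
assert apply_discount(100.0, 0) == 100.0     # boundary: no discount
assert apply_discount(100.0, 100) == 0.0     # boundary: full discount
try:
    apply_discount(100.0, 150)               # invalid: over 100 percent
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for percent > 100")
```

Users under time pressure usually stop at the first assertion; a tester's instinct is to probe the boundaries and the error handling as well.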
I point out these drawbacks because whenever I can help users conduct UAT, I try to understand the specific project dynamics so I can help in a way that's adaptable and logical for that project. In one case, I contacted individual users in advance of UAT, suggested ideas for their testing and offered ways they could prepare.
System integration testing is testing conducted by the testers of the application. Testers who have been testing functionality as it's delivered are usually prepared to see the application function as a whole, integrated solution. SIT tends to be more technical and better prepared: it is testing designed and executed by testers who've become familiar with the types of defects the application has been prone to throughout the SDLC. In my experience, SIT is very different from UAT because of the test ideas, experience and point of view of the testers. The gap in technical expertise between users and testers can also be significant, so the two teams are likely to find vastly different defects.
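A minimal sketch of what an SIT-style check looks like, using an invented two-component order pipeline (the `Inventory` and `Billing` classes here are hypothetical, not from any real system): rather than checking each piece in isolation, the test verifies that the components stay consistent when exercised together.

```python
class Inventory:
    """Hypothetical stock-tracking component."""
    def __init__(self, stock):
        self.stock = dict(stock)

    def reserve(self, item, qty):
        if self.stock.get(item, 0) < qty:
            raise RuntimeError(f"insufficient stock for {item}")
        self.stock[item] -= qty

class Billing:
    """Hypothetical billing component."""
    def __init__(self):
        self.charges = []

    def charge(self, amount):
        self.charges.append(amount)

def place_order(inventory, billing, item, qty, unit_price):
    """Integration point: stock must be reserved before any charge is made."""
    inventory.reserve(item, qty)      # may raise; billing must not run first
    billing.charge(qty * unit_price)

# SIT-style test: verify the components behave correctly *together*.
inv = Inventory({"widget": 5})
bill = Billing()
place_order(inv, bill, "widget", 2, 9.99)
assert inv.stock["widget"] == 3
assert bill.charges == [19.98]
```

Defects at seams like this one, where a failure in one component must not leave another in an inconsistent state, are exactly what testers who have followed the application through the SDLC are primed to look for.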
When it comes to testing, a little overlap can be reassuring and even desirable. In practice, though, I find the defects the two teams uncover rarely overlap much, if at all. Often the only element the two forms of testing have in common is that both occur late in the SDLC.
Related Q&A from Karen N. Johnson