I have now been moved into acceptance testing on a project. Before this I was doing integration testing. Because of some conflicts between us, that team is now challenging me to find the maximum number of bugs to prove myself. I really agree with your view of acceptance testing, but we have been asked to design use cases and tests. I would like to rise to this, so could you support me with your valuable tips to make it successful?
This is a sadly frequent situation. In my opinion, a good test script for user acceptance testing is similar to the following:
"I'm going to give you a brief demonstration of how this application works. Then I will provide you with a user's manual and some sample data (such as a list of products that have been previously entered into the system), and I'd like you to use the application to complete the tasks that you would use an application such as this for. As you work, please provide your feedback on this form."
The problem, however, is that this kind of feedback is not what most managers and stakeholders are looking for when they ask for user acceptance testing to be conducted. What they tend to be looking for is the answer to the question:
"Do the users of the system agree that we have met the requirements we were given?"
In an ideal world, high user satisfaction would map directly to successfully implemented requirements. Unfortunately, this is often not the case. That leaves us with the dilemma of trying to balance the needs of the managers and stakeholders against one of the core ethical principles related to software testing, as spelled out in section 2.5 of the ACM Code of Ethics, quoted below (you can see the entire code of ethics reprinted at the Association for Software Testing Web site):
"Give comprehensive and thorough evaluations of computer systems and their impacts, including analysis of possible risks. Computer professionals must strive to be perceptive, thorough, and objective when evaluating, recommending, and presenting system descriptions and alternatives. Computer professionals are in a position of special trust, and therefore have a special responsibility to provide objective, credible evaluations to employers, clients, users, and the public."
So what the question really boils down to is:
"How do I design user acceptance tests that satisfy management's need to determine whether the end users agree that the requirements have been met, while also satisfying my obligation to capture the information I need to provide a comprehensive and thorough evaluation of the users' overall satisfaction with the product?"
Luckily, I believe the answer is easier than the question. In my experience, if you simply complement highly structured, step-by-step user acceptance scripts containing specific pass/fail criteria derived from the system requirements with both the time and a mechanism for providing non-requirement-specific feedback, users will provide you with answers to both of the questions of interest.

All this involves on your part is encouraging the users, in addition to executing the user acceptance tests that you provide, to use the system as they normally would and to provide freeform feedback, in the space you provide in the script, about their satisfaction with the application as it stands today. In this way, you will collect the pass/fail information that it sounds like your managers and stakeholders are asking you for, but also the information you need to be the users' advocate for changes or enhancements to the system that have resulted from unknown, overlooked, or poorly implemented requirements.
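If it helps to picture this blended script concretely, here is a minimal sketch of it as a data structure: structured, requirement-derived steps with explicit pass/fail criteria, plus a field for the user's freeform impressions. This is purely illustrative; the names (ScriptStep, AcceptanceScript, the "REQ-4.2" numbering, and so on) are my own invention, not any standard template.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ScriptStep:
    """One structured step, traced back to a specific requirement."""
    requirement_id: str          # hypothetical numbering, e.g. "REQ-4.2"
    instruction: str             # what the user is asked to do
    expected_result: str         # the pass criterion
    passed: Optional[bool] = None  # None until the step is executed
    notes: str = ""              # step-level observations

@dataclass
class AcceptanceScript:
    """A UAT script: pass/fail steps plus space for freeform feedback."""
    title: str
    steps: List[ScriptStep] = field(default_factory=list)
    freeform_feedback: str = ""  # the user's overall impressions

    def summary(self) -> dict:
        """Roll up the two kinds of information the script collects."""
        executed = [s for s in self.steps if s.passed is not None]
        return {
            "passed": sum(1 for s in executed if s.passed),
            "failed": sum(1 for s in executed if not s.passed),
            "not_run": len(self.steps) - len(executed),
            "has_freeform_feedback": bool(self.freeform_feedback.strip()),
        }

# Usage: one requirement-derived step, plus the user's own impressions.
script = AcceptanceScript(title="Order entry UAT")
script.steps.append(ScriptStep(
    requirement_id="REQ-4.2",
    instruction="Add product 'Widget A' to a new order and save it.",
    expected_result="The order appears in the open-orders list.",
))
script.steps[0].passed = True
script.freeform_feedback = "Saving worked, but the product search felt slow."
print(script.summary())
```

The point of the sketch is simply that the pass/fail tallies answer management's question, while the freeform field captures the satisfaction data you need to advocate for the users.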