
What happens when acceptance criteria in software testing are missing?

You can't test something if you don't know what it's supposed to do. Often, testers have a very incomplete understanding of what they're testing. Here's how to fix the problem.

Acceptance criteria in software testing are so important -- what happens when they're not there, or are incomplete?

QA testers have likely experienced this at one time or another: a user story, feature or bug fix with a one- to five-word description and no details, expected results, anticipated outcome, acceptance criteria or any other useful data to understand the change, let alone create a valid test case for it.

If you've been in QA long enough, you've had your share of these. Empty or insufficient details are methodology-independent and can occur anytime such a slip in protocol is allowed -- which is basically any change that isn't subject to government or regulatory oversight. In other words, these issues occur when there's no significant business fine should the change fail. How many hundreds of defects have slipped through because of missing acceptance criteria in software testing?

I'd estimate at least 50 percent of the defects delivered to customers are a direct result of inadequate or completely absent descriptions and acceptance criteria. Many groups use Agile as the excuse for sloppy software change documentation, saying it's intended to be flexible. I'd contend that Agile's flexibility is the ability to change a design that isn't working on the fly, without having to recreate the feature from scratch because the description and acceptance criteria were missing or invalid.

As the QA tester, how do you fill in the gaps? You need valid test cases. My first suggestion is to meet with the developer assigned to code the change. Find out what they think the requirements are, as well as the desired outcome. Create your base test cases around the developer's understanding of the change because that's how it'll be coded.
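As a minimal sketch of what those base test cases might look like, here's a hypothetical example in plain Python. The "bulk discount" rule stands in for whatever behavior the developer describes in your conversation; the function and its threshold are assumptions, not anything from the article.

```python
# Sketch of base test cases built from a developer's verbal description of
# a change. The feature here -- a hypothetical "10% off orders of 100+
# units" rule -- is an invented stand-in for the developer's explanation.

def bulk_discount(quantity, unit_price):
    """Developer's stated behavior: 10% off the total at 100 or more units."""
    total = quantity * unit_price
    if quantity >= 100:
        total *= 0.9
    return round(total, 2)

def test_discount_applies_at_threshold():
    # Directly from the developer's stated rule.
    assert bulk_discount(100, 2.00) == 180.00

def test_no_discount_below_threshold():
    # One unit under the threshold: full price.
    assert bulk_discount(99, 2.00) == 198.00

def test_zero_quantity():
    # An edge case the one-line description almost certainly never mentioned.
    assert bulk_discount(0, 2.00) == 0.00
```

Notice that even this tiny set surfaces questions to bring back to the product manager: is the discount exactly at 100 units, or above 100? That's the kind of gap these base cases are meant to expose.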

Next, take a step back and imagine how the change will fit into the existing software. Make a list of what other areas may be impacted, where messaging connections may fail and where database saves occur. All of these are potential failure points. It's critical for customers that QA discover all the failure points possible in the developer's coding.
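That impact list can be turned into an explicit, repeatable checklist. The sketch below assumes hypothetical probe functions -- real versions would hit your actual messaging endpoints, database tables and adjacent features.

```python
# Sketch: the impact list as a checklist of probes, one per potential
# failure point. The probe bodies are placeholders; real implementations
# would exercise the systems the change actually touches.

def check_messaging_connection():
    # Would verify the queue or endpoint the change publishes to responds.
    return True

def check_database_save():
    # Would write a record the change touches and read it back intact.
    return True

def check_adjacent_feature():
    # Would exercise a neighboring feature that shares code with the change.
    return True

FAILURE_POINT_PROBES = [
    check_messaging_connection,
    check_database_save,
    check_adjacent_feature,
]

def audit_failure_points():
    """Run every probe and return the names of any that failed."""
    return [probe.__name__ for probe in FAILURE_POINT_PROBES if not probe()]
```

Keeping the probes in one list makes the coverage visible: when a new impact area is identified in a review, it gets a probe, and nothing silently falls off the checklist.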

Finally, meet with the product manager. Find out the business intent and see if there are any disconnects. To be even more effective, invite the developer and keep them talking. If the developer or QA can demo the change as it is coded so far, or demo a prototype, then you will likely get new or previously undocumented requirements from the product manager.

Why? It's the nature of human communication. We all interpret things to meet our needs, and those needs create different understandings. The QA team member needs to note discrepancies and lead the conversation to ensure what's being coded is what the customer is expecting.

Bridging the understanding gap and getting to acceptance criteria in software testing are essential goals to ensure the release of quality software with fewer bugs. They also make for much happier customers.
