Robin F. Goldsmith, JD
In many organizations, test cases are written with extensive procedural detail, such as: "click here, hit tab, left arrow, right arrow, get up, go to the coffee area, pour a cup of coffee, add two bags of sugar, stir (don't shake) the coffee, put a lid on the coffee, neatly place the empty sugar packets in the recycling bin, return to your desk, sit down, adjust your chair, take a sip of coffee" and so on.
I'm not being facetious. While they may not get into refreshment breaks, many test scripts are loaded with comparably detailed step-by-step procedures. I actually read a book on testing that described defining tests one input keystroke at a time. This is a bad practice that leads testers to believe that creating test plans and designs is just busywork, and it's the one I'll dissect in this tip. In a separate tip, I identify and discuss another such bad practice: confusing test cases with test plans and designs.
I've heard many people -- especially 'higher-ups' -- say that test cases must be written in this tedious manner. Their motivation is to enable the tests to be repeated identically, typically by low-priced people who probably don't know anything about the application under test.
Why too many instructions mess up test cases
Let's count the ways that this belief and practice are counterproductive.
- All the detail makes each test take an inordinate amount of time to write, so you end up with far fewer tests.
- When code changes, and it will, each test takes longer to change, too.
- A high-priced test designer will probably have to spend more time writing the script than a low-priced tester saves executing it, a trade-off that probably does not pay back from any perspective.
- Spending too much time and money writing such overly detailed tests will, ironically, reduce the number of tests created and run, so you're less likely to catch bugs.
Let's look at that last point more closely. Effective testers know that tests catch more of the important bugs when the tests exercise the system the way it actually will be used. Real users don't follow procedural scripts, so by definition testers mechanically following keystroke-level instructions are not exercising the system the way it really will be used.
In general, testers using such procedural scripts tend to blindly follow the script and will detect only defects that the script specifically invokes. There's a good chance the script is aimed at the very things the developer already has made sure work. Conversely, such scripts are less likely to reveal things the developer didn't envision.
Effective testers find additional defects because they go beyond the letter of their planned tests. In contrast, testers executing keystroke-level procedures tend not to observe or act beyond their very precise instructions.
If identical repetition really is an objective, then automated execution is probably the best approach. Automated test execution tools will execute identically without making mistakes or complaining. Also, automation reduces the cost per execution with each added execution, whereas without automation you have to keep paying a human tester for each repetition.
It's a false economy to rely on testers who don't use the system the way a real user would. Money is better spent training the testers in the application than writing excessively procedural test scripts.
Moreover, you can write far more, and better, tests by concentrating on identifying the conditions that must be demonstrated to be confident the system works. A test case at its essence consists of inputs and/or conditions and expected results. The Test Case Specification describes these in words. The Test Case Data Values are what get physically input and matched against actual results.
The Test Case Specification usually has a one-to-many relation to the Data Values, so the two should be kept separate to avoid redundant writing. Similarly, procedural descriptions should be written at a minimal, functional level and also kept separate, because they too have a one-to-many relation to Test Cases. Keeping these elements separate also facilitates automating the test execution and keeping the data values in a file, spreadsheet, or database.
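To make the separation concrete, here is a minimal sketch in Python of one Test Case Specification driven by many sets of Data Values kept apart from it. All the names (the discount rule, the `apply_discount` function, the CSV columns) are illustrative assumptions, not anything from the article; the point is only the structure: one specification, many data rows, and data that could just as easily live in a file, spreadsheet, or database.

```python
# Illustrative sketch (not the author's example): one Test Case Specification,
# many Test Case Data Values, kept separate as the article recommends.
import csv
import io

# Test Case Specification, stated once: "Applying a discount rate to an
# order total yields total * (1 - rate), never below zero."
def expected_discounted_total(total, rate):
    return max(total * (1 - rate), 0.0)

def run_test_case(system_under_test, total, rate):
    """Drive one execution: feed the data values in, compare actual vs expected."""
    actual = system_under_test(total, rate)
    expected = expected_discounted_total(total, rate)
    return abs(actual - expected) < 1e-9

# Test Case Data Values, kept separately. An in-memory CSV stands in here
# for a real file, spreadsheet, or database table.
DATA_VALUES = """total,rate
100.0,0.10
250.0,0.00
80.0,1.00
"""

def apply_discount(total, rate):  # hypothetical stand-in for the system under test
    return total * (1 - rate)

results = [
    run_test_case(apply_discount, float(row["total"]), float(row["rate"]))
    for row in csv.DictReader(io.StringIO(DATA_VALUES))
]
print(results)  # one pass/fail per data row, same specification throughout
```

Adding a new test then means adding one data row, not rewriting a procedural script, and an automated execution tool can loop over the rows exactly as this sketch does.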
Learn more about test plan and design bad practices in Robin Goldsmith's tip on the problems of confusing test plans with test cases.
About the author: Robin F. Goldsmith, JD, has been president of consultancy Go Pro Management Inc. since 1982. He works directly with and trains business and systems professionals in requirements analysis, quality and testing, software acquisition, project management and leadership, metrics, process improvement and ROI. Robin is the author of the Proactive Testing and (with ProveIT.net) REAL ROI methodologies and also the recent book Discovering REAL Business Requirements for Software Project Success.