What should test team leads keep in mind when they're thinking of how to write test scripts in cases where other testers will execute them?
Many test organizations train their test teams to write test scripts as precise instructions, sometimes down to the number of Tab keystrokes needed to navigate from one field to another, with exact values to be entered in the UI or whatever the data source is. This yields easy-to-define, predictable results in the "Expected Results" field for each step.
Anything that does not match the described Expected Results is a variance, possibly a bug, and the tester records the variance.
In some circumstances this is needed and works very well. In others it can reduce the efficiency of the test effort and lead to unwarranted confidence in the system. When conditions are encountered that are similar to, but not precisely the same as, the scenarios covered by the heavily scripted tests, the result is often the discovery of a previously undetected fault.
My concern is that exercising the same logic with the same variables over and over will quickly find all the bugs on the scripted path as written, but none on either side of that path. When we vary from that path, we find more bugs in the code.
The challenge for me, and I expect for most testers, is to provide enough detail about what the test is intended to discover: what we wish to learn from the test, and what data or other information the project stakeholders need to make an informed decision.
The challenge to doing this in every project for every test organization is that each project is different and the expectations of project stakeholders are sometimes ill-defined. What we can do is something less controlling and more liberating that may result in a broader range of understanding.
Here are some general guidelines I have found work reasonably well.
First, understand, to the best of your ability, the concerns of the stakeholders, the development group and others involved in the project. Then examine the ways that the test paths or scripts you are working on will exercise each of these areas of concern, as well as the ancillary concerns they did not directly express but that touch on, or are touched by, the expressed ones.
Give the people executing the tests enough information to understand these concerns and what you would like to have tested. Then provide high-level paths that exercise those functions, with the areas of interest clearly identified.
Instead of mapping out precise steps ("First do THIS, then do THIS and then do THIS"), invite the testers to consider ways to exercise the function using different paths. Invite them to participate in mapping those paths, and give them the tools and the freedom to exercise the software rigorously, but not in the lock-step manner that many organizations depend on for "complete testing."
Instead of writing tests as highly detailed scripts -- "The Tests" -- I prefer to look at them as starting points that identify areas of interest to the project. Then, after working through the initial considerations, testers should have the freedom to work around these areas and see what happens when minor variations are introduced.
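The contrast can be sketched in code. As a minimal illustration (the discount function, its names and its threshold are hypothetical, not taken from any particular project), compare a rigidly scripted check of one fixed value with a test that starts from the same value and then probes the boundary and the neighborhood around it:

```python
import random

# Hypothetical function under test: applies a 10% discount
# to orders of $100 or more (a stand-in for the real system).
def apply_discount(total):
    if total >= 100:
        return round(total * 0.9, 2)
    return total

# A rigidly scripted test: one fixed input, one expected result.
# It exercises exactly one point on the path and nothing around it.
def scripted_test():
    assert apply_discount(100.00) == 90.00

# A variation-oriented test: begin at the scripted value, then probe
# the boundary and random nearby inputs, checking a property that
# should hold everywhere rather than a single fixed output.
def variation_test(trials=50):
    inputs = [99.99, 100.00, 100.01] + [
        round(random.uniform(90, 110), 2) for _ in range(trials)
    ]
    for total in inputs:
        result = apply_discount(total)
        if total >= 100:
            assert result == round(total * 0.9, 2)
        else:
            assert result == total  # below the threshold: unchanged

scripted_test()
variation_test()
```

The scripted test pins down the starting point; the variation test is the "minor variations" idea above expressed as code, and it is the one that would catch an off-by-one fault at the boundary that the fixed-value script would never see.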