|Robin F. Goldsmith, JD|
Many software quality assurance (QA) and testing people, including some industry leaders, vociferously denounce test planning and design as worthless busywork that generates voluminous documentation pretty much for its own sake. They say that time spent writing plans reduces the time available to run tests, so it's better to skip test planning and design and just run tests.
Instead of "throwing the baby out with the bath water" -- recommending removing test planning and design entirely -- consider the possibility that it can be valuable if you avoid some of the misguided practices that diminish its worth. In this two-part tip, I identify two of those bad practices. In this installment, I look at the situation in which test cases are confused with test planning and design. In part two, I explore the drudgery of over-writing test cases.
Are your test plans really just test cases?
In my seminars, I often informally survey participants about what their organizations mean by "test plan." Usually, half describe their test plans as the set of test cases that they will execute.
While it's indeed valuable to have a set of test cases in hand, test plans that are only test cases miss the considerable additional important benefits of actually planning the testing.
A test plan should be the project plan for the testing project. Testing can be more effective when it's treated as a project, which in turn is a sub-project within the overall development project. Treating testing as a project provides the opportunity to take advantage of the various project management techniques that have been found helpful in increasing the likelihood of success. Planning is generally recognized as the highest-payback project success technique.
A project plan does the following:
- Identifies what needs to be done, how it can be done, and what it will take in terms of resources, effort, and duration;
- Describes the tasks, sequences, and dependencies required;
- Sets the timing needed to accomplish the work and mitigate potential risks.
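The elements above -- tasks, resources, effort, and dependencies -- can be sketched as a minimal data structure. This is purely an illustration of the planning idea, not anything prescribed by the article or any standard; all task names and fields are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a test-plan task with the project-plan elements
# described above (effort, resources, dependencies).
@dataclass
class Task:
    name: str
    effort_days: float                      # estimated effort
    resources: list[str]                    # who or what is needed
    depends_on: list[str] = field(default_factory=list)

def schedule(tasks: list[Task]) -> list[str]:
    """Order tasks so every dependency runs first (simple topological sort)."""
    done: set[str] = set()
    order: list[str] = []
    pending = {t.name: t for t in tasks}
    while pending:
        ready = [n for n, t in pending.items() if set(t.depends_on) <= done]
        if not ready:
            raise ValueError("circular dependency in plan")
        for n in sorted(ready):
            order.append(n)
            done.add(n)
            del pending[n]
    return order

plan = [
    Task("write test cases", 3, ["tester"], depends_on=["design tests"]),
    Task("design tests", 2, ["test lead"]),
    Task("execute tests", 5, ["tester"], depends_on=["write test cases"]),
]
print(schedule(plan))  # design tests first, execution last
```

Even this toy version shows the payoff the article describes: writing down tasks and dependencies surfaces sequencing problems (such as circular dependencies) before execution begins.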
The plan helps ensure that you have what you need when you need it, guides execution to make sure you've done what is needed, and identifies when things go off track, indicating the adjustments required.
By serving these purposes, a test plan helps the testing project succeed. It's the thought process that's important, not tons of verbiage. One writes down the test plan so the thinking is not lost or forgotten and so it can be shared, reviewed, and refined.
A written test plan facilitates scheduling resources and carrying out intended tasks without missing any or going out of sequence. The plan serves as a record of what's been done both for ascertaining project status and suggesting improvements which can be implemented during the current project or on subsequent projects. True agility involves writing no more than is helpful, but no less!
IEEE Std. 829-2008 is a standard for test documentation that clearly distinguishes between test plans and test cases. This is a somewhat controversial standard, because many people interpret it as a dictate for generating documentation. Instead, I use it to organize my thinking, making the writing incidental and just enough to be helpful.
As can be seen in the diagram, the standard suggests using four levels of test planning and design documents. We start with a Master Test Plan, which is the project plan for the testing project. It identifies the set of Detailed Test Plans that, taken together, must be demonstrated to work to give confidence that the overall testing project works.
Detailed Test Plans also are project plans, but for smaller sub-projects within the overall testing project: unit tests, integration tests, special tests -- such as usability, performance, and security -- system tests, independent QA tests and user acceptance tests. Each Detailed Test Plan indicates the set of features, functions, and capabilities that, taken together, must be demonstrated to work to give confidence that the respective detailed test -- unit, integration, and so on -- succeeds.
For each such feature, function, and capability, we can have a Test Design Specification. This is a little-known but valuable technique for identifying, and dealing economically with, the set of executable Test Cases that, taken together, must be demonstrated in order to show that the Test Design Specification works.
It's understandable why many people misinterpret the standard as dictating a huge amount of documentation. At first glance, the above description does sound like a lot of busywork. In fact, it's just the opposite. When used appropriately, the structure enables more efficient and effective testing, wherein important test conditions are less likely to be overlooked, forgotten, or skipped.
The key to using the standard's structure is to realize that each level is largely a list of items at the next level down. For example, identifying the needed set of detailed tests in the Master Test Plan doesn't take much time but pays off hugely by revealing big risks that otherwise often are missed.
Then, time is mainly devoted to those Detailed Test Plans that risk analysis indicates are the highest priority. These in turn are largely lists of Test Design Specifications, which are themselves largely lists of Test Cases; at each level, time is spent only on the items that address the highest risks.
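The four-level structure just described -- each level largely a list of items at the next level down, elaborated only where risk analysis says it pays -- can be sketched as nested data. The plan names, features, risk labels, and the `highest_risk` helper below are all hypothetical illustrations, not part of IEEE Std. 829-2008 itself.

```python
# Hypothetical sketch of the four-level structure: a Master Test Plan is
# largely a list of Detailed Test Plans; each of those is largely a list of
# Test Design Specifications; each of those is largely a list of Test Cases.
master_test_plan = {
    "detailed_test_plans": {
        "system test": {"risk": "high"},
        "integration test": {"risk": "medium"},
        "usability test": {"risk": "low"},
    }
}

# Only the highest-risk detailed plan gets elaborated further, first into
# Test Design Specifications, each of which starts as just a list of
# Test Case names -- the detail is filled in later, and only where it pays.
master_test_plan["detailed_test_plans"]["system test"]["test_design_specs"] = {
    "login feature": ["valid login", "invalid password", "locked account"],
    "checkout feature": ["pay by card", "empty cart"],
}

def highest_risk(plans: dict) -> list[str]:
    """Names of the detailed test plans to elaborate first."""
    return [name for name, p in plans.items() if p["risk"] == "high"]

print(highest_risk(master_test_plan["detailed_test_plans"]))  # ['system test']
```

The point of the sketch is that nothing below the top level is written out in full up front; lower levels stay as cheap lists of names until risk analysis justifies spending time on them.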
The net effect is that the structure's minimal but sufficient documentation makes it possible to identify the set of most important Test Cases far more thoroughly, reliably, and quickly before getting into the more time-consuming work of creating the executable Test Cases. Effective test planning and design is like plotting a route before a trip: a small amount of added effort that helps you reach your destination with the least time, effort, and aggravation.
In part two of this tip, I examine the second bad practice: over-writing test cases.
About the author: Robin F. Goldsmith, JD, has been president of consultancy Go Pro Management Inc. since 1982. He works directly with and trains business and systems professionals in requirements analysis, quality and testing, software acquisition, project management and leadership, metrics, process improvement and ROI. Robin is the author of the Proactive Testing and (with ProveIT.net) REAL ROI methodologies and also the recent book Discovering REAL Business Requirements for Software Project Success.