How do I design a test case?
Hopefully, your design will result in practical, reusable test cases. I've outlined ideas to help you through the design process.
The first consideration is the audience. Who is going to read and use the test cases? If you're designing test cases for junior testers, experienced testers, or offshore testers, your test case design will vary. Think about your audience.
Are you writing cases so that new testers or offshore testers will have "all the information" they need contained in each test case? If yes, then test cases need to be fairly detailed. The test purpose for the test case should be clear. The test data might be provided with the test case; if you include the test data, then you need to identify whether the test data must be used "as defined" or if the test data supplied is an example.
Test data is such an important part of testing. If the test data written into the test case is an example of the type of data to be used and the tester has leeway to be creative and supply other test data, then the test case needs to identify this. Or you can state this fact for all the test cases defined and make sure this is clear to the team.
Let's look at an example. Imagine a test case designed to demonstrate that no two user accounts can be created with the same account credentials. With this stated test case purpose, as a tester, I would try any way I can think of to break this requirement. Maybe I'll create an account, deactivate the account, create another account with the same user credentials and then reactivate the first account. I'll be looking for ways to break the requirement. Compare this example to following a test case that states create a user account with the name Joe Smith, verify the account is created, then attempt to create another account with the name Joe Smith.
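To make the contrast concrete, here is a minimal sketch of a purpose-driven test along those lines. The `AccountService` class, its method names, and the deactivate/reactivate flow are all hypothetical stand-ins for whatever system you are actually testing; the point is that the test encodes the purpose (no duplicate credentials, even across deactivation) rather than a single prescribed click path.

```python
# A hypothetical in-memory account service, used only to illustrate the
# test purpose; a real system under test would look quite different.
class AccountService:
    def __init__(self):
        self._accounts = {}  # username -> active flag

    def create(self, username):
        # Credentials stay reserved even for deactivated accounts.
        if username in self._accounts:
            raise ValueError("duplicate credentials")
        self._accounts[username] = True

    def deactivate(self, username):
        self._accounts[username] = False

    def reactivate(self, username):
        self._accounts[username] = True


def test_no_duplicate_accounts_even_after_deactivation():
    """Purpose: no two accounts may share the same credentials,
    even when the first account is deactivated in between."""
    svc = AccountService()
    svc.create("joe.smith")
    svc.deactivate("joe.smith")
    try:
        svc.create("joe.smith")
        duplicate_allowed = True
    except ValueError:
        duplicate_allowed = False
    assert not duplicate_allowed
```

A tester working from the stated purpose would keep inventing variations like this one; a tester working from a literal "create Joe Smith twice" script would stop after one check.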
Because I want every tester to be creative in what they try, a test purpose might be the most valuable aspect of a test case. And unless there is a reason to be prescriptive with the test data, I prefer to identify all test data as a variable. I want testers to think of additional ideas and to test with different test data.
Be clear when you need a test case executed as prescribed versus when you want a tester to be more creative. Be clear when you need the test data supplied to be used as defined versus when a tester can try different values. To clarify when test data was a variable, I once highlighted the variable test data in color.
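One lightweight way to capture that distinction is to tag each piece of test data with its mode. The record layout below is a sketch of my own devising, not a standard format; the field names and the `"example"` / `"as_defined"` labels are illustrative assumptions.

```python
# Hypothetical test-case record: each datum carries a flag saying
# whether the value must be used exactly or is only an example.
test_case = {
    "id": "TC-101",
    "purpose": "Verify duplicate account credentials are rejected",
    "data": [
        # Tester may substitute other names here.
        {"field": "username", "value": "Joe Smith", "mode": "example"},
        # This value must be used exactly as written.
        {"field": "locale", "value": "en_US", "mode": "as_defined"},
    ],
}


def variable_fields(case):
    """List the fields where the tester has leeway to be creative."""
    return [d["field"] for d in case["data"] if d["mode"] == "example"]
```

Whether you use flags like this, a color highlight, or a blanket statement for the whole suite matters less than making the rule unmistakable to the team.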
Content is another consideration. Once you determine who will use the material, the tone and detail will be easier to set. How do you determine the content detail? If you work in a regulated environment and the product you're testing is subject to audits, then the decision has been made for you. You need to follow your guiding procedures and adhere to what is required. If you're working in a non-regulated, faster-moving environment, then creating detailed step-by-step cases might result in throwaway test cases too cumbersome to update or be practical. The context of your environment and the specific product are factors to consider. Also consider the experience level of the test team, both in terms of testing experience and specific product experience.
Format is another consideration. I personally don't feel the format matters very much; test cases should be about the content, not the format. But you might have purchased a tool to store test cases and need to use the tool. You might have guiding practices that state test cases will be written in Word or Excel or Test Director. I like to make sure that the format I use is practical for testers and that it doesn't cause additional work or otherwise distract testers from their overall goal, which is testing the software.
A final consideration might be the ranking of test cases. The more test cases you have, the more test case maintenance becomes a factor. You might consider ranking test cases as you build them, understanding that both the product and the test cases will evolve over time. By ranking, I'm referring to a method to identify which test cases are critical for execution and which cases are less important to execute. If a test case has been designed for regression testing of each release and the test case covers critical functionality, then you might consider grading or ranking the test case.
I once solved the ranking problem by marking each test case as a level 1-3. Level 1 cases needed to be executed for each build since they checked core functionality, level 2 cases were selected based on functional changes to the product, and level 3 cases were used when a full regression suite was run. You will likely review all your test cases and reassess execution priority with each product release, but as you develop test cases, you might think about how critical each one is to the product as a whole rather than focusing only on the current release.
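The level scheme above can be sketched as a simple selection rule. The levels follow the 1-3 scheme just described; the run-type names and the function itself are my own illustrative assumptions, not part of any particular tool.

```python
# Illustrative catalog: each case tagged with a level per the 1-3 scheme.
CASES = [
    {"id": "TC-1", "level": 1},  # core functionality: run on every build
    {"id": "TC-2", "level": 2},  # run when related functionality changes
    {"id": "TC-3", "level": 3},  # run only in a full regression pass
]


def select_cases(cases, run_type, functional_changes=False):
    """Pick case IDs for a run: 'build' runs level 1 (plus level 2 when
    related functionality changed); 'full_regression' runs everything."""
    if run_type == "full_regression":
        return [c["id"] for c in cases]
    selected = [c["id"] for c in cases if c["level"] == 1]
    if functional_changes:
        selected += [c["id"] for c in cases if c["level"] == 2]
    return selected
```

For example, a routine build run would pick only the level 1 cases, while a build touching the affected functionality would pull in level 2 as well.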
In sum, think practical and reusable. Detailed enough for your audience. Each test case should have a purpose and each test case should denote whether the test data supplied should be used "as is" or if the test data shown is an example. And the best format to use is one that's maintainable.