
How to write test cases, one component at a time

QA engineers must design good, effective test cases, a task that requires detailed information from the start. Here's how to write thorough, maintainable and automatable test cases.

Efficient and effective test cases ultimately result in high-quality tests. QA engineers must design each test case to fully reflect the application features and functionality under evaluation. But it's even more important to write test cases that provide all the information needed to execute the test correctly -- and, if applicable, automate it.

With practice and experience, engineers can design effective test cases. Learn how to write test cases by putting extra attention on the parts and process. Test case components include:

  • Test name
  • Test ID
  • Objective
  • References
  • Prerequisites
  • Test setup
  • Test steps
  • Expected results

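The components listed above can be sketched as a simple record. This is a minimal illustration, not a standard schema; the field names simply mirror the list:

```python
from dataclasses import dataclass, field

# Hypothetical test case record; field names mirror the components
# listed above and are an illustration, not a standard schema.
@dataclass
class TestCase:
    test_id: str
    name: str
    objective: str
    references: list = field(default_factory=list)    # linked requirements
    prerequisites: list = field(default_factory=list)
    setup: str = ""
    steps: list = field(default_factory=list)         # (action, expected result) pairs

tc = TestCase(
    test_id="TC-001",
    name="Create Customer Account",
    objective="Verify a customer account can be created with valid data.",
    references=["REQ-12"],
    steps=[("Click 'Create Customer Account'", "Account form opens")],
)
print(tc.test_id, tc.name)
```

Pairing each step with its expected result in one structure keeps the two components from drifting apart as the test case is maintained.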
What test case components entail

Name. The test name or title should describe the functionality or feature that the test verifies. Describe the action as thoroughly as possible, without being needlessly wordy. For example, Create Customer Account is more descriptive than Create Account, especially if the application offers different types of accounts.

ID. Test ID is usually a numeric or alpha-numeric identifier that QA engineers use to group test cases into test suites. Either the test management tool automatically generates the ID format, or the test lead determines one to follow.

Objective. The objective, sometimes called description, is one of the most important components of test case design. The objective explains exactly what the assessment intends to verify, typically in one or two sentences. If more explanation is required, this complexity might indicate that QA needs multiple test cases, instead of one, to cover the functionality.

References. References are links back to the user stories, design specifications or requirements that the test is expected to verify. Each requirement typically includes multiple test cases. Additionally, requirements often state that functionality should undergo at least one positive test and one negative test. The positive test validates that the functionality works as expected, while the negative one ensures that the feature does not do something that it shouldn't. For example, if a name field should contain alpha characters, the negative test validates that the field rejects numeric or special characters.
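The name-field example can be written as one positive and two negative checks. The validator here is hypothetical, assuming a field that accepts only alphabetic characters and spaces:

```python
import re

# Hypothetical validator for a name field that should accept only
# alphabetic characters and spaces -- an illustration, not real app code.
def is_valid_name(value: str) -> bool:
    return bool(re.fullmatch(r"[A-Za-z ]+", value))

# Positive test: valid input is accepted.
assert is_valid_name("Jane Doe")

# Negative tests: numeric and special characters are rejected.
assert not is_valid_name("Jane123")
assert not is_valid_name("J@ne")
```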

Prerequisites. Prerequisites describe any conditions necessary for the QA engineer to execute the test. A prerequisite might be that test data is in a certain state, or that another test case must run prior to this one's execution. Be mindful that prerequisites for test cases can cause issues in a test suite, as when a failure of the first test blocks subsequent ones from running. This dependency across test cases is especially problematic when automated tests run without human intervention.
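One way to avoid that dependency is to have each test build its own preconditions rather than rely on an earlier test having run. A sketch with made-up helper names:

```python
# Sketch: instead of the second test depending on the first one having
# run, each test creates its own preconditions. All names are hypothetical.

def create_account(store, username):
    store[username] = {"active": True}
    return store[username]

def test_create_account():
    store = {}                        # fresh state; no reliance on other tests
    account = create_account(store, "jane")
    assert account["active"]

def test_deactivate_account():
    store = {}
    create_account(store, "jane")     # precondition built inside the test itself
    store["jane"]["active"] = False
    assert not store["jane"]["active"]

test_create_account()
test_deactivate_account()
print("both tests ran independently")
```

Either test can now run, or fail, without affecting the other.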

Setup. Test setup details what the test case needs to run correctly. The test setup information often includes application version, OS, security specifications, physical access restrictions, hardware information or date and time requirements, as well as any other equipment required.

Steps. Test steps are the sequential actions to execute the test. QA engineers should thoroughly detail test steps so that anyone, even someone unfamiliar with the application, can do them. Test cases typically contain no more than 10 steps. If the test case needs more steps, this is another sign that you might need multiple test cases to fully examine that particular functionality.

Results. The expected results outline how the application should respond to each of the test steps. When you write test cases, detail an expected result for every test step.
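A test runner can enforce that pairing by checking the application's response after every action. The "application" below is a stand-in dictionary, assumed purely for illustration:

```python
# Sketch: each step carries its expected result, and the runner verifies
# the response after every action. 'responses' simulates the application.
steps = [
    ("open account form", "form displayed"),
    ("submit valid data", "account created"),
]

responses = {                          # simulated application responses
    "open account form": "form displayed",
    "submit valid data": "account created",
}

for action, expected in steps:
    actual = responses[action]
    assert actual == expected, f"step '{action}' failed: got {actual!r}"
print("all steps passed")
```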

How automated test cases differ

Test cases that are candidates for automation require some additional steps and considerations from the team. To write automated test cases, engineers must explain test steps down to the click level, and each test case should be atomic, meaning it is a whole, self-contained process.

Write one test per test case, and make these tests autonomous so that the execution is not reliant on other tests' outcomes. Test cases that alter the environment or data when they run should return those elements to their original states upon completion.
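Restoring the environment can be done by snapshotting state before the test and putting it back afterward, even on failure. The in-memory "database" here is a hypothetical stand-in:

```python
import copy

# Sketch of returning the environment to its original state after a test
# that mutates it; 'database' is a hypothetical in-memory stand-in.
database = {"customers": []}

def run_test_with_cleanup():
    original = copy.deepcopy(database)   # snapshot before the test runs
    try:
        database["customers"].append("temp-account")   # test mutates data
        assert "temp-account" in database["customers"]
    finally:
        database.clear()
        database.update(original)        # restore, even if the test failed

run_test_with_cleanup()
assert database == {"customers": []}     # environment is back to its original state
```

Test frameworks generally offer fixtures or setup/teardown hooks for exactly this pattern, so the cleanup runs regardless of the test's outcome.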

Write test cases in an action-oriented language with good grammar, and devise them from the user's perspective.

These principles apply to both manual and automated test cases, but they are even more critical for the latter. Automated tests run without human intervention, so any issue that crops up can halt the whole process and block subsequent test cases.

Test cases inevitably require updates as requirements change, so compose them in a way that makes the components easy to maintain. For example, if the expected results for a test case include amounts that might transform based on rules changes, validate that data by referring to a rules document or spreadsheet -- not the actual amounts listed in the test case. Also, develop a traceability matrix that links the test cases back to the requirements to ensure 100% test coverage.
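At its simplest, a traceability matrix maps each requirement to the test cases that reference it, which makes uncovered requirements easy to flag. The requirement and test IDs below are made up for illustration:

```python
# Sketch of a simple traceability check: map test cases to the
# requirements they cover, then flag any requirement with no coverage.
# Requirement and test IDs are hypothetical.
requirements = ["REQ-1", "REQ-2", "REQ-3"]
test_cases = {
    "TC-001": ["REQ-1"],
    "TC-002": ["REQ-1", "REQ-2"],
}

covered = {req for refs in test_cases.values() for req in refs}
uncovered = [req for req in requirements if req not in covered]

print("uncovered requirements:", uncovered)
```

Here REQ-3 has no linked test case, so coverage is incomplete until a test is written for it.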
