
Don't write simplistic test cases

What is a "simplistic" test case and how can you avoid writing them?

Cem Kaner has said, "Don't write simplistic test cases." It would be a great help if you could provide information about how to identify a simplistic test case.

For example, say I am writing automated test cases for GUI testing, such as checking the visibility of buttons, or checking field validation, such as whether a field is mandatory or whether it accepts numbers or characters. Does this sort of test case fall under simplistic test cases, and do we need to automate these tests? I understand it depends on the stage at which we plan to automate, but I would like a better explanation. Thanks in advance.

I find it curious that you have a question about something Cem said and you didn't contact him for clarification. Here's my opinion on what Cem meant:

I would view a test case as simplistic in one of two ways. The first is a test case that fails to challenge. For example, if you were testing the login page of a Web application, a simplistic test case would be to enter one valid account name and password and log in successfully. Another example would be challenging the login process, but with only one incorrect value in one of the fields. A robust test would enter an incorrect value in each field separately, and then incorrect values in both fields at the same time. The test could continue by checking the number of times incorrect values could be entered before an account is locked out. Additionally, robust tests would try different types of incorrect values: no entry at all, a numeric value where the field is alpha, and entries that challenge the maximum and minimum values, to name a few.
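The contrast can be sketched as a small data-driven test. Everything here is an illustrative assumption, not a real API: `validate_login`, the one valid account, and the lockout threshold of three attempts are stand-ins for whatever your application actually does.

```python
# Hypothetical stand-in for the application's login check (illustrative only).
VALID_USER, VALID_PASS = "alice", "s3cret!"
MAX_ATTEMPTS = 3  # assumed lockout threshold

def validate_login(user, password):
    """Accept only the one valid account (a stand-in for the real logic)."""
    return user == VALID_USER and password == VALID_PASS

# A robust test challenges each field alone, both together, and edge values,
# instead of stopping at the single happy-path case.
cases = [
    (VALID_USER, VALID_PASS, True),   # happy path -- the "simplistic" case
    ("wrong",    VALID_PASS, False),  # bad account name only
    (VALID_USER, "wrong",    False),  # bad password only
    ("wrong",    "wrong",    False),  # both fields wrong at the same time
    ("",         "",         False),  # no entry at all
    ("12345",    VALID_PASS, False),  # numeric value where the field is alpha
    ("a" * 256,  VALID_PASS, False),  # challenge the maximum length
]

for user, password, expected in cases:
    assert validate_login(user, password) is expected, (user, password)

# Lockout: count consecutive failures up to the assumed threshold.
failures = sum(1 for _ in range(MAX_ATTEMPTS)
               if not validate_login("wrong", "wrong"))
locked = failures >= MAX_ATTEMPTS
print("all login cases passed; locked after", failures, "failed attempts")
```

The point is not the particular values but the shape of the test: a table of conditions iterated in one loop, rather than one hard-coded success case.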

Let's continue with this login example and consider automation. If you were automating the test, a simplistic test might fail to code the data entry fields as variables, and instead use one hard-coded set of values. A more robust test would treat both fields as variables and provide numerous entries to iterate through. We can return to your example of data entry validation: alpha, numeric, boundary, left-space, decimal, negative, and special-character values may all be entries to consider, depending on the data entry field.
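As a sketch of that data-driven approach, here is an iteration over those entry classes against a hypothetical numeric field. The validation rules (whole numbers from 0 to 999, no surrounding spaces) are assumptions chosen for illustration; the technique is the table-plus-loop structure.

```python
# Hypothetical validator for a numeric entry field that accepts whole
# numbers from 0 to 999 -- the rules are illustrative assumptions.
def field_accepts(text):
    if text != text.strip():     # reject values with left or right spaces
        return False
    if not text.isdigit():       # rejects alpha, decimals, negatives, specials
        return False
    return 0 <= int(text) <= 999  # boundary rule

# Instead of one hard-coded value, iterate a table covering each entry class.
entries = [
    ("42",   True),    # plain numeric
    ("abc",  False),   # alpha
    ("0",    True),    # minimum boundary
    ("999",  True),    # maximum boundary
    ("1000", False),   # just past the maximum
    (" 42",  False),   # value with a left space
    ("4.2",  False),   # decimal
    ("-1",   False),   # negative
    ("$42",  False),   # special character
]

for value, expected in entries:
    assert field_accepts(value) is expected, value
print("all", len(entries), "entry classes behaved as expected")
```

Adding a new class of input later means adding one row to the table, not writing a new test.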

For the other example you gave regarding checking the GUI for buttons, a simplistic case would check the visibility of the existing buttons as the buttons currently exist and are labeled. A more robust test could iterate through each button on the page and return the text label for each, checking both the text of the label and the visibility of the button. You can extend the test further if there is a condition under which one, more, or all of the buttons should be disabled and you test not just the existence of each button but whether each button is enabled or disabled under a set condition.
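The button iteration might look like the sketch below. A real test would pull the button list from a GUI driver; here a plain list of dictionaries stands in for the page so the checking logic stays visible. The labels, the `cart_is_empty` condition, and the rule that "Checkout" is disabled on an empty cart are all hypothetical.

```python
# Hypothetical snapshot of a page's buttons -- a real test would read these
# from a GUI automation driver rather than a hard-coded structure.
page_buttons = [
    {"label": "Add to cart", "visible": True, "enabled": True},
    {"label": "Checkout",    "visible": True, "enabled": False},
    {"label": "Help",        "visible": True, "enabled": True},
]

EXPECTED_LABELS = {"Add to cart", "Checkout", "Help"}
cart_is_empty = True  # assumed condition driving the enabled/disabled rule

problems = []
for button in page_buttons:
    # Simplistic: check only that the button is visible.
    if not button["visible"]:
        problems.append(button["label"] + " is not visible")
    # More robust: also check the text of each label...
    if button["label"] not in EXPECTED_LABELS:
        problems.append("unexpected label: " + button["label"])
    # ...and whether each button is enabled or disabled under the condition.
    should_be_enabled = not (cart_is_empty and button["label"] == "Checkout")
    if button["enabled"] != should_be_enabled:
        problems.append(button["label"] + " has the wrong enabled state")

assert not problems, problems
print("checked", len(page_buttons), "buttons; no problems found")
```

Because the loop checks every button, the same test keeps working when buttons are added, relabeled, or conditionally disabled.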

Whether you need to automate a test to check the buttons on a page depends on how important the buttons are to the application. For example, I was testing an e-commerce site and found the buy button wasn't appearing on the Mac. Sometimes checking the existence of a button could be an important test.

I think the essence of what Cem was trying to convey is to think beyond a singular condition; think past one simple test and build tests that are robust and extensible.

After writing my reply, I contacted Cem and asked if he would like to comment on the question. Following are his comments:

We write tests to learn things about the application under test. A test that provides no information has no value. A test that provides little information has little value, perhaps less than the cost of creating and maintaining the test.

Every aspect of a program carries some uncertainty. We can never be absolutely confident that any aspect of a program will work. Only God knows whether the program will operate correctly on every set of data, in every environment, under every circumstance, every time. However, we can be relatively confident. We can conclude that there is very high uncertainty associated with one feature and very little uncertainty with another. We can conclude that a group of features has sufficiently little uncertainty that people should be willing to bet their lives on the correct functioning of the software. (For example, people bet their lives on the software that controls their car's fuel injectors and brakes.) We have a lot of confidence in this code -- but not absolute.

Testing is an empirical method for reducing uncertainty. (We run tests to learn things.) If a test doesn't address an uncertainty, it's not a test. If it addresses a matter of very low uncertainty, then running the test provides little information.

Consider a test that checks whether a button's graphic changes appropriately when you press the button. There is a time in development of the program when this is a high-uncertainty test. When I first write my code, for example, I don't know whether it works. This is a good example of something I'd want to address with unit tests. However, there isn't much uncertainty later and so a test this simple might be almost pointless. In addition, we're probably going to see this button in lots of other tests that involve the feature that is triggered when the button is pushed. You can check the graphic in any of these, if you want to, rather than maintaining a standalone test that merely asks whether the graphic changes the right way when you press the button. That simple test provides no additional information beyond what you should already get from your other tests.
