There are many types of software behavior that you can test using user interface (UI) tests, but just because you can doesn't mean you should. Business logic is almost always better covered by unit tests, integration tests or even manual tests, such as a visual checklist. If you test the wrong things with UI tests, you will end up with brittle, complex and ultimately untenable tests. On the other hand, UI testing can be beneficial in testing what it is meant to test: the user interface.
The most important thing that UI tests do is to navigate the application. Neither unit tests nor integration tests ever navigate the application. Another important thing UI tests accomplish is to demonstrate that the proper elements are shown to the user at the proper times. As such, well-designed UI tests are shallow; they should exercise business function only to the extent necessary to validate proper navigation and display. When UI tests descend to the level of checking business functions, they become more and more difficult to maintain.
What follows is a list of things not to test with UI tests.
One thing computers are good at is math. But it is almost never the case that a computer crunches numbers at the UI level. Take for instance an application that calculates your potential 401(k) benefits. It gives you the ability to add certain amounts of money to your account, to set an interest rate and to set a time frame to operate. Given all this information, it then presents you with the amount your account will be worth at the end of that time.
In a well-designed system, the addition operations will be tested with unit tests; the interest rate operations will be tested with unit tests; getting proper results within a particular time may be a combination of unit tests and integration tests, and presenting the final result should be covered by an integration test.
The UI test should validate that each input is available to the user; that the proper error messages are displayed to the user; and that some result is calculated and displayed correctly. Attempting to validate in the UI that the mathematics of the calculations themselves are correct would be incredibly wasteful and expensive.
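The division of labor above can be sketched in code. The following is a minimal illustration, not a real 401(k) engine: a hypothetical `future_value` function computes the account balance at the business-logic layer, where plain assertions can verify the math cheaply, instead of driving a browser through the UI to check arithmetic.

```python
def future_value(contribution, annual_rate, years):
    """Balance after contributing a fixed amount each year,
    compounded annually at annual_rate (e.g. 0.05 for 5%)."""
    balance = 0.0
    for _ in range(years):
        balance = (balance + contribution) * (1 + annual_rate)
    return balance

# Unit-level checks: fast, deterministic, no UI involved.
# With a zero interest rate, the result is just the sum of contributions.
assert future_value(1000, 0.0, 10) == 10000
# One year of contributions earns exactly one period of interest.
assert abs(future_value(1000, 0.05, 1) - 1050.0) < 1e-9
```

A UI test for the same feature would only confirm that the input fields exist, that error messages appear for bad input, and that *some* computed result is displayed; the numeric correctness stays at this layer.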
Don't create it if you can't delete it
UI tests are almost never appropriate for write-only data. Besides the hassle of managing the creation of unique input data for each run of the tests, there is also the hassle of managing the buildup of obsolete test data in the test system itself. Far better to do this validation at the unit or integration test layer, where explicit setup() and teardown() methods make this sort of work much more efficient.
There are some exceptions to this rule, and the exceptions are guided by a single principle: keep it simple. For example, if a new test environment is reliably created from scratch for each test run and sufficiently cleaned up after test execution, a UI test for write-only data may be appropriate. Conversely, if the test environment is known to be stable for long periods of time, it may be appropriate to use a UI test to validate messages for error conditions.
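At the unit or integration layer, the creation and cleanup of test data is routine. A rough sketch using Python's `unittest` fixtures, with an in-memory dict standing in for the hypothetical test system, shows the setUp()/tearDown() pattern the paragraph describes:

```python
import unittest

# Hypothetical in-memory stand-in for the system under test.
DATABASE = {}

class WriteOnlyDataTest(unittest.TestCase):
    def setUp(self):
        # Create the unique record this test run needs.
        DATABASE["acct-test-1"] = {"balance": 0}

    def tearDown(self):
        # Remove it afterward, so obsolete test data never accumulates.
        DATABASE.pop("acct-test-1", None)

    def test_deposit_updates_balance(self):
        DATABASE["acct-test-1"]["balance"] += 100
        self.assertEqual(DATABASE["acct-test-1"]["balance"], 100)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(WriteOnlyDataTest)
)
```

After the run, the database is empty again; doing the equivalent cleanup through a UI is exactly the hassle the paragraph warns against.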
Repetition of steps
Having a set of UI tests that all accomplish the same navigation through the UI is not only wasteful but dangerous. A set of tests that navigates the same path for some series of steps before branching off to validate particular features of the application is vulnerable to a single point of failure anywhere along the shared path. If Step 2 of a test fails, then every test fails, rendering the entire suite worthless. I call this a 'tree-like' UI test design, and it is a smell.
Instead, UI tests should start at various points in the application, and each test should validate a unique set of UI elements to the extent possible. I call this a 'web-like' design. Not only is it more efficient and less expensive, because each test is executing a unique set of validations, but it is also robust, because any failure in one part of the application does not render the rest of the test suite useless.
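The web-like design can be sketched with a stub in place of a real browser driver. The `open_page` function below is hypothetical; the point is that each test deep-links to its own starting page rather than replaying a shared navigation sequence:

```python
# Hypothetical stub standing in for a browser driver; in a real suite
# this would be a WebDriver call such as driver.get(url).
def open_page(url):
    # Pretend page model: the last path segment names the page shown.
    return {"page": url.rstrip("/").rsplit("/", 1)[-1]}

def test_settings_page_loads_directly():
    page = open_page("https://example.test/app/settings")
    assert page["page"] == "settings"

def test_reports_page_loads_directly():
    page = open_page("https://example.test/app/reports")
    assert page["page"] == "reports"

# Each test stands alone: a broken /settings route cannot fail the
# reports test, because the two tests share no navigation steps.
test_settings_page_loads_directly()
test_reports_page_loads_directly()
```

Contrast this with a tree-like suite in which both tests would first log in and click through the same menus, so one broken menu item fails everything downstream.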
Any UI test that permanently alters the state of the application should be considered suspect. Examples of state change include dates, user settings or switches that control behavior. Tests that change state permanently or unreliably are almost always better handled at the unit or integration test level. Again, the rule of thumb is: Can this test be maintained? If not, it may be better removed from the UI test suite and tested elsewhere. If one test changes the state of the application and causes subsequent tests to fail, it may not be worth writing the test. If the test changes the state of the application and changes it back, it might be worth writing; but not if that particular test fails often, thus rendering subsequent tests worthless.
The best approach to UI tests for those controls that alter the state of the system is to validate that such controls exist and that they may be manipulated by the user. Do not actually have the test hit "go" and alter the state of the system unless the test can continue until the system is restored to its original state.
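The restore-on-exit discipline is a try/finally pattern. The sketch below uses a hypothetical `settings` store and `set_maintenance_mode` helper (illustrative names, not a real API): if a test must flip a state-changing switch at all, the original state is restored even when the assertion in the middle fails.

```python
# Hypothetical application settings; names are illustrative only.
settings = {"maintenance_mode": False}

def set_maintenance_mode(enabled):
    settings["maintenance_mode"] = enabled

def test_maintenance_toggle_restores_state():
    original = settings["maintenance_mode"]
    try:
        set_maintenance_mode(True)
        assert settings["maintenance_mode"] is True
    finally:
        # Always restore, even if the assertion above fails, so that
        # subsequent tests see the system in its original state.
        set_maintenance_mode(original)

test_maintenance_toggle_restores_state()
# The system is back in its original state for the next test.
assert settings["maintenance_mode"] is False
```

If the test cannot guarantee this restoration, the safer UI test is the one the paragraph recommends: verify the control exists and is operable, and leave "go" unpressed.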
Mike Cohn and Jason Huggins independently described a good test system in terms of a pyramid: the bottom layer is unit tests, of which there are very many; the middle layer is integration tests, of which there are quite a lot, but fewer than unit tests; the top layer is UI tests, of which there are fewer still.
More important than how many tests exist, though, is the idea of a separation of functions. UI tests should be shallow and, to the extent possible, validate only that the user may see and manipulate the proper messages, elements and controls. Validating the deeper functions of the system is the work of the unit tests and the integration tests.
About the author: Chris McMahon is a software tester and former professional bass player. His background in software testing is both deep and wide, having tested systems from mainframes to web apps, from the deepest telecom layers and life-critical software to the frothiest eye candy. Chris has been part of the greater public software testing community since about 2004, both writing about the industry and contributing to open source projects like Watir, Selenium, and FreeBSD. His recent work has been to start the process of prying software development from the cold, dead hands of manufacturing and engineering into the warm light of artistic performance. A dedicated agile telecommuter on distributed teams, Chris lives deep in the remote Four Corners area of the U.S. Luckily, he has email: firstname.lastname@example.org.