Not long ago someone asked me, "If you were pressed for time and could only automate one short test, what would you do?" In response, I said, "I would automate something that a user would find useful." This led to a discussion of what we came to call "useful paths." In this tip, we'll be looking at ways to efficiently design your user interface tests so that you get the maximum return on your testing time and dollars.
The nuts and bolts
There is a significant amount of literature describing how to go about the raw mechanics of user interface (UI) test automation. The Page Object pattern is a popular approach, as is the GUI Map pattern, and there are many variations on such approaches that are well documented on the Web.
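As a sketch of the kind of abstraction involved, here is a minimal Page Object in Python. The FakeDriver, the page class, and the element ids are all hypothetical stand-ins so the example can run on its own; with Selenium, the driver would be a real WebDriver and the locators real selectors.

```python
# Minimal Page Object sketch. FakeDriver stands in for a real browser
# driver (e.g. Selenium's WebDriver); pages and element ids are hypothetical.

class FakeDriver:
    """In-memory stand-in for a browser driver, keyed by element id."""
    def __init__(self, elements):
        self.elements = elements

    def find(self, element_id):
        return self.elements[element_id]


class LoginPage:
    """Page Object: exposes the login page's elements and actions,
    hiding the locators from the tests that use it."""
    USERNAME = "username_field"
    PASSWORD = "password_field"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, username, password):
        self.driver.find(self.USERNAME)["value"] = username
        self.driver.find(self.PASSWORD)["value"] = password
        return self  # a real implementation would return the next page object


driver = FakeDriver({"username_field": {"value": ""},
                     "password_field": {"value": ""}})
LoginPage(driver).log_in("chris", "secret")
print(driver.find("username_field")["value"])  # chris
```

The point of the pattern is that a test names an action ("log in"), not a locator; when the page changes, only the Page Object changes.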
There is less information available about how to design good UI tests once one has a reasonable approach to the mechanics in place.
A simple approach
Abstractions like the Page Object and the GUI Map make it clear exactly what elements exist on the page, and allow the UI test designer access to each discrete element. There is a clear temptation to design a test for each element on the page and then be done.
And designing a UI test suite element-by-element is a perfectly valid approach. It will likely yield a high degree of feature coverage. But thinking about UI test design at such a low level of granularity has two problems.
First, such a granular, low-level approach can entail more work than is strictly necessary. UI testing is, by its nature, expensive, and a suite that, for instance, exercised every individual entry in every select box would be unnecessarily so. Far better in almost every case is to validate that the correct choices exist in the select box itself, then have the test choose just one and move on.
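The select-box advice can be sketched as follows. The get_select_options function and the country list are hypothetical stand-ins for reading a rendered select element from the page (with Selenium, you would read the options from a Select wrapper instead).

```python
# Sketch: validate a select box in one assertion instead of exercising
# every option. The option source and list are hypothetical.

EXPECTED_COUNTRIES = ["Canada", "Mexico", "United States"]

def get_select_options():
    # Stand-in for reading the rendered <select> element from the page.
    return ["Canada", "Mexico", "United States"]

def test_country_select():
    options = get_select_options()
    # One assertion covers every entry: the right choices exist, in order.
    assert options == EXPECTED_COUNTRIES
    # Then pick a single option and continue down the path.
    chosen = options[0]
    return chosen

print(test_country_select())  # Canada
```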
Second, such a granular approach to automated test design could easily miss issues that occur because of the cumulative interactions among a variety of elements in the application.
Chris McMahon, SSQ Contributor
A better approach: The useful path
The idea behind this sort of automated UI test design is to analyze both the application and the needs of the users of the application, and to create automated tests that exercise paths through the application that would have some sort of value to a user. Software applications exist to automate some aspect of business function for the users of the software. Automated UI tests should cover those paths that are valuable to users.
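One way to express a useful path is as an ordered sequence of steps, each of which a user would recognize as progress toward a goal. This is a sketch only: the step functions and the in-memory state dictionary are hypothetical stand-ins for a real application and its pages.

```python
# Sketch of a "useful path" test: one end-to-end flow a user would value,
# expressed as an ordered sequence of steps. Steps and state are hypothetical.

def run_path(steps, state):
    """Run each step in order; the path fails at the first broken step."""
    for step in steps:
        step(state)
    return state

def search(state):
    state["results"] = [p for p in state["catalog"] if "tea" in p]

def add_to_cart(state):
    state["cart"] = [state["results"][0]]

def check_out(state):
    state["order_placed"] = bool(state["cart"])

state = run_path([search, add_to_cart, check_out],
                 {"catalog": ["green tea", "coffee"]})
print(state["order_placed"])  # True
```

A single test like this covers search, the cart, and checkout once each, in the order a real user would touch them.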
When taking a "useful path" approach to UI test automation, though, it is critical that the designed paths be as distinct from one another as possible. If one test touches elements A, B, C, and D and another touches elements W, X, Y, and Z, then the failure of any one of those elements causes only a single failing test in the suite. If one test touches A, B, C, and D and another touches W, X, C, and Z, then a failure of element C fails both tests, but the failure is still fairly easy to diagnose. But if one test touches A, B, C, and D and another touches A, B, C, D, and E, then any failure of A, B, C, or D fails both tests, and analyzing the nature of the failure becomes more expensive, because the tests touch so many of the same elements.
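The A-B-C-D arithmetic above can be checked mechanically. This sketch treats each designed path as a set of element names (the names here are hypothetical; in practice they might be Page Object locator names) and reports the overlap between two paths; the larger the overlap, the more expensive a shared failure is to diagnose.

```python
# Sketch: measuring how much two designed paths overlap.
# Element names are hypothetical placeholders.

def overlap(path_a, path_b):
    """Elements touched by both paths. A large overlap means one broken
    element fails many tests, and diagnosis gets expensive."""
    return set(path_a) & set(path_b)

disjoint = overlap(["A", "B", "C", "D"], ["W", "X", "Y", "Z"])
small = overlap(["A", "B", "C", "D"], ["W", "X", "C", "Z"])
large = overlap(["A", "B", "C", "D"], ["A", "B", "C", "D", "E"])

print(sorted(disjoint), sorted(small), sorted(large))
# [] ['C'] ['A', 'B', 'C', 'D']
```

Running a check like this over a whole suite is one way to spot tree-shaped designs before they become expensive.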
Well-designed paths form a web-like structure, with paths through the application crossing other paths only rarely; each element on each page is exercised as few times as possible. Poorly designed paths form a tree-like structure, in which many different tests exercise the same UI elements repeatedly before branching off to cover other features.
Implications for user experience
I have designed several UI automation frameworks and I have learned these patterns and anti-patterns in the course of designing test suites with tens of thousands of test steps. After a time, I began to notice an unusual thing.
Again, UI testing is expensive, so it pays to design efficient paths through the application. If in designing such paths for automated UI tests I find myself skipping around the application a lot, having to backtrack to previous states, or being forced into tree-like test designs, I begin to look for user experience (UX) problems in the application itself.
For example, in one application I found myself designing tests to edit the same data on two different pages in the application. Over time, the semantics underlying the "view" page and the "edit" page had become garbled, so users were being forced to navigate between two different pages in order to accomplish certain kinds of updates to the same data.
In another instance, I found myself creating tests that had to move sets of data forward and backward through an approval process in the business workflow itself. After struggling with the test automation, it became clear to me that tagging or other metadata attached to the records would be a far more efficient way to represent this particular business process.
In another instance, I was forced into a tree-like UI test design by the design of the application itself, and when the architecture of the underlying application underwent a radical change, it was very difficult to sort the actual defects caused by the code changes from the faults caused by the poor test design.
Elements of good UI test design
Before beginning to design the UI tests themselves, make sure you have a reasonable abstraction in place to access the pages themselves, like a Page Object model or GUI Maps.
In designing the tests themselves, take care to exercise each page element only to the extent necessary to validate the useful path itself. Exercising individual elements many times is inefficient and expensive, and yields a tree-like design for the whole test suite that is fragile and difficult to maintain.
With these things in place, your UI test design activity will begin to inform your UX testing work.