
Automate tests with attention to UI, rapid feedback and more

If your app-dev group just can't seem to get automated tests right, you're not alone. Here's how to implement an effective test automation strategy, one piece at a time.

When organizations attempt to automate software tests, they typically find a suitable tool and create regression tests. This approach returns strong initial results, as the team can run all their routines in minutes.

The real challenge comes later, when app dev teams try to figure out how to get multiple people to update tests, how to support different browsers and operating systems, and how to enable code reuse and extensibility. These frustrations can cause an organization to ditch the tool it selected to automate tests and start the whole doomed cycle anew.

These organizations lack the technical components that make up a successful test automation strategy. You don't need all the pieces covered here to start to automate tests; however, successful teams generally cover all these bases. It's all right to get things in motion with just a few of these suggestions, but don't stop there. Over time, consciously iterate toward this full scope of automated testing practices for web- or mobile-based software.

Build/deploy environment

Many teams still deploy to a single server when they automate tests. Every time programmers finish a feature, they push it to test on that server, which can cause disruption and delays. Even worse, the work might need a fresh database, which requires a database load after every test run.

Scriptable environments solve that problem, as they can be created with a command-line interface (CLI) call. When provided with a branch or commit, the build software creates a new web server with a name like testsystem02.companyname.com. In many cases, this is the best way to automate tests; it removes system downtime and delay, and testers can work on different versions of features at the same time. One team I worked with spent 90% of its time just setting up test data -- the capability to simply import users from a text file via CLI call could reduce that time.
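
A data loader driven entirely from the CLI might look like the Python sketch below. It is a minimal sketch, not a prescription: the one-user-per-line file format and the /api/users endpoint are hypothetical stand-ins for whatever interface your application actually exposes.

# load_users.py -- sketch of a CLI-driven test data loader (file format and endpoint are hypothetical)
import argparse
import requests

def main():
    parser = argparse.ArgumentParser(description="Import test users from a text file")
    parser.add_argument("user_file", help="text file with one 'username,password' pair per line")
    parser.add_argument("--base-url", required=True,
                        help="target environment, e.g. https://testsystem02.companyname.com")
    args = parser.parse_args()

    with open(args.user_file) as handle:
        for line in handle:
            username, password = line.strip().split(",")
            # Assumes the application exposes a user-creation API; adjust to your own endpoint.
            response = requests.post(args.base_url + "/api/users",
                                     json={"username": username, "password": password})
            response.raise_for_status()

if __name__ == "__main__":
    main()

Because it runs from the command line, the same script serves a tester setting up data by hand and a build pipeline preparing a fresh environment.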

Object recognition layer

After the team automates server setup, it can automate inspection of the application, which involves several components of its own.

For code to reach GUI objects in an organization's application, there is typically a locator, which might be the text of a link or the location of the object on the screen, such as "the second table row after the third division." Ideally, the locator is a predictable, named ID. The object recognition layer takes these descriptors and assigns a variable name to each one. For example, suppose testers want to reach an employee ID link that follows the Lastname field. If that element has no predictable ID, they can fall back on a workaround like this:

 //input[@id='Lastname']/following-sibling::a[1]

The actual placement of that field might change over time, which means we need to change that snippet of code. Instead of sprinkling "//input[@id='Lastname']/following-sibling::a[1]" throughout the codebase, we can use a variable $employeeLink instead. Then, when the tests all fail because the employee link is now second after Lastname, we can make the change in just one place: the object recognition layer.
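
With Selenium and Python, for instance, the object recognition layer can be as small as a module of named locators. This is one possible shape, with illustrative names:

# locators.py -- the object recognition layer: one place to change when the page structure moves
from selenium.webdriver.common.by import By

LASTNAME_FIELD = (By.ID, "Lastname")
EMPLOYEE_LINK = (By.XPATH, "//input[@id='Lastname']/following-sibling::a[1]")

# Flows and tests refer only to the names, e.g. driver.find_element(*EMPLOYEE_LINK),
# so a changed locator gets updated in this file alone.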

Different tools implement this kind of standardization in different ways. Some tools are visual, so the user selects the object, puts it into the recognition layer, and names it. Then, when the test breaks because the code changed, they can rerecord that one object into the layer, and all the tests that use that layer will work.

While it's easy to get started in test tooling without an object layer, it is much harder to maintain code without it.

Business logic or page-object layer

After you identify the objects, next identify the major business flows that run repeatedly. For an organization, login and search are flows that run constantly, while register and upload_image run only under certain conditions. Store these logical operations in another layer. Then, if register() suddenly requires a middle name, you can make that change in one place -- just add a new, optional parameter at the end with a default value. That change will let the existing tests pass again.
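
In Python, that layer might be a module of flow functions like the sketch below. The field IDs are illustrative, and the optional middle_name parameter shows the change just described:

# flows.py -- business logic layer built on the object recognition layer (field IDs are illustrative)
from selenium.webdriver.common.by import By

def login(driver, username="default_user", password="correctpa$$word"):
    driver.find_element(By.ID, "username").send_keys(username)
    driver.find_element(By.ID, "password").send_keys(password)
    driver.find_element(By.ID, "login_button").click()

def register(driver, first_name, last_name, middle_name=""):
    # The middle name arrived later, so it is an optional parameter with a default;
    # existing tests that never pass it continue to run green.
    driver.find_element(By.ID, "first_name").send_keys(first_name)
    driver.find_element(By.ID, "last_name").send_keys(last_name)
    if middle_name:
        driver.find_element(By.ID, "middle_name").send_keys(middle_name)
    driver.find_element(By.ID, "register_button").click()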

This layer provides the building blocks for the actual tests that come next. I recommend that the near-English tests have no "for," "while" or "do" loops. Instead, push those down into the page-object or business logic layer.

Near-English tests

Once the business logic layer is in place, a test might look like this:

login(); #Assumes default, working userid and password

Assert(text_present($login_welcome));

Assert(text_present($header));

Assert(text_present($footer));

Assert(text_not_present($unlogged_welcome));

logout();

Assert(text_present($unlogged_welcome));

 

login($default_user,"wrongpassword");

Assert(text_present($login_error));

Assert(text_present($reset_password_link));

Assert(text_present($unlogged_welcome));

Assert(text_not_present($login_welcome));

A non-tester might be able to read this example, though behavior-driven test tools like Cucumber make the tests even more accessible to business staff, like this:

Given I am logged out

When I log in with username mheusser and password correctpa$$word

Then I see "hello, mheusser"

And I see the logged_in_header

And I see the logged_in_footer

 

Given I am logged out

When I log in with username mheusser and password wrongpa$$word

Then I do not see "hello, mheusser"

And I do not see the logged_in_header

And I do not see the logged_in_footer

And I see "There were errors with your request:"

And I see "The password provided is not correct."

With the right tools and readable tests, you can create a continuous, whole-team exercise in which customers participate in the creation of high-level business rules.

Test runner, results

The tool itself is the runner, which takes a given set of near-English tests and runs them in a given test environment. That environment is probably the base webpage, such as testserver01.companyname.com. A good runner also supports different platforms, browsers and mobile devices, from the CLI.

You should be able to kick off the runner from the CLI; it's too cumbersome to require a human to run an application and click a button all the time.
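
With pytest as the runner, for example, a fixture can read the target browser and environment from environment variables so that any person or scheduler can launch the suite with one command. TEST_BROWSER and TEST_BASE_URL below are hypothetical variable names:

# conftest.py -- sketch: pick the browser and target environment from the CLI environment
import os
import pytest
from selenium import webdriver

@pytest.fixture
def driver():
    browser = os.environ.get("TEST_BROWSER", "chrome")
    driver = webdriver.Firefox() if browser == "firefox" else webdriver.Chrome()
    driver.get(os.environ.get("TEST_BASE_URL", "https://testserver01.companyname.com"))
    yield driver
    driver.quit()

A whole run then becomes a single command, such as TEST_BROWSER=firefox pytest tests/, which a human or a CI job can issue equally well.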

The test result output also matters. As part of a long-term test automation strategy, most companies set up some sort of publishing mechanism through which interested parties can see the results of the last few builds, with red/yellow/green status indicators. Consider using the continuous integration (CI) system to publish these results after tests finish.
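
For example, if the runner emits JUnit-style XML (pytest does this with --junitxml=results.xml), a short script can reduce the file to a red/yellow/green status line for publishing. This is one possible sketch:

# publish_status.py -- sketch: turn a JUnit XML results file into a red/yellow/green status
import sys
import xml.etree.ElementTree as ET

root = ET.parse(sys.argv[1]).getroot()
suite = root.find("testsuite") if root.tag == "testsuites" else root

failures = int(suite.get("failures", 0)) + int(suite.get("errors", 0))
skipped = int(suite.get("skipped", 0))

if failures:
    print("RED: %d failing tests" % failures)
elif skipped:
    print("YELLOW: all tests passing, %d skipped" % skipped)
else:
    print("GREEN: all tests passing")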

Incorporate CI

Anyone can create an environment from the CLI -- including a CI system. If tests run from the CLI, the CI system can run them. If results come back in an easily interpretable text form, then you can run some or all of the tests after every build, which tightens the developers' CI feedback loop.

In some cases, you can find bugs within minutes of their injection, instead of waiting a week for a Scrum team or potentially weeks or months for a Waterfall team. If the team wants to automate tests through CI, the tests need to run quickly.
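
One common way to keep the CI run fast is to mark a small, critical subset and run only those tests after every build. pytest markers support this; the example below assumes the flows helper and driver fixture sketched earlier:

# test_login.py -- mark the fast, critical checks so CI can run just that subset
import pytest
from flows import login  # business logic layer sketched above

@pytest.mark.smoke
def test_login_shows_welcome(driver):
    login(driver, username="mheusser", password="correctpa$$word")
    assert "hello, mheusser" in driver.page_source

# CI then runs only the marked tests:  pytest -m smoke
# (register the "smoke" marker in pytest.ini to avoid warnings)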

Version control and change process

Once the tests all pass and lock in, they essentially become a method of change detection. As programmers change the codebase, tests will break. If a team stores tests in the version control system, right next to the code, then programmers can run just the relevant tests before they commit the software, which makes sure the tests continue to run green.
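
One way to encourage that habit is a pre-commit hook that refuses the commit if the relevant tests fail. This sketch assumes the UI tests live under tests/ui in the same repository and use the smoke marker from above:

#!/usr/bin/env python3
# .git/hooks/pre-commit -- sketch: run the relevant tests before allowing a commit
import subprocess
import sys

result = subprocess.run(["pytest", "-m", "smoke", "tests/ui"])
if result.returncode != 0:
    print("Smoke tests failed; commit aborted.")
sys.exit(result.returncode)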

Code quality, risk assessments

GUI-facing test automation is slow and brittle. If GUI code is buggy, focus on improving code quality before the software gets to GUI regression testing. Teach developers to write clean code, and run good unit- and API-level tests to catch problems early.
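
An API-level check like the sketch below runs in milliseconds and catches data problems long before the GUI regression suite would; the /api/employees endpoint and its fields are hypothetical:

# test_employees_api.py -- sketch of an API-level test (endpoint and fields are hypothetical)
import requests

BASE_URL = "https://testsystem02.companyname.com"

def test_employee_lookup_returns_an_id():
    response = requests.get(BASE_URL + "/api/employees", params={"lastname": "Heusser"})
    assert response.status_code == 200
    assert response.json()[0]["employee_id"]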

Keep in mind that test tools only look for what you specify. Generally, traditional GUI test automation is effective when it runs through exact scenarios, but not so helpful at recognizing that a page simply looks wrong. Most test writers fill in fields, click buttons and verify that totals at the bottom of a page have the right output. Make sure these people also check for user experience factors, for example, that buttons don't overlap and the font size is correct.
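
Some of those layout checks can be automated as well. The Selenium sketch below verifies that two buttons do not overlap and that the body font size matches the design; the element IDs and the expected size are illustrative assumptions, not rules:

# test_layout.py -- sketch: basic user-experience checks (element IDs and expected font size are assumptions)
from selenium.webdriver.common.by import By

def rectangles_overlap(a, b):
    return not (a["x"] + a["width"] <= b["x"] or b["x"] + b["width"] <= a["x"] or
                a["y"] + a["height"] <= b["y"] or b["y"] + b["height"] <= a["y"])

def test_buttons_do_not_overlap(driver):
    save = driver.find_element(By.ID, "save_button").rect
    cancel = driver.find_element(By.ID, "cancel_button").rect
    assert not rectangles_overlap(save, cancel)

def test_body_font_size(driver):
    body = driver.find_element(By.TAG_NAME, "body")
    assert body.value_of_css_property("font-size") == "16px"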

Next Steps

How to build a test automation framework

Learn the value of exploratory testing vs. scripted testing

Find the right automation test cases
