
How to perform test automation maintenance

Automated tests run on their own, but they need a little help to stay in shape. Proactively address test automation maintenance with initiatives in these two key areas.

There's a popular but misguided notion in software quality about decision rules, functions that map observations to actions. The myth goes like this: After you make a decision rule, give it expected behavior and results, and then put it in code, that test is fully paid for; the test costs nothing to run going forward.

Sadly, that isn't true.
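
To make the myth concrete, here is a minimal sketch of a decision rule coded as an automated check, written for pytest. The shipping_fee function and its $50 free-shipping threshold are hypothetical examples, not taken from any particular product.

```python
# A decision rule maps observations to actions. Coded as a check, it
# compares actual behavior against an expected result on every run.

def shipping_fee(order_total: float) -> float:
    """Hypothetical decision rule: orders of $50 or more ship free."""
    return 0.0 if order_total >= 50.0 else 5.99

def test_shipping_fee_waived_at_threshold():
    # The myth says this check is now "free" forever. In practice,
    # any change to the rule breaks it, and a person must decide
    # whether the code or the check is wrong.
    assert shipping_fee(50.0) == 0.0
    assert shipping_fee(49.99) == 5.99
```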

Even the most automated set-it-and-forget-it tests generate maintenance costs, in a couple of ways. But whose responsibility is it to fix up tests and the tools to run them? Let's look at test automation maintenance approaches for organizations to deal with this ongoing challenge.

Types of test maintenance

There are two kinds of test automation maintenance work: fixing previously passing tests that start to fail, and adjusting the tooling that runs them.

As programmers update the software over time, automated tests will fail, simply because the code under test changed. When a test fails, someone must determine whether the failure stems from an intentional change in behavior, an unexpected side effect or a real software defect. Then the team knows whether to fix the code or update the test -- or maybe both.
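
Continuing the hypothetical shipping_fee example from above, here is what that triage might look like when the business intentionally moves the threshold from $50 to $75:

```python
def shipping_fee(order_total: float) -> float:
    """The intentional behavior change that broke the old check."""
    return 0.0 if order_total >= 75.0 else 5.99

def test_shipping_fee_waived_at_new_threshold():
    # Triage outcome 1: the change was intended, so the test is
    # updated to encode the new expected behavior.
    assert shipping_fee(75.0) == 0.0
    assert shipping_fee(74.99) == 5.99
    # Triage outcome 2 (not shown): the failure revealed a side
    # effect or defect, so the test stays and the code gets fixed.
```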

In addition to the effort required to fix failing tests, there is usually an ongoing support cost for automated test infrastructure. A build master typically takes on this responsibility: set up the CI server, maintain it and update all the open source packages that comprise the test system. If automated testing tools run in production -- for example, to simulate a login and search, then measure response times -- those tools also need support. When the organization changes APIs or reporting tools, the build master might need to rip out, replace or reintegrate some elements of the test tool so that it continues to function as part of a cohesive tool set. The build master must also upgrade the test tool infrastructure to support new packages or device OSes as needed. In some cases, the cost to add and train new developers on automated testing tools falls under this category of maintenance.
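
As a small illustration of that infrastructure work, here is a hedged sketch of one routine chore: checking that the packages the test system depends on still match their pinned versions before a run. The package names and version pins are examples only.

```python
from importlib.metadata import PackageNotFoundError, version

# Example pins; a real test system would track its own tool versions.
PINNED = {
    "pytest": "8.0.0",
    "selenium": "4.18.1",
}

def check_test_infrastructure() -> list[str]:
    """Report any drift between installed and pinned test tooling."""
    problems = []
    for package, expected in PINNED.items():
        try:
            installed = version(package)
        except PackageNotFoundError:
            problems.append(f"{package} is not installed")
            continue
        if installed != expected:
            problems.append(f"{package}: pinned {expected}, found {installed}")
    return problems

if __name__ == "__main__":
    for problem in check_test_infrastructure():
        print("infrastructure drift:", problem)
```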

How to support test automation

In the 2000s, I worked with organizations to create test automation teams. Almost by definition, these teams were hopelessly behind. Every day, errors would pile up, caused by changes that programmers introduced the day or week before -- and most of them were false alarms rather than real defects.

As testing consultants James Bach and Michael Bolton, of Satisfice and DevelopSense, respectively, pointed out, when tests are automated, they lose the investigative factor of human thought and simply become checks. Thus, the more automated checks there are, the more overwhelmed test automation teams can become as they attempt to keep the scripts and tooling up to date. These external teams shielded developers from the problem of test script maintenance -- but they didn't eliminate it.

Agile and Scrum take test work away from the external teams seen in Waterfall and other siloed methodologies and give it to the cross-functional development team, with a new Agile tester role added as an embedded team member. This shift has two advantages. First, Agile testers enable dev teams to experience the full software development feedback cycle -- build, test, fix, retest -- without waiting for input from an external team. Second, it shifts the responsibility for test automation maintenance back onto the development team, because each check that fails becomes the team's responsibility.

Those two benefits of Agile testing ultimately result in a tighter feedback cycle than separated development and test teams can achieve. Because the Agile tester generally works in the codebase and stays aware of the current stories and bug fixes, they know what needs an update. They can ask programmers why a change affected a test. The information from tests, including failures, is therefore immediately valuable in Agile and easy to act on. High-functioning Agile teams often act on test run information the same day they receive it, without the need to add it to a queue or create and triage a ticket.

Where programmers can help

While Agile testers typically handle customer-facing tests, developers have QA responsibilities of their own. Programmers generally run relevant unit tests before they push code to the master branch. Those tests should pass; if they don't, call on the programmer who introduced the change to resolve the failures. Most scheduled test runner tools provide blame information, which helps identify which developer broke a unit test. This setup also provides a second benefit, which you can think of as measuring after the fact: The programmer can have confidence the change was correct, because a test written before the change failed until that change was applied.
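
Here is a small sketch of that measuring-after-the-fact idea. The parse_price function and its bug are hypothetical: the test was written against the old, broken code and failed, and it passes with the fix, which is itself evidence the fix works.

```python
def parse_price(text: str) -> float:
    """Fixed version: strips the currency symbol before converting."""
    return float(text.lstrip("$"))

def test_parse_price_handles_currency_symbol():
    # Written before the fix, this test failed with a ValueError,
    # reproducing the defect. Passing now, it measures the fix.
    assert parse_price("$19.99") == 19.99
```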

Better yet, the programmers can resolve failures themselves, ideally before they check in code. While it can be a hard sell to convince a team that a user story isn't done until all of the acceptance tests pass, it's worth the cultural overhaul. This modus operandi puts the cost of failure on the person who introduced it, provides that measuring-after-the-fact benefit and enables the build to run clean. Additionally, this approach surfaces the test result immediately after a specific change, when the information has the most value.
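
One way to enforce that habit is a pre-push hook that refuses the push while tests fail. The sketch below assumes a pytest-based acceptance suite under tests/acceptance; substitute whatever command runs your own tests.

```python
#!/usr/bin/env python3
# Saved as .git/hooks/pre-push (and made executable), this refuses
# the push when the acceptance suite fails, keeping the cost of a
# failure with the person who introduced it.
import subprocess
import sys

result = subprocess.run(["python", "-m", "pytest", "tests/acceptance"])
if result.returncode != 0:
    print("Acceptance tests failed; fix them before pushing.")
    sys.exit(1)
```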

Development teams that can't commit to those process changes could instead create a rotating role for test automation maintenance. Each sprint, a different team member maintains the automation, reports failures and either fixes up the test themselves or assists other programmers with the task. Over time, all programmers learn to update and run the test system.

Don't forget about test tool infrastructure. The rotating test automation maintenance person can also help with infrastructure support work, whether that task is to upgrade servers to the current version of the tools or to configure a test tool to work with the newest phone simulators. The product owner might also define these ongoing maintenance upgrades as features that matter to the customer. Treated essentially as user stories, this maintenance work gets higher priority within the customer-focused Agile team.

It pays to have a plan

The longer the time between a test failure and its remediation, the higher the cost of the fix and the lower the value gleaned from that failure. Development teams, ultimately, must flesh out a plan for who handles which test automation maintenance work. Otherwise, the work might fall on someone unqualified, or overburden a conscientious worker. A deliberate approach to maintenance is the best way to keep it from contributing to burnout and turnover.

If you lose the one person on the team who understands and supports both the testing tool software and the infrastructure that makes it useful, you'll find yourself up the creek without a paddle. Keep up with the maintenance demands of automated test tools -- or pay the price later.

Next Steps

4 ways to minimize test automation maintenance

How to plot out a test automation strategy

How to build a test automation framework
