
Can we fully automate our software testing?

Your boss has jumped on the bandwagon to automate software testing. Don't despair. Software testing expert Matt Heusser walks through what to say -- and do -- to keep everyone happy.

Most of us are familiar with the term automated software testing. It sounds great, but clearly, it's not as simple as pressing a button.

What does it really mean to adopt an automated software testing strategy? If manual software testing is removed from the equation, the QA team faces dramatic changes -- or, perhaps, extinction. But fully automated QA doesn't guarantee quality; it might even serve as a detriment.

So, is it possible to fully automate software testing? And, if so, is it a good idea? Let's explore that question with a discussion of the value of test automation.

How automated testing creates value

Most test automation runs an application through an algorithm: a start place, a change and an expected result. The feature is not done until the checks pass, so the first time they pass, the automation is complete -- but it hasn't delivered value yet. The test-fix-retest loop helps complete features faster and provides clear instruction to guide correct behavior; however, at the moment the automated checks are created, they don't actually find problems. Their value comes later: once everything passes the initial run, the checks can detect breakage when a subsequent change makes the expected result incorrect.
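The loop above -- a start place, a change and an expected result -- can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical invoice module; the function names are invented for the example, not a real API:

```python
# A minimal automated check: set up a start state, apply a change,
# then compare the result against an expectation.

def create_invoice(amount, paid=0):
    # Start place: a fresh invoice record.
    return {"amount": amount, "paid": paid}

def apply_payment(invoice, payment):
    # Change: record a payment against the invoice.
    invoice["paid"] += payment
    return invoice

def check_balance():
    # Expected result: the outstanding balance reflects the payment.
    invoice = apply_payment(create_invoice(100), 40)
    assert invoice["amount"] - invoice["paid"] == 60
    return True
```

The first time `check_balance` passes, it proves nothing was broken at that moment; its value accrues only on later runs, after the code underneath changes.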

At this point, we run into a maxim of test tooling: After a single change in the software, test automation essentially becomes change detection.

Bear in mind: The programmer's job is to create change, and that, in turn, creates a maintenance burden. When a test fails, someone must debug it, confirm that the software actually changed as intended and then update the test -- say, to add the now-required phone number field when creating a user.
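The phone-number scenario makes the change-detection maxim concrete. In this hypothetical sketch, `create_user` originally accepted only a name and an email; once the programmer makes the phone number required, the old check fails until someone maintains it:

```python
# Hypothetical illustration of test automation as change detection.

def create_user(name, email, phone):
    # The changed signature: phone is now a required field.
    if not phone:
        raise ValueError("phone number is required")
    return {"name": name, "email": email, "phone": phone}

def old_check():
    # The pre-change test supplies no phone number.
    try:
        create_user("Ada", "ada@example.com", phone="")
        return "pass"
    except ValueError:
        return "fail"  # change detected; the test now needs maintenance

def updated_check():
    # The maintained test supplies the now-required field.
    user = create_user("Ada", "ada@example.com", phone="555-0100")
    return "pass" if user["phone"] else "fail"
```

The failure in `old_check` is not a bug in the software; it is the tooling detecting an intentional change, and a human must decide which side to fix.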

With this basic understanding, you see how automated testing can benefit your organization. Then, the question becomes: How much should you try to automate software testing?

Determine how much automation is enough

Let's say you're doing a software demo for a customer or senior executive. The product isn't in production yet; you're just showing what you've done to get feedback for the next iteration. The vice president of finance asks what happens if you create an invoice that is past due the day you create it. It's a good question and, essentially, a test idea -- the kind of thing no one thought of before. If the software works one way, it's fine; if not, it's a new feature request, not really a bug.

The person at the keyboard tries to answer the question. Do you tell him to stop -- that you need to create and run an automated test before you can answer that question? I certainly hope not.

There are plenty of test ideas like this, things you think of in the moment to explore, especially when testing a new feature that is part of an existing system. Most ideas you want to try just once. Automating these tests into code that runs all the time is wasteful and expensive. Your boss certainly doesn't want every little idea institutionalized. Moreover, does your boss want to automate test design -- the development of test ideas?

There is no magical box into which you can feed requirements as Word documents and pop out test conditions and expected results. When most people say test automation, they typically mean automated test execution and evaluation, plus perhaps setup. They want to click a button, run all the preexisting checks and get results. A fully automated software testing strategy implies that a thumbs-up is sufficient to move to production without further research and analysis.

In reality, that is 100% regression test automation -- you exclude performance, security, and new platform or browser support and just say, "Once any change has been tested in isolation, it can roll to production after the tooling passes." A few of the companies I have worked with have achieved this standard.

Moreover, it still leaves us with the test maintenance problem.

3 ways to perform test maintenance

There are two popular approaches to make test maintenance more efficient. The first is to write thinly sliced features that run quickly and are easy to debug, sometimes called DOM-to-database tests. Another approach is to isolate the code into components that deploy separately and simply do not have GUI automated checks, focusing automation efforts "under the hood."

A third, newer approach to maintenance is to use machine learning and predictive intelligence to figure out if the software is actually broken and then self-heal. Sometimes, a UI change doesn't impact features at all -- it only changes the elements' position on the screen, causing the locators to fail. In this case, the software can use a history of where elements are stored to essentially guess about the location of the submit button and recheck. If the software passes under these conditions, the AI can adjust the check to self-heal. Some companies have tried this approach with moderate success, reducing the test maintenance burden without increasing their false-pass rates.
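The self-healing idea can be sketched without any real UI framework. In this simplified model, a page is a dictionary of element locators, and the tool keeps a history of locators where the submit button has been seen before; all names here are assumptions for illustration:

```python
# A sketch of self-healing element lookup: try the recorded locator,
# then fall back to previously observed locators before declaring a break.

def find_element(page, locator, history):
    """Return the (possibly healed) locator and its position, or raise
    if no known locator matches -- likely a genuine break."""
    if locator in page:
        return locator, page[locator]
    for old_locator in history:      # guess from where the element was before
        if old_locator in page:
            return old_locator, page[old_locator]  # heal the check
    raise LookupError("element not found; likely a real break")

# The button moved from '#submit' to '#submit-btn'; history rescues the check.
page = {"#submit-btn": (120, 300)}
history = ["#submit-btn", "#send"]
healed, position = find_element(page, "#submit", history)
```

A real tool would weight candidates by visual similarity and past reliability rather than taking the first match, but the trade-off is the same: every heal risks converting a true failure into a false pass.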

Overall, my advice to organizations that question the viability of a 100% automation policy is simple: Take a step back, and breathe. Ask reasonable questions. Don't be a know-it-all, don't be a doormat, don't enable and don't overly obstruct. Work with the boss to define terms, focus on end results and come up with the means to achieve those results -- whatever level of automated or manual testing it requires.

Next Steps

Devising a test automation strategy: Getting started

Seven ways to know when to automate testing


Join the conversation

7 comments


How have you handled a push for a fully automated software testing strategy?
A good walkthrough by Matt Heusser.

To make it more specific, I suggest picking up points from the article "Coming to T.E.R.M.S. with Test Automation" by Albert Gareev and Michael Larsen.

T.E.R.M.S. stands for the following strategic points:
  • Tool vs. Technology
  • Execution
  • Requirements
  • Maintenance
  • Security
The best way is to educate people about why 100% test automation is not realistic or possible. I remember several years ago when we were all given the annual goal to automate 100% of our testing. My team’s approach was to work in the phrase “of what can and should be automated.” We then performed an analysis of our software to determine what can and should be automated, talked with the CIO to explain why not everything can or should be automated, and presented our strategy.
I have found many managers have a mostly irrational fear of regression, and this is a prime reason why automation is pushed so hard. Some even go so far as to believe they need it before every release.

Automated or not, I think that can be wasteful. I prefer better-informed teams and better-known risks, and then deciding what is important to test.

However, a bigger issue is, quite possibly, that too much emphasis is put on feature-by-feature testing, and automation can quickly become a glut of overtested software via automated checks.
I like how Matt suggests first of all asking what that means.
For all we know, the boss might be playing Orange Juice Test* with us.

* "Orange Juice Test" - an expertise evaluation technique, from "Secrets of Consulting" by Gerald Weinberg.
Matt really gets to the bottom of it here.

Too many think automation is like a garnish: sprinkle it on like salt or A-1 sauce. But many miss what he points out here -- sometimes teams just want to go a little faster, and what's really needed is investment in the build pipeline and in infrastructure for spinning up environments and deploying.

Only a small part of this might involve automated 'testing', and yet how much time and effort could be spared.
I’m a huge proponent of test automation, but I think it has to be done intelligently, and it does have limits. It boils down to what a person and a machine can do. There are always going to be things that you can’t get a machine to do (or do well), so you’re always going to have to get a set of eyes on your software.