This content is part of the Essential Guide: What you need to know about software testing automation

Can we adopt a fully automated software testing strategy?

Your boss has jumped on the bandwagon to automate software testing. Don't despair. Let expert Matt Heusser walk you through what to say -- and do -- to keep everyone happy.

Well, first of all, can we admit that your boss hasn't thought this through completely?

What does it mean to have a fully automated software testing strategy?

Let's say you're doing a demo for a customer or senior executive. The software isn't in production yet; you are showing what you've done to get feedback for the next iteration. The vice president of finance asks what happens if you try to create an invoice that is past due the day you create it. It's a good question, essentially a test idea; the kind of thing no one thought of before. If the software works one way, it's fine; if not, it's a new feature request -- not really a bug. The person at the keyboard starts to try to answer the question.

Do you tell him to stop, that you need to create an automated test to answer that question? I certainly hope not.

There are plenty of test ideas like this -- things you think of in the moment to explore, especially when testing a new feature that is part of an existing system. Many of them aren't worth doing every time; you just want to try it once. Institutionalizing these into code, to run all the time, is an expensive and wasteful process. Your boss certainly doesn't mean that every little idea needs to be automated, does he?

Likewise, does your boss want to automate test design -- the development of test ideas? Does he have some sort of magical box into which you can feed requirements as Word documents and pop out test conditions and expected results?

When most people say "test automation," they mean automated test execution and evaluation, and, perhaps, setup. That is, they want to be able to click a button, have all the pre-existing checks run, and get results. A 100% automated software testing strategy, I think, implies that the thumbs up should be sufficient to get to production without further research.

If it were me, I'd start by asking these questions, to get your manager to define what a 100% fully automated software testing strategy means.

But there's a more serious problem under the surface here. Test tooling is a means; it buys you something. It is not an end in itself. If it were me, I'd be asking what the final goal is. If the goal is to go from "finished new feature test" to "in production" in some small period of time (say, an hour), you'll likely find there are other blockers; for example, the build and deploy alone may already take over an hour. Or the test environment may not be able to support multiple builds and deploys on multiple branches.

I suspect the boss means regression testing -- the period from "finished new feature test" to "ready to deploy." I'm not sure how long it is taking you now, but if it's over an hour, then you might suggest an intermediate goal on the way to the long-term goal, such as cutting the effort in half. If you do that, suddenly a host of new ideas open up, including finding engineering ways to reduce the failure rate, so fewer tests are needed for regression.

Overall my advice is simple: Take a step back. Breathe. Ask reasonable questions. Don't be a know-it-all, don't be a doormat, don't enable and don't (overly) obstruct. Work with the boss to define terms, to focus on end results, and then come up with the means. All of a sudden, you'll be a leader, and other people will start to notice.

Next Steps

Automating? Follow Lisa Crispin's advice on getting started

The best tools for the job

Manual versus not -- John Scarpino weighs in on when you should automate


Join the conversation



How have you handled a push for a fully automated software testing strategy?

A good walkthrough by Matt Heusser.

To make it more specific, I suggest picking up points from the article "Coming to T.E.R.M.S. with Test Automation" by Albert Gareev and Michael Larsen.

Where T.E.R.M.S. stands for the following strategic points:
  • Tool vs. Technology
  • Execution
  • Requirements
  • Maintenance
  • Security

The best way is to educate people about why 100% test automation is not realistic or possible. I remember several years ago when we were all given the annual goal to automate 100% of our testing. My team’s approach was to work in the phrase “of what can and should be automated.” We then performed an analysis of our software to determine what can and should be automated, talked with the CIO to explain why not everything can or should be automated, and presented our strategy.

I have found many managers have a mostly irrational fear of regression, and this is a prime reason why automation is pushed so hard. Some even go so far as to believe they need it before every release.

Automated or not, I think that can be wasteful. I prefer better-informed teams, better-known risks, and then deciding what is important to test.

However, a bigger issue is quite possibly that too much emphasis is put on feature-by-feature testing, and automation can quickly become a glut of over-tested software via automated checks.

I like how Matt suggests first of all asking what that means.
For all we know, the boss might be playing the Orange Juice Test* with us.

* "Orange Juice Test" -- an expertise evaluation technique from "Secrets of Consulting" by Gerald Weinberg.

Matt really gets to the bottom of it here.

Too many think automation is like a garnish: sprinkle it on like salt or A-1 sauce. Many miss what he points out here -- sometimes the real goal is just to go a little faster, and that calls for investments in the build pipeline and in infrastructure for spinning up environments and deploying.

Only a small part of this might involve automated 'testing', and yet how much time and effort could be spared.

I’m a huge proponent of test automation, but I think it has to be done intelligently, and it does have limits. It boils down to what a person and a machine can do. There are always going to be things that you can’t get a machine to do (or do well), so you’re always going to have to get a set of eyes on your software.