Guiding principles for automated software testing

An expert in automated software testing offers guiding principles for succeeding at this most challenging of QA projects.

In recent conversations with software quality assurance pros, I keep hearing the same thing: test automation projects are among the most demanding that any QA organization takes on. Following the right set of guiding principles can increase the odds of success with this challenging undertaking.

I asked Jay Philips, president and CEO of software development consultancy Project Realms, in East Bethel, Minn., for her advice for QA organizations embarking on automated software testing projects. She offered five guiding principles, which I share with you in this edition of Quality Time.

Guiding principle #1: Avoid developing test scripts too early.

Software testers are encouraged to plan an automated test early in the application lifecycle. But writing scripts that test an application function before that function is complete is counterproductive, Philips said. "If you automate too early and the app is still changing, you will have to rewrite your script." She doesn't recommend waiting until the entire application is ready. A better approach is to review the application requirements, identify which ones are complete and start writing test scripts for those that are.

Guiding principle #2: Develop realistic estimates of how long application testing will take.

Software testers are under pressure to do more, faster. But if they succumb completely to management demands to get the software out the door sooner, they may end up damaging the credibility of the test organization, Philips said. Test automation is all about speed, but it's crucial to budget time to resolve unanticipated problems, such as scripts that aren't working. These kinds of issues come up all the time, she said.


Guiding principle #3: Stay abreast of subtle design changes, which test scripts won't necessarily catch.

QA pros -- and the scripts they write -- focus on testing new functions implemented in the software, but they often overlook design changes that might accompany those new functions. Philips offered an example. The second release of an application gives users a more efficient way to change their passwords. In the first release, the background color of the change password screen was red; in the new release it's blue. A script that is focused on testing the functionality is not going to check to make sure the red screen is now blue, Philips said.

Guiding principle #4: Turn to developers for help coding scripts.

One hurdle that most software testers face is learning a programming language well enough to write the scripts that an automated software test project demands. This is the perfect opportunity to work with developers on your team, Philips said. And, yes, like QA pros everywhere, she has heard countless stories where interactions between software testers and developers don't go well. But she urges testers to look beyond the stereotypes and ask for help. It worked for her.

Once, while working with a home-grown testing framework that required some knowledge of Java, Philips turned to a developer for insight on how to code a particular script. She knows some Java. "But there were areas I could not figure out," Philips told me. She sat down with the developer who wrote the piece of code that was giving her trouble, and together they walked through the application. "He helped me figure out why my script was broken," she said. What's more, they built a relationship that continues today. "Now he uses my script to do unit testing," she said.

Guiding principle #5: Keep on educating management about test automation.

Myths about automated software testing abound. The big one, of course, is that organizations that implement it can lay off the testers. Getting top management to understand why this is not the case is an ongoing challenge. "They want to get rid of manual testers, but you can't," Philips said. "Management people believe you can automate everything. But it's not true."

Instead of fearing for their jobs, QA pros should take it upon themselves to continually educate top managers, without expecting them to get the message right away. Explain why manual testing -- conducted in tandem with automated testing -- remains important. Offer evidence of where the QA organization is saving time and money. Most important, give management examples of software glitches that have been caught by manual testing and explain why automated testing can't catch them.

What are your guiding principles for automated software testing projects? Let us know what you think. 

This was last published in September 2013

What has been your greatest challenge in automating software testing?
The complications involved in automation surface only once it is started; we may face issues handling many of the control types in the application.
Estimation is already a big challenge, and the complexity of coding makes it even harder to keep estimates intact.
Being given so few resources to do it, though everyone seems to want it!
Selecting the best tool for automated testing.
Not when to automate but how to automate. Designing an automation solution that will fit with source control and our workflow.
Managers are too “stupid” to realise its benefits. Changing the status quo is like the world coming to an end for them
The greatest challenge for me recently has been dealing with latency and "flaky tests." It's frustrating when a test run fails as a whole suite but passes when the failures are run individually. Hunting for the dependency or state condition that may be causing the issue can be a waste of time, simply because running the same order of tests a second time results in a pass. Running tests in parallel helps, but even then there's enough variation to drive one to distraction.
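The "rerun it and it passes" pattern described above can be automated with a simple retry wrapper. This is a hedged sketch (the decorator name and retry count are illustrative, and rerunning is a stopgap that masks flakiness rather than fixing it), but it does separate one-off latency noise from repeatable failures:

```python
import functools

def retry(times=3):
    """Rerun a flaky test up to `times` attempts before failing for real.

    A band-aid, not a cure: it hides intermittent failures so a whole-suite
    run isn't derailed, but the underlying flakiness still needs hunting down.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            last_error = None
            for _attempt in range(times):
                try:
                    return fn(*args, **kwargs)
                except AssertionError as exc:
                    last_error = exc
            raise last_error
        return wrapper
    return decorator


attempts = []

@retry(times=3)
def flaky_check():
    attempts.append(1)
    # Simulated latency-dependent condition: fails on the first run only.
    assert len(attempts) >= 2

flaky_check()  # first attempt fails, second succeeds, no error escapes
```

Many test runners offer this behavior as a plugin, which is usually preferable to rolling your own.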
The greatest challenges for automation are two things:

1. Moving Target Syndrome
2. Poor planning on Dev team to provide hooks to easily setup for Automated Tests

For 1: Many automation projects necessarily lag behind development and require constant maintenance and upgrading. Michael alluded to the problem of flaky tests. Flaky tests are part of the equation, but there are always gaps in a program's development that automation sometimes has to work around.

For 2: Over-reliance on the UI for setup and teardown of tests makes testing not just slower but also less reliable. I've seen better success on teams where APIs are available for setting up data (and I think that's preferable to tests that write directly to the database, but sometimes you don't have a choice).
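The API-based setup the commenter describes can be sketched as follows. Everything here is hypothetical (the `FakeUserApi` class stands in for whatever service API a real team exposes); the point is the shape: seed and clean up data through an API call, reserving the UI for the behavior actually under test:

```python
# Hypothetical sketch: set up and tear down test data through an API layer
# instead of driving the UI -- faster and less flaky.

class FakeUserApi:
    """Stand-in for a real service API used only for test setup/teardown."""

    def __init__(self):
        self.users = {}

    def create_user(self, name):
        self.users[name] = {"name": name}
        return self.users[name]

    def delete_user(self, name):
        self.users.pop(name, None)


def run_test_with_api_setup(api):
    api.create_user("test-user")           # setup via API, not UI clicks
    try:
        # The actual UI-driven test steps would go here; for the sketch,
        # just verify the seeded state is visible.
        assert "test-user" in api.users
        return True
    finally:
        api.delete_user("test-user")       # teardown via API, always runs


api = FakeUserApi()
result = run_test_with_api_setup(api)
```

The try/finally ensures teardown runs even when the test body fails, so one broken test doesn't leave stale data behind for the next.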
Selecting the right automation tool depending on the requirements.
The requirement to automate database testing for processes like schema verification, verifying structural changes, data updates, etc.

But now I can say that this is no longer a big challenge.

The expectations formed by people who have only read about the benefits of test automation but know nothing about testing and test automation in general.

It's hard to decide when to start automating test scripts.
Coding scripts is not easy.
Coding scripts, since it stretches QA's skills.
Well-put summary of the problems QA teams face.
I'd like to add another guiding principle: make the reuse of your test automation artifacts an important consideration; share your creations with your future self and with others.
Testing business scenarios/Agile stories/use cases is probably the most challenging. Even with Agile, QA still doesn't get a real sense for how customers use the software we test. When you add the dimension of dynamic data changes in multiple required systems in your test environment, automation becomes quite arduous.
I think guiding principle number 7 should be: "Treat your test automation work with the same level of value and care that you do your source code development." If the same level of care, revision control and code review goes into automation as goes into source code creation, then there is a very good chance that test automation will be successful (much more so than if this approach is not used).
Number five in this list is possibly the most important one. Automation can be an effective measure for working on regressions and discovering if a commit has broken something, or if a fundamental change has been introduced that requires rework of other scripts, but it cannot test for things it doesn't already know about. That's where humans have to continually inform and help update the suite with new findings and new information.