Test automation: When, how and how much

Test automation has always been an attractive alternative to expensive, time-consuming and inconsistent manual testing. The most common questions when considering a test automation program are:

  • When to invest in test automation?
  • How to implement or proceed with a test automation program?
  • How much of your manual testing effort should shift to test automation?

We will address these questions in the context of the most common overall program factors. Using the information and considerations described here, you’ll have the data to determine how best to implement test automation in your organization.

Key program factors

The questions of when, how, and how much always depend on the context of the overall program -- what is the test automation target? The return on investment must align with the program's key factors; otherwise, the long-term success of the test automation effort will be in jeopardy. The most common key program factors are:

  • Development paradigm (Agile, non-Agile, instrumented, non-instrumented)
  • Quality objectives (defect escape velocity)
  • Target deployment velocity (volume of new/enhanced functionality per release)

The focus of this article is on the implementation of a test automation program. It must be noted that test automation is not a silver bullet -- treating it as a “cure-all” will certainly lead to disappointment. Test automation is an enabler: if you have an effective (though overburdened) testing process, then an appropriate test automation program will yield significant returns for your organization.

When to automate

A test automation program should be considered when key program factors indicate the overall development program is not meeting expectations and there are no cost-effective alternatives to test automation. The key indicators, from a program perspective, are:

  • quality objectives are not being met
  • defect escape velocity into production is deemed unacceptable
  • target deployment velocity is not being met because testing is perceived as a bottleneck
  • testing is not being completed within the assigned timeframe

These factors are often experienced by the project team at the same time -- basically the test group has reached its testing capacity. When this happens, the leadership team has two alternatives: grow the manual testing team or launch a test automation proof-of-concept (POC) to determine if automation can address the testing deficit.

It should be noted that testing is often not the “true” bottleneck, but as long as it is perceived as the bottleneck, other systemic quality issues will not be addressed. Test automation is one way to remove the perception of a testing bottleneck, empowering the testing team while allowing the program to focus on other systemic quality issues -- for example, the initial quality of the code.

How to implement a test automation program

Implementing a test automation program breaks down into people, process and technology. From a people perspective, the testing group will need to build skills in the art of test automation. Your people will need:

  • training on the test automation paradigm and tool being used
  • internal expertise (consultant or new hire) to ensure the appropriate application of automation

From a process perspective, the testing group and the program as a whole will need to determine how test automation will fit into the system development process -- the initial focus should be on high-yield automation activities that free up manual testers (full-time equivalents). Once a sufficient number of testers are available, the focus can turn to high-risk areas of the application space that do not meet the overall quality objectives. Your process will need to:

  • integrate test automation into the overall development process
  • focus on high-yield automation activities
    • test data creation
    • smoke test (see the sketch after this list)
    • regression test
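
To make these high-yield activities concrete, here is a minimal sketch of what an automated smoke test might look like. It assumes a hypothetical web application at http://example.internal and uses Python with the pytest and requests libraries; your application, URLs and tooling will differ.

    # smoke_test.py: minimal smoke-test sketch (endpoints are hypothetical)
    # Requires: pip install pytest requests
    import requests

    BASE_URL = "http://example.internal"  # hypothetical application URL

    def test_login_page_is_reachable():
        # A smoke test asks "is the application alive?", not "is it correct?"
        response = requests.get(f"{BASE_URL}/login", timeout=10)
        assert response.status_code == 200

    def test_health_endpoint_reports_ok():
        # Assumes the application exposes a simple health endpoint
        response = requests.get(f"{BASE_URL}/health", timeout=10)
        assert response.status_code == 200
        assert response.json().get("status") == "ok"

Run with "pytest smoke_test.py" after every build; the value is a fast, repeatable go/no-go signal rather than deep functional coverage.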

How and what test automation technology to apply is dependent on other key program factors, specifically the development paradigm being deployed and the target deployment velocity. If the development paradigm includes effective code instrumentation (self-testing code) or effective test-driven development (TDD), then a lighter technology that addresses the testing gaps from an outside-in perspective would probably be an appropriate fit. If the development approach does not include sufficient quality checks, then a heavier (tier 1 commercial) toolset would be a more appropriate fit -- you will need both the organizational and the outside-in (GUI-based) automation. In either case, if the desired deployment velocity is “fast enough,” a tier 1 commercial toolset will be required -- simply to avoid the framework investment needed for freeware, open-source, or tier 2 toolsets to reach the desired testing velocity. Your technology will need to:

  • address the quality gaps inherent in the development paradigm
    • light-weight test automation for self-testing development paradigms
    • heavy-weight test automation for non-self-testing development paradigms
  • address the deployment velocity challenge
    • for high deployment velocities, heavy-weight test automation toolsets are required
    • for lower deployment velocities, a light-weight test automation toolset may fit (a sketch contrasting the two styles follows this list)
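
To illustrate the light-weight versus heavy-weight distinction, the sketch below checks the same hypothetical login flow two ways: once at the API level (light-weight, fast, cheap to maintain) and once through the browser with Selenium (heavier, closer to what GUI-based suites automate). The URLs, element IDs and credentials are invented for the example.

    # login_checks.py: one flow, two styles (URLs and element IDs are assumed)
    # Requires: pip install requests selenium (plus a local ChromeDriver)
    import requests
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    BASE_URL = "http://example.internal"

    def api_login_works() -> bool:
        # Light-weight, outside-in API check: no browser, runs in milliseconds
        resp = requests.post(f"{BASE_URL}/api/login",
                             json={"user": "qa_user", "password": "secret"},
                             timeout=10)
        return resp.status_code == 200

    def gui_login_works() -> bool:
        # Heavier GUI-based check: exercises the full stack end to end,
        # but is slower and costs more to maintain as screens change
        driver = webdriver.Chrome()
        try:
            driver.get(f"{BASE_URL}/login")
            driver.find_element(By.ID, "username").send_keys("qa_user")
            driver.find_element(By.ID, "password").send_keys("secret")
            driver.find_element(By.ID, "login-button").click()
            return "Welcome" in driver.page_source
        finally:
            driver.quit()

If the development process already catches most functional defects, the API-level style may provide enough outside-in coverage; the GUI-level style earns its keep when the user interface itself is a significant defect source.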

Any test automation program should start with an honest assessment of the people, process and available technologies. Based on this assessment and, if justified, on the responses to an RFP (request for proposal), one or more POC initiatives should be launched to determine which process and technology best fit your short-term and long-term needs. Your team will need to:

  • Assess the people, process, technology and success of the current testing process
  • Identify systemic root-cause issues preventing testing success
  • Assess available processes and technologies (RFP)
  • Launch one or more POC initiatives to verify appropriate “fit”

How much to automate

How much of your application space should be automated is dependent on the key program factors we have previously discussed. There are some basic guidelines that should always be considered:

  • Never automate the entire testing process -- skilled exploratory testing will always yield defects that will not be detected through test automation.
  • Test automation is a tool, not a solution -- if it is not applied with intelligence and skill, the net result will be “really fast” ineffective testing.
  • Test automation should always yield a return on investment of at least 4 to 1, based on the consumption of testing resources (full-time equivalents); a worked sketch follows this list.
    • The weight of ongoing maintenance will decrease the return on investment over time unless the application becomes “static.”
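
As a worked illustration of the 4-to-1 guideline, the figures below are invented; substitute your own effort numbers. The calculation simply compares manual tester hours saved per release against the hours spent building and maintaining the automation.

    # roi_sketch.py: illustrative ROI arithmetic (all figures are assumptions)
    manual_hours_saved_per_release = 120   # regression and smoke effort replaced
    releases_per_year = 6
    build_hours = 300                      # one-time automation development
    maintenance_hours_per_release = 20     # keeping scripts current

    hours_saved = manual_hours_saved_per_release * releases_per_year                 # 720
    year_one_cost = build_hours + maintenance_hours_per_release * releases_per_year  # 420
    steady_state_cost = maintenance_hours_per_release * releases_per_year            # 120

    print(f"Year one ROI: {hours_saved / year_one_cost:.1f} : 1")          # ~1.7 : 1
    print(f"Steady-state ROI: {hours_saved / steady_state_cost:.1f} : 1")  # 6.0 : 1

With these assumed numbers the effort only clears the 4-to-1 bar once the build cost has been absorbed, which is exactly why ongoing maintenance on a frequently changing application can erode the return.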

Moving beyond these basic guidelines, how much of the application space to automate is dependent on which key program factors are important to your organization. If the goal is to significantly increase deployment velocity or ensure quality objectives are being met -- then a significant percentage of existing functionality should be automated. On the other hand, if the objective is to reduce the weight of the overall testing effort on the test group, the focus should be on high-yield automation activities:

  • test data creation (see the sketch after this list)
  • smoke test
  • regression test
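
As one example of high-yield test data creation, the sketch below generates a batch of synthetic customer records as a CSV file using only the Python standard library. The field names and volumes are invented; the point is that data a tester would otherwise key in by hand can be produced in seconds and regenerated for every test cycle.

    # make_test_data.py: synthetic customer records (field names are assumptions)
    import csv
    import random

    FIRST_NAMES = ["Alice", "Bob", "Chen", "Dana", "Elena"]
    LAST_NAMES = ["Smith", "Nguyen", "Garcia", "Olson", "Patel"]

    def make_customers(count, path="customers.csv"):
        # Write one header row plus `count` generated customer rows
        with open(path, "w", newline="") as handle:
            writer = csv.writer(handle)
            writer.writerow(["customer_id", "first_name", "last_name", "credit_limit"])
            for i in range(1, count + 1):
                writer.writerow([
                    f"CUST{i:05d}",
                    random.choice(FIRST_NAMES),
                    random.choice(LAST_NAMES),
                    random.choice([500, 1000, 2500, 10000]),  # a spread of limits
                ])

    if __name__ == "__main__":
        make_customers(1000)   # 1,000 rows of regression data on demand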

In closing, test automation is not a silver bullet -- it is a powerful test enabler when used appropriately. Test automation is not “free.” There is a significant cultural and monetary investment that must be made to ensure your test automation program is successful -- focus on those areas of the application that will yield the highest and most visible return.

David W. Johnson, “DJ,” is a Senior Test Architect with over 25 years of experience in information technology across several business verticals. He has played key roles in business analysis, software design, software development, testing, disaster recovery and post-implementation support. Over the past 20 years, he has developed specific expertise in testing and in leading QA/test team transformations, delivering test architectures, strategies, plans, test management, functional automation, performance automation, mentoring programs and organizational assessments.

 

This was first published in January 2011
