Weigh test automation's advantages and disadvantages

Automated testing can add speed and completeness to the software development process, but be sure you've considered the tradeoffs. Let's look at the pros and cons.

Testing has always been a difficult, expensive and frustrating part of software development. In this age of rapid development and multicomponent software, it's often necessary to run an exhaustive application test just to validate a single component change. The challenges of both functional and load testing are exacerbated by the cloud, where software is expected to repair itself and scale under load.

Automated testing seems the perfect answer, but there are tradeoffs for both functional and load testing to consider. Be sure you understand test automation's advantages and disadvantages before you commit.

How automated testing differs from manual testing

With automated testing, tests can be organized to drive execution down all of the possible logic paths, achieving a level of completeness in testing often unattainable by even the developers themselves. Test injectors can be replicated and coordinated to simulate almost any number of users and any pattern of usage change. Software testing that might have taken months can now be done in a day.
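As a minimal sketch of what "driving execution down all the possible logic paths" means in practice (the `discount` function and its cases are invented for illustration), an automated suite enumerates one case per branch combination, a level of completeness that is tedious to sustain by hand:

```python
# Illustrative sketch: exercising every logic path of a hypothetical
# discount function. All names and values here are invented.

def discount(total, is_member):
    """Return the discount rate for an order (two branches, four paths)."""
    if total >= 100:
        return 0.15 if is_member else 0.10
    return 0.05 if is_member else 0.0

# One case per logic path -- the kind of exhaustive coverage that is
# easy to automate but rarely achieved manually.
cases = [
    ((150, True), 0.15),
    ((150, False), 0.10),
    ((50, True), 0.05),
    ((50, False), 0.0),
]
for args, expected in cases:
    assert discount(*args) == expected
print("all 4 paths covered")
```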

The biggest problem with automated testing is that the tests themselves must be developed, much as software itself is. They must also be kept in sync with the intended software usage patterns and with application changes.

Any change made to software logic requires testing changes to exercise both the new logic and the way it interacts with the rest of the application. If this test-change task isn't done, or if it isn't done correctly, then the tests won't validate the development. Keeping tests synchronized with the code is both a management issue and a function of the tools.

If developers direct the testing, it may be helpful to inject test points in the code during development so code analysis can help generate tests. If it's to be driven by individual business departments, the test tools must be simple; otherwise, users will make mistakes when they set up the tests.

The hybrid cloud architecture creates a problem for all kinds of automated testing because it tends to segment the application into a web/cloud front end and a transaction-processing back end.

The most popular automated testing tool, Selenium, is a front-end testing tool, which suggests users have accepted the idea of testing front-end and back-end components separately. This makes automated testing easier, but it also means that teams must synchronize the front- and back-end tests; if they don't, there's a risk that the entire application won't be tested. Many companies focus most of their development on the cloud front end, so cloud computing may ultimately promote the growth of automated testing.

Be aware of the tradeoffs

As for specific test types, let's look at load testing, which is sometimes called performance testing. This is probably the strongest application of automated testing because truly effective load testing requires too much human interaction and effort to do manually. And that means the testing is almost never done right.

Automated load testing can include many test data injection points, all operating independently as real users would, so it can simulate a large and variable population of users. Automated load testing with tools such as Apache JMeter or SmartBear's LoadNinja can provide thousands of users and controlled rates of data injection, to test all manner of load scenarios.
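A minimal sketch of the idea, using plain Python threads against an invented in-process stand-in for the system under test (real tools such as JMeter or LoadNinja drive networked targets and far larger user populations):

```python
# Load-test sketch: 20 simulated users, each sending 5 requests
# independently. The "service" is an invented in-process stand-in.
import random
import threading
import time

latencies = []
lock = threading.Lock()

def fake_service():
    time.sleep(random.uniform(0.001, 0.005))  # stand-in for real work

def user(requests):
    for _ in range(requests):
        start = time.perf_counter()
        fake_service()
        elapsed = time.perf_counter() - start
        with lock:  # latency list is shared across injector threads
            latencies.append(elapsed)

threads = [threading.Thread(target=user, args=(5,)) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(latencies)} requests, avg {sum(latencies) / len(latencies) * 1000:.1f} ms")
```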

The downside here is that the load test must be properly defined. The transaction mix and distribution, the rate of data injection and the data fields in the transactions themselves all must be reasonable so that the test data mimics real user input.

If tests aren't realistic, the results won't be useful. A good example is an automated test that generates random field values. This may create more errors than real-world users would, which distorts a test. This is the number one problem users report, and it's the main reason they are suspicious of automated load testing.
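One way to keep generated input realistic is to draw test transactions from a weighted mix that mirrors observed production traffic, rather than a uniform random spread. A sketch, with an assumed (invented) mix:

```python
# Sketch: weighted transaction mix for realistic load generation.
# The operation names and weights are assumed, not from any real system.
import random

random.seed(42)  # fixed seed so test runs are reproducible

mix = {"browse": 0.70, "search": 0.20, "checkout": 0.10}

# Draw 1,000 operations according to the production-like distribution.
ops = random.choices(list(mix), weights=mix.values(), k=1000)

for op in mix:
    share = ops.count(op) / len(ops)
    print(f"{op}: {share:.0%}")
```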

Functional test automation used to focus on test-data generation, meaning populating a database with information that would then exercise new logic. This is still a useful approach for applications that use database resources, and it can be used in testing online applications if the resulting test data can be injected via a GUI.
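The database-population approach can be sketched as follows (the schema, rows and query are invented): generated rows are chosen deliberately to exercise the branches of the logic under test.

```python
# Sketch of test-data generation against a throwaway SQLite database.
# Schema and data are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, total REAL, region TEXT)")

# Rows chosen to hit both sides of the total > 100 condition.
rows = [(1, 250.0, "east"), (2, 40.0, "east"), (3, 90.0, "west")]
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)

# The "new logic" being exercised: large-order count per region.
large = conn.execute(
    "SELECT region, COUNT(*) FROM orders WHERE total > 100 GROUP BY region"
).fetchall()
print(large)  # only the east region has an order over 100
```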

Test automation advantages in CI/CD

Automating functional testing is almost mandatory in continuous integration/continuous delivery (CI/CD). Manual testing is too time-consuming to set up and run, plus it tends to be inconsistent in testing all paths, changes and interactions in an application. Test automation advantages in these situations include reduced testing time, and increased consistency and auditability.
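The kind of self-contained functional check a CI/CD pipeline can run on every commit might look like this minimal sketch (the `normalize_email` function and its cases are invented):

```python
# CI-style functional test sketch using the standard unittest module.
# The function under test is invented for illustration.
import unittest

def normalize_email(addr):
    return addr.strip().lower()

class NormalizeEmailTest(unittest.TestCase):
    def test_lowercases(self):
        self.assertEqual(normalize_email("User@Example.COM"), "user@example.com")

    def test_strips_whitespace(self):
        self.assertEqual(normalize_email("  a@b.co "), "a@b.co")

# Run the suite programmatically, as a pipeline step would.
suite = unittest.TestLoader().loadTestsFromTestCase(NormalizeEmailTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("failures:", len(result.failures))  # → failures: 0
```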

The biggest downside to functional test automation is that test scripts and data prepared by developers tend to validate the developers' thinking rather than the application. As a result of this self-validation problem, many companies say they detect more problems when they test manually. This makes automating functional testing less popular with both CIOs and line departments.

The key to solving this developer bias is to have independent user testing teams, supported by low-code tools such as SmartBear's TestComplete or Katalon Studio. Some products play back scripts to create replicable tests; others allow the entry of data properties in forms, from which test data is generated. Either approach works, but script-based tools require recording actual transactions if they are to be effective.
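The "data properties" approach can be sketched as generating test records from a declared field spec, with a fixed seed so the generated data is replicable across runs (the spec format and fields here are invented):

```python
# Sketch: test records generated from declared field properties,
# rather than a recorded script. Spec format and fields are invented.
import random

random.seed(7)  # fixed seed keeps generated data replicable

spec = {
    "age":    lambda: random.randint(18, 90),
    "state":  lambda: random.choice(["NY", "CA", "TX"]),
    "amount": lambda: round(random.uniform(1, 500), 2),
}

def make_records(n):
    """Build n test records, one generated value per declared field."""
    return [{field: gen() for field, gen in spec.items()} for _ in range(n)]

records = make_records(3)
for r in records:
    print(r)
```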

Test automation will not solve all your problems; it will give you different problems to solve. Most of all, testing is about being systematic, whether it's done manually or through test automation. Tests must align with the software even as it changes, as well as with usage patterns, whether experienced or expected. Without that alignment, testing will never serve its goals.
