Craft an integration testing plan with quality before quantity

When it comes to integration testing, the end user is more important than the test volume. Increase test efficiency to please both application users and owners.

Integration testing can deliver exceptional results even with a small team and within the short bursts of testing time an Agile approach allows. An effective integration testing plan ensures that application code performs its intended function from end to end.

Integration tests differ from functional tests in that they cover a combination of functions at the same time. QA engineers must craft an integration testing plan that finds defects in the way application components interact with each other, which, in turn, improves the end-user experience.
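
To make that distinction concrete, here is a minimal sketch in Python with pytest-style tests. The names -- validate_email, UserRepository and register_user -- are hypothetical, invented for illustration: the first test checks a single function in isolation, while the integration test drives the registration workflow across the validator and the repository together.

import re

def validate_email(address: str) -> bool:
    # A single unit of logic: is this a plausible email address?
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", address) is not None

class UserRepository:
    # In-memory stand-in for a database table.
    def __init__(self):
        self._users = set()

    def add(self, address: str) -> None:
        if address in self._users:
            raise ValueError("duplicate user")
        self._users.add(address)

def register_user(repo: UserRepository, address: str) -> str:
    # The workflow a customer actually exercises: validate, then persist.
    if not validate_email(address):
        return "rejected"
    repo.add(address)
    return "registered"

# Functional (unit-level) test: one function, in isolation.
def test_validate_email_rejects_missing_domain():
    assert validate_email("user@") is False

# Integration test: the validator and the repository working together.
def test_register_user_workflow():
    repo = UserRepository()
    assert register_user(repo, "user@example.com") == "registered"
    assert register_user(repo, "not-an-email") == "rejected"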

Don't lose the end user

When they engage with an application, most end users can overlook the occasional cosmetic imperfection, such as a missing icon or another annoying display issue that doesn't interrupt their workflow. However, when a function fails, it quickly causes customer dissatisfaction.

In modern, digitally reliant businesses, application workflow interruptions simply cannot happen. It's easier than ever for a dissatisfied customer to bail on a web- or mobile-based application in favor of an alternative. Customers simply won't keep using an application when defects repeatedly disrupt their work.

When Kent C. Dodds recommended integration tests as a means to increase QA efficiency where it matters most to customers, it ruffled some testers' feathers. In a blog post, Dodds wrote about automated tests built into the code base. While this approach can improve the end product for the customer, it means fewer test cases overall, which can be a big deal for software development leaders whose idea of quality is a high number of tests executed -- regardless of value or type.

That said, if you want to get more out of your QA testing, regardless of type, focus on integration and workflow. The following two QA testing focal points are popular and fit into an integration testing-first model.

Test case numbers

Many software development teams use test execution numbers to measure their test effort and application quality. But, in the grand scheme of an integration testing plan, what do those numbers really mean? After all, if you execute 30,000 tests a month and 70% or more of those are either not valid or duplicated, the volume of tests is meaningless.

Execution of automated or manual tests does not guarantee application quality. QA teams should narrow down the testing suite to include only valid, useful test cases.

First, investigate how many tests contain real logic and how many thoroughly exercise the application's business logic the way a customer would use it. Keep in mind that an application might have both internal and external customers -- the latter come first, but the former are also an important consideration.

"You get diminishing returns on your tests as the coverage increases," Dodds said. "When you strive for 100% test coverage, you spend time testing things that really don't need to be tested. They don't have any logic in them at all."

Write integration tests in quantities that cover the application logic, which is what customers use to accomplish their work tasks. The same is true for manual or end-to-end testing -- it's not the number of tests that matters; it's the value of the tests. You can execute an effective integration testing plan that requires only 20 useful test executions and feel confident that your customer workflows function as expected.
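
As a sketch of that difference, the first test below exercises no logic at all -- it pads the test count and the coverage number without verifying any behavior -- while the second walks a pricing rule the way a paying customer would trigger it. The discounted_total function and its thresholds are hypothetical.

def discounted_total(subtotal: float, loyalty_years: int) -> float:
    # Hypothetical business rule: 10% off after two or more years of loyalty.
    if subtotal < 0:
        raise ValueError("subtotal cannot be negative")
    rate = 0.10 if loyalty_years >= 2 else 0.0
    return round(subtotal * (1 - rate), 2)

# Low-value test: no logic, no customer behavior, but it still counts as "one more test."
def test_discount_rate_is_ten_percent():
    assert 0.10 == 0.10

# High-value test: exercises the branches customers actually hit.
def test_loyalty_discount_applies_only_after_two_years():
    assert discounted_total(100.0, loyalty_years=3) == 90.0
    assert discounted_total(100.0, loyalty_years=1) == 100.0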

Focus on integration

QA testing is expensive. When you pay developers to write tests, it takes time and resources away from new development initiatives. Exceptional QA testers find workflow defects each and every release because they look beyond the surface and dig deeply into various workflow scenarios. But this defect hunt comes with a tradeoff: it's not a quick process, and time is money.

While it shouldn't take your QA engineers weeks or months to test a release, the process should always be continuous. In an Agile world, there's no room for lengthy test execution runs that don't yield defects.

As Dodds pointed out in his testing pyramid description: "As you move up the pyramid, the confidence quotient of each form of testing increases. You get more bang for your buck. So, while [end-to-end] tests may be slower and more expensive ... they bring you more confidence that your application is working as intended."

Continuously test integrated workflows. Automate integration tests -- and even end-to-end tests -- rather than relying on unit tests alone, as they will find more defects in both the short and long terms. The goal of a solid integration testing plan is not to run 10 tests or 100 or 1,000; it is to have a functioning application, regardless of how many changes occur.
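
One lightweight way to keep such a suite running on every change, assuming pytest, is to tag the workflow tests with a custom marker and run only that marker in the pipeline. The marker name, the place_order function and its limits below are hypothetical.

import pytest

# Register the marker in pytest.ini so pytest doesn't warn about it:
# [pytest]
# markers =
#     workflow: customer workflow tests run on every change

def place_order(items: list) -> str:
    # Hypothetical workflow under test: total the cart, then accept or reject the order.
    total = sum(items)
    return "accepted" if 0 < total <= 10_000 else "rejected"

@pytest.mark.workflow
def test_order_workflow_accepts_normal_cart():
    assert place_order([19.99, 5.00]) == "accepted"

@pytest.mark.workflow
def test_order_workflow_rejects_empty_cart():
    assert place_order([]) == "rejected"

# On every commit, run only the marked workflow suite:
#   pytest -m workflow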
