
Four ways to approach integration testing

Although there are several different approaches to running integration tests in software, many of them are performed incorrectly. Software expert Chris McMahon explains how to run integration tests correctly in this tip.

There are four ways to go about the business of integration testing. One is the wrong way; the other three are much more interesting.

The wrong way

J.B. Rainsberger defines integration testing like this:

"I use the term integration test to mean any test whose result (pass or fail) depends on the correctness of the implementation of more than one piece of non-trivial behavior."

When you hear him talk about implementing such tests, what he means is that writing unit-test-style assertions against the behavior of more than one class at a time will eventually lead a developer to madness. Such tests tend to multiply, take a long time to run, be difficult to debug, and ultimately deliver very little value. For the whole story see Mr. Rainsberger's blog.

The problems come when we treat integration tests as if they were unit tests: valuable only to the programmer, run with every build, and managed just as unit tests are. Under those circumstances the value of the integration tests will be minimal, and their cost very high.

But there are valuable approaches to creating and managing integration testing. What Mr. Rainsberger's description of a typical approach to integration testing lacks is a sense of purpose, a heuristic that guides the integration tester to choose appropriate tests. With such heuristics in place, integration testing can be quite valuable.

API testing

More and more we see applications exposing an Application Programming Interface, or API. An API is most often public; it lets outside parties control some part of the application from their own programs. For example, you can write a program that uses the Twitter search API to find posts containing some phrase of interest; when you get the results, you can update a page on a wiki with that content via the wiki's own API.

API testing is a valid and valuable approach to integration testing. An API allows users to write a program to achieve some sort of business function: retrieve data, update information, monitor status, etc. Since APIs are not used by human beings, any change to an existing API has the potential to wreak havoc among those using that aspect of the API. And since business functions tend to exercise significant portions of the code base at once, it behooves the supplier of an API to have robust regression tests: not for random combinations of classes in the code base, but tests that validate each business function exposed in the API. These business functions must continue to operate correctly as the API is maintained and expanded.
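As a minimal sketch of the idea, the test below pins down the contract of one business function exposed by an API. The `search` function and its response shape are hypothetical stand-ins for any public endpoint (such as a search API); the point is that the test validates a business function end to end, not an arbitrary combination of internal classes.

```python
# Hypothetical public API: search posts for a phrase.
POSTS = [
    {"id": 1, "text": "integration testing tips"},
    {"id": 2, "text": "unit testing tips"},
]

def search(phrase):
    """Public API call: return all posts whose text contains the phrase."""
    return [p for p in POSTS if phrase in p["text"]]

def test_search_contract():
    results = search("integration")
    # Assert the contract that outside callers depend on:
    # the matching posts come back, and the response shape is stable.
    assert [p["id"] for p in results] == [1]
    assert all({"id", "text"} <= set(p) for p in results)
```

A suite of such tests, one per exposed business function, is what protects API consumers as the implementation underneath changes.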

Acceptance test driven development

Another valuable approach to integration testing is Acceptance Test Driven Development, or ATDD. With ATDD, before writing any code, the business will define a set of examples that the code is intended to implement. A simple example would be something like:

GIVEN Jane has a bank account with a balance of $100.00
WHEN Jane deposits $20.00
THEN Jane's balance is $120.00
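One way that example might be automated is sketched below. The `BankAccount` class is hypothetical; real ATDD tools (FitNesse, Cucumber, behave, and the like) wire steps such as these to business-readable tables or feature files rather than plain code.

```python
class BankAccount:
    """Minimal account model, just enough to make the example pass."""
    def __init__(self, balance):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

def test_deposit_acceptance():
    # GIVEN Jane has a bank account with a balance of $100.00
    account = BankAccount(balance=100.00)
    # WHEN Jane deposits $20.00
    account.deposit(20.00)
    # THEN Jane's balance is $120.00
    assert account.balance == 120.00
```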

Then the objective for the developer is to make that business-facing example pass, while using unit tests appropriately in the process. Of course, ATDD can become quite complex very quickly:

GIVEN a family of four has insurance policy A
WHEN one dependent enters college
AND they add another dependent upon having a child
AND the other dependent is diagnosed with leukemia
THEN their premium increases by $257.63 per month

An acceptance test such as that will certainly exercise a lot of integrated code.

User Interface tests

User Interface (UI) tests are those tests that manipulate the application in the same way that a user does. Of all the kinds of test mentioned here, they are the only ones that navigate the application. Although UI tests typically navigate a certain path and, upon reaching the end of that path, assert something about the state of the application, a significant portion of their value lies not in the assertions but in the navigation itself.

Tests that navigate the user interface are valuable because such tests expose inadvertent errors in parts of the code base that are *not* under scrutiny by the test itself. For example a user might wish to see a record on the third page of a long list of records. Every other kind of test will assert that the record exists in the proper place in a list of records; only a UI test will show us that the "Next" button is broken.
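The paging example can be sketched without a browser. `PagedList` below is a hypothetical stand-in for a real UI driven by a tool such as Selenium; what matters is that the test must click through the pages, so a broken "Next" button fails the test even though the assertion itself is about the record.

```python
class PagedList:
    """Simulates a paginated list view with a 'Next' button."""
    def __init__(self, records, page_size=10):
        self.records = records
        self.page_size = page_size
        self.page = 0

    def visible_records(self):
        start = self.page * self.page_size
        return self.records[start:start + self.page_size]

    def click_next(self):
        # If 'Next' were broken in the real UI, navigation would fail here,
        # even though the record itself exists in the data.
        if (self.page + 1) * self.page_size >= len(self.records):
            raise RuntimeError("'Next' button is broken or missing")
        self.page += 1

def test_record_on_third_page():
    ui = PagedList(records=[f"record-{n}" for n in range(30)], page_size=10)
    ui.click_next()  # to page 2 -- a data-level test never exercises this
    ui.click_next()  # to page 3
    assert "record-25" in ui.visible_records()
```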

Managing integration tests

All of the approaches to integration testing share some characteristics: they are slow compared to unit tests; they test not only the code itself but the environment in which the code operates; and they exercise interfaces distinct from the code itself. Integration tests are completely distinct from unit tests, and managing them demands an approach different from the one used for unit tests.

Since integration tests are highly decoupled from the actual code itself, it makes sense to run them outside of normal Continuous Integration (CI) processes. The public APIs, the business-logic fixtures, and the UI should change rarely, regardless of the state of the underlying code base. For this reason it makes sense to have dedicated test runners for such tests, and separate status reporting procedures as well. Refactoring the code base should not cause integration test failures; at the same time, there may be legitimate failures of the integration tests even if the unit tests all pass. Coupling the running of the integration tests to the state of the code base is a poor practice. Instead, treat integration tests as a source of information apart from the normal CI process.
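One common way to keep the two suites separate, assuming a Python project using pytest (the marker name here is illustrative), is to register a marker for the slow tests:

```ini
# pytest.ini -- register an "integration" marker so the two suites
# can be selected independently
[pytest]
markers =
    integration: slow tests run by a dedicated runner, not on every commit
```

The per-commit build then runs `pytest -m "not integration"`, while a dedicated runner executes `pytest -m integration` on its own schedule and reports its results separately.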

Choose your tests wisely

As Mr. Rainsberger says, in the course of writing unit tests, if you find yourself tempted to make assertions about the behavior of more than one class at once: stop. That is the path to a suite of tests that is fragile, slow, and unmaintainable.

Instead, consider adding different styles of tests to the regression test suite. If the behavior to be tested is exposed in an API, test it there. If the behavior to be tested is important to the business, work with the customers to implement some reasonable acceptance tests. If the behavior to be tested has to do with how a user interacts with the application, add some judicious UI tests.

Join the conversation


Honestly, I don't consider UI or ATDD to be integration tests.  End to End Tests, scenario tests, workflow tests... and when I say tests, I mean the smaller subset that is checking.  

An integration test, to be an integration test, must look at the interaction point between two objects, either concretely or abstractly. Now the API layer could be considered that, if you then go underneath the API to see if the change propagated to the data access layer (the database or data store).

Too many people think integration tests are about tying everything together, but they offer little value when used in an end to end situation.  An integration test, would more than likely look like a call to an API to generate a token for access, and then using that token to see if it can access the service, assert on some attribute and return.

That's a very good article, though I wouldn't agree that it's "four ways". Applications may have different interfaces: API, REST, etc. - and integration testing should cover what's available.

The article speaks of the purpose but doesn't speak of the risks. Risk assessment is the tool testing should use to prioritize activities and turn an infinite number of tests into a reasonable one.