End-to-end tests are typically done through the UI, mimicking what a customer sees. For apps built of microservices, this type of testing is a bit more complicated. Microservices need more than conventional end-to-end testing, reaching into areas such as API contract tests and testing in production.
Conventionally, testing as a user involves at least five steps:
- Log in.
- Create a session ID.
- Navigate to a place in the UI.
- Send input to test something.
- Check the result.
To test as a user on an e-commerce site, a tester might simply find a product, add it to the cart and check out. These steps seem simple, but a lot happens behind the scenes. A modern application might call 20 web services on the front end, and a complex system that must coordinate with a legacy back end and a service bus might call 50.
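The five-step user flow above can be sketched as an automated check. Everything here -- the Storefront class and its methods -- is a hypothetical in-memory stand-in for a real UI driver and a real e-commerce application, used only to make the steps concrete:

```python
class Storefront:
    """In-memory stand-in for the application under test."""
    def __init__(self):
        self.sessions = {}

    def log_in(self, user, password):
        session_id = f"session-{user}"          # step 2: create a session ID
        self.sessions[session_id] = {"cart": []}
        return session_id

    def search(self, session_id, query):        # step 3: navigate in the UI
        return [{"sku": "SHOE-1", "name": "running shoe"}]

    def add_to_cart(self, session_id, sku):     # step 4: send input
        self.sessions[session_id]["cart"].append(sku)

    def check_out(self, session_id):            # step 5: check the result
        cart = self.sessions[session_id]["cart"]
        return {"status": "ok", "items": cart}

def test_buy_a_product():
    app = Storefront()
    session = app.log_in("qa-user", "secret")   # step 1: log in
    results = app.search(session, "running shoe")
    app.add_to_cart(session, results[0]["sku"])
    receipt = app.check_out(session)
    assert receipt["status"] == "ok"
    assert receipt["items"] == ["SHOE-1"]

test_buy_a_product()
print("end-to-end flow passed")
```

In a real suite, each of these method calls would drive the browser or the front-end API, and each would in turn fan out to the dozens of back-end services the article describes.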
When something goes wrong, the test fails but doesn't explain why. A QA professional must look at the results, perhaps watch a rerun, isolate the failing microservice, reproduce the problem and then file a ticket. This end-to-end process is slow and expensive, and it requires a full-blown test environment. The failure can even be a false error, caused by a database down for maintenance or an update to another microservice.
There are better approaches for end-to-end testing for microservices architectures. This article covers mocking with subsystems for tests, API tests that focus on the consuming service, incremental and continuous microservices deployment and tests that can run in production. End-to-end testing microservices is expensive, slow and error prone, but these strategies enable equally effective QA with less work.
Each microservice performs an independent task but has dependencies on other microservices to complete workflows in the application. To test a microservice, you must enable it to work with these other components.
Subsystem testing means that the testers clone any subsystem with which a microservice interacts. It works like service virtualization or other mocking techniques to enable realistic tests while isolating one microservice. Set up subsystem tests, then automate and run them in a CI/CD pipeline.
The re-creation of a microservice's dependencies for on-demand testing also helps with exploratory testing -- and when there's a need to reproduce, fix and retest issues found in production. This testing is end-to-end from the perspective of the service's consumer.
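A minimal sketch of this isolation technique, using Python's standard unittest.mock in place of a dedicated service-virtualization tool. CheckoutService and InventoryClient's get_stock call are hypothetical names invented for the illustration:

```python
from unittest.mock import Mock

class CheckoutService:
    """Service under test; depends on a separate inventory microservice."""
    def __init__(self, inventory):
        self.inventory = inventory

    def place_order(self, sku, quantity):
        stock = self.inventory.get_stock(sku)   # call into the dependency
        if stock < quantity:
            return {"accepted": False, "reason": "out of stock"}
        return {"accepted": True}

# "Clone" the subsystem: the mock answers like the real inventory service
inventory = Mock()
inventory.get_stock.return_value = 3

service = CheckoutService(inventory)
assert service.place_order("SHOE-1", 2) == {"accepted": True}
assert service.place_order("SHOE-1", 5)["reason"] == "out of stock"
inventory.get_stock.assert_called_with("SHOE-1")
print("checkout tested in isolation")
```

Because the dependency is cloned rather than deployed, this test runs in milliseconds in a CI/CD pipeline and cannot fail because of another team's outage.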
Consumer contract tests
APIs connect a web service with its consumers. Many software development teams create a web service with only their own function and priorities in mind, which causes misunderstandings for API consumers. For example, a development team might publish a spec and an API to search products on an e-commerce site, but how other teams consume that API might not be part of the design and development picture.
Contract testing tools such as Pact create tests based on how the software under test uses an API, rather than focusing on the API provider. The tool can scan the codebase and decipher how the application calls the API and what result it expects from each function signature. When the analysis completes, Pact can spin up a test server that acts the way the consumers expect the microservice to act, to integration test those consumers. The tool also produces a set of expectations against which to test the actual microservice.
Pact's testing is limited. Without a database to mimic real user scenarios, the tests cannot account for the data in a tested workflow. Contract testing does, however, give microservices testers a way to catch a function signature change that would cause dependent services to fail.
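A hand-rolled illustration of the consumer contract idea -- this is not Pact's actual API, just the concept: record what the consumer expects from the search endpoint, then verify the provider's real response shape against that expectation. All field names here are hypothetical:

```python
# Expectation derived from how a consumer calls the search API
contract = {
    "request": {"method": "GET", "path": "/search", "query": {"q": str}},
    "response": {"status": 200, "body_fields": {"sku": str, "name": str}},
}

def verify_provider(response_status, response_body):
    """Check a provider response against the recorded contract."""
    if response_status != contract["response"]["status"]:
        return False
    expected = contract["response"]["body_fields"]
    return all(
        field in item and isinstance(item[field], ftype)
        for item in response_body
        for field, ftype in expected.items()
    )

# A provider response that honors the contract passes...
assert verify_provider(200, [{"sku": "SHOE-1", "name": "running shoe"}])
# ...and a changed signature (a renamed field) is caught before deploy
assert not verify_provider(200, [{"id": "SHOE-1", "name": "running shoe"}])
print("contract verified")
```

A real contract tool also generates the consumer-side mock server from the same expectation, so both sides of the API are tested against one shared artifact.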
Microservices deployment restructures
Many organizations manage deployment at the end of a sprint or a product increment, taking a Waterfall approach to the process. This method often entails a great deal of frantic development, followed by a regression-testing day -- or week, or sprint -- at the end.
A group deploy at the end of development has one big advantage: All changes to microservices are made simultaneously. A team can coordinate changes to an API so that the relevant systems already work with the change to the microservice on all consumer sides. This advantage is, of course, also the drawback. End-to-end testing microservices is required for such Waterfall-style deployments, and these tests will probably need to happen in a test environment, with all the drawbacks discussed above.
Another option is to deploy every microservice separately. To make this work, implement a rigorous set of tests that includes consumer contract tests; a build must pass for the team to deploy it. Run tests in production that explore the application's end-to-end functionality. If the tests fail, examine what changed since the last passing build, and either revert those changes or issue a fix. This continuous approach to microservices deployment creates resilience: When the search service has a bad rollout, for example, all that can break is search. The search microservice developers can revert to the last known-good version, or debug and reissue the update.
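The per-service deploy gate just described can be sketched as pipeline logic. The release function and its parameters are hypothetical stand-ins for a real CI/CD pipeline's stages; the deploy and revert steps are left as comments:

```python
def release(service, build, run_tests, run_prod_checks):
    """Gate one microservice's build: test, deploy, verify, or revert."""
    if not run_tests(build):                    # includes consumer contract tests
        return f"{service}: build rejected, not deployed"
    # deploy(build) would go here in a real pipeline
    if not run_prod_checks(service):
        # revert(service) would roll back only this one microservice
        return f"{service}: reverted to last known-good version"
    return f"{service}: deployed"

# Only search can break when only search ships a bad build
assert release("search", "v42", lambda b: True, lambda s: False) \
    == "search: reverted to last known-good version"
assert release("cart", "v7", lambda b: True, lambda s: True) \
    == "cart: deployed"
print("deploy gates hold")
```

The key property is the blast radius: a failing gate stops or reverts one service, while every other microservice keeps running its last known-good version.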
End-to-end testing microservices in production
To test individual microservice releases in production, testers can restructure the overall end-to-end test as four or five different checks. Inject credentials or use URL guessing, and a test can check just the relevant part of an application; this is called a DOM-to-database test. If a team uses known-good test accounts, it can set up these checks to run on a loop in production. I estimate this tactic can yield 70%-90% of the value of a full end-to-end test strategy at only 10%-20% of the work.
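A sketch of that production loop, assuming a known-good test account. The check functions and account fields are hypothetical; each check covers one narrow DOM-to-database slice of the former monolithic end-to-end test:

```python
TEST_ACCOUNT = {"user": "qa-loop-user", "session": "known-good-token"}

def check_search(account):
    # Would jump straight to the live search URL with injected credentials
    return True

def check_checkout(account):
    # Would exercise just the cart-to-receipt slice for the test account
    return True

CHECKS = [check_search, check_checkout]

def run_once(account):
    """One pass of the production loop; returns the names of failing checks."""
    return [check.__name__ for check in CHECKS if not check(account)]

# In production this would repeat on a schedule, e.g.:
#   while True: alert_on(run_once(TEST_ACCOUNT)); sleep(60)
assert run_once(TEST_ACCOUNT) == []
print("production checks green")
```

Because each check targets one slice with one test account, a failure points directly at the responsible microservice instead of at "the end-to-end test."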