In a recent CIO blog post on SOA Testing Best Practices, John Michelsen relates the story of an ERP Order Management system that went live and then dropped orders for three months in production before anyone noticed. It is a sad story, but one I’ve seen played out a couple of times myself. The story makes me think of two things: end-to-end testing and production monitoring.
On the several projects I’ve worked on that involved some sort of SOA, we’ve always segmented our testing into three phases: unit testing, integration testing, and end-to-end testing. For me, unit testing isn’t just developer unit testing; it’s also the testing team unit testing the service to ensure it meets the requirements of the specification/mapping document. It’s testers hand-coding XML or recording SOAP regression test beds. Once a service has been proven out at the unit level, we start plugging other services and applications into it and looking at how they interact. When we think everything’s rock solid, we run end-to-end tests, simulating business scenarios from start to finish (UI to UI if possible).
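To make the unit-level phase concrete, here’s a minimal sketch of the kind of hand-coded check a tester might build against a mapping document. Everything here is an illustrative assumption: the OrderService namespace, the submitOrder operation, and the required fields are hypothetical, not from the post.

```python
# Sketch of a unit-level SOAP regression check. The service namespace,
# operation name, and field list below are hypothetical examples.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.com/orderservice"  # hypothetical service namespace


def build_submit_order(order_id: str, sku: str, qty: int) -> str:
    """Hand-code a request envelope, as a tester would for a test bed."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{SVC_NS}}}submitOrder")
    ET.SubElement(op, f"{{{SVC_NS}}}orderId").text = order_id
    ET.SubElement(op, f"{{{SVC_NS}}}sku").text = sku
    ET.SubElement(op, f"{{{SVC_NS}}}quantity").text = str(qty)
    return ET.tostring(env, encoding="unicode")


def check_against_mapping(envelope: str) -> bool:
    """Verify the envelope against the mapping document: all required
    fields present and quantity numeric -- the sort of spec-conformance
    check done before integration testing begins."""
    root = ET.fromstring(envelope)
    op = root.find(f"{{{SOAP_NS}}}Body/{{{SVC_NS}}}submitOrder")
    if op is None:
        return False
    for field in ("orderId", "sku", "quantity"):
        if op.find(f"{{{SVC_NS}}}{field}") is None:
            return False
    return op.find(f"{{{SVC_NS}}}quantity").text.isdigit()
```

The value of a test bed like this is that it pins the service to the contract before anything else is plugged into it; integration testing then only has to worry about interactions, not field-level correctness.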
In addition to testing, someone on the project needs to be thinking about operations and how we’ll know the health of these services at any given point. Who is monitoring queue depth and message aging? What alerts are thrown, and when? When an alert is thrown, who is notified? Each of these scenarios can also be exercised, either in the end-to-end testing that is performed or via performance testing.
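The queue-depth and message-aging questions above can be sketched as a simple threshold check. This is a minimal illustration, not a real monitoring integration: the queue stats structure, the thresholds, and the alert strings are all assumptions for the example.

```python
# Minimal sketch of a queue health check covering the two signals
# mentioned above: depth and message aging. Thresholds and the
# QueueStats shape are illustrative assumptions.
import time
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class QueueStats:
    name: str
    depth: int                 # messages currently waiting
    oldest_enqueued_at: float  # epoch seconds of the oldest waiting message


def check_queue(stats: QueueStats,
                max_depth: int = 500,
                max_age_secs: float = 300.0,
                now: Optional[float] = None) -> List[str]:
    """Return alert strings for any threshold breach; empty means healthy."""
    now = time.time() if now is None else now
    alerts = []
    if stats.depth > max_depth:
        alerts.append(f"ALERT {stats.name}: depth {stats.depth} exceeds {max_depth}")
    age = now - stats.oldest_enqueued_at
    if stats.depth > 0 and age > max_age_secs:
        alerts.append(f"ALERT {stats.name}: oldest message aged {age:.0f}s, "
                      f"limit {max_age_secs:.0f}s")
    return alerts
```

The interesting decisions are not in the code: someone has to pick the thresholds, decide where the alerts route, and then deliberately trip them during end-to-end or performance testing so the notification path is proven before go-live, not three months after.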
At a workshop last year, I heard Ken Ahrens from iTKO present briefly on the “Three C’s” that John Michelsen also references in the CIO posting. It’s a useful model for talking about how SOA testing differs from more traditional manual testing contexts.