The focus of performance testing is to validate that the service-oriented architecture (SOA) solution will meet the performance requirements of the business under expected loads. And the focus of stress testing is to determine what volume of load the SOA solution can withstand before it fails -- failure being defined as an inability to meet one or more performance requirements.
SOA -- Example
In our previous articles -- "Use functional and regression testing to validate SOA solutions" and "Unit, integration testing first steps toward SOA quality" -- we presented a simple and rather "coarse" (large, complex services) example of an SOA application landscape: a solution for selling digital media online. Its service layers consist of a Web-enabled presentation layer, a customer account service, a catalogue service, a cart service, a digital fulfillment service, a customer history service, and an accounting service that interfaces with a standard financial services database. The following figure illustrates this SOA solution, and we will continue to use it as our basis for discussing SOA performance and stress testing.
We will address SOA performance and stress testing using a simple set of business events that represent the typical events of a day-in-the-life of this SOA solution. In this case we will address the following:
- Member login and catalogue browse
- Non-member login and catalogue browse
- Member login and catalogue purchase
- Non-member login and membership application
There are obviously several more business events, or threads, that would make up this application landscape, but this gives us enough for the purposes of discussion.
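As a sketch, the business events above could be captured as plain data that drives the test scenarios built later in this article. The step names here are hypothetical placeholders for illustration, not the actual service operations of the solution:

```python
# Hypothetical catalogue of business events (threads) for the SOA solution.
# Step names are illustrative placeholders, not real service operations.
BUSINESS_EVENTS = {
    "member_browse": ["login_member", "browse_catalogue"],
    "nonmember_browse": ["login_guest", "browse_catalogue"],
    "member_purchase": ["login_member", "browse_catalogue", "add_to_cart",
                        "checkout", "digital_fulfillment"],
    "nonmember_signup": ["login_guest", "apply_for_membership"],
}

def steps_for(event_name):
    """Return the ordered service steps that make up a business event."""
    return BUSINESS_EVENTS[event_name]
```

Keeping the event definitions as data, separate from the code that drives load, is what later lets the same events be recombined into multi-thread and day-in-the-life runs.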
SOA -- Performance and stress test planning
Performance or stress/load testing requires a team with an in-depth understanding of the systems: hardware, software, firmware, protocols, transactions, and the business. Unless capacity testing is an ongoing exercise for the project/system owner, this breadth of talent rarely resides in a single organization. The test manager must work with peers in operations, development, testing, and any third-party IT providers to assemble a team that can address all aspects of the system. This is really no different from any other performance and stress testing engagement, but effective planning is critical because of the dispersed nature of any SOA application landscape.
The complexity and fluid nature of an SOA solution require the performance/stress testing team to take an evolutionary approach that moves from single threads to multiple threads and finally to typical day-in-the-life performance/stress testing. Unlike more traditional application landscapes, the test assets themselves should support repeatable performance testing, because what works within the current context of the SOA solution may not work for the next iteration. Remember, this is a constantly evolving solution set.
SOA -- Single thread
One of the challenges that SOA application landscapes bring to bear is the dispersed nature of the solution. One approach that simplifies the initial performance and stress testing effort is to deal with each business event, or business thread, in isolation -- a single-thread approach. This should not be the extent of any performance/stress testing effort, but it does enable the organization to validate the stability and robustness of services before the entire application landscape is available, and it greatly simplifies both monitoring and troubleshooting. This is the equivalent of unit and integration testing from a performance/stress-testing perspective. The remaining risk lies in the possible interactions and interdependencies of unrelated business events.
These single-thread scenarios should become standalone components to an overall performance/stress testing solution -- basically a modular set of performance testing tools that can be quickly adapted to measure any combination of business events.
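One way to make each single thread a standalone, reusable component is to wrap it as a function that drives one business event in isolation and reports basic latency figures. This is a minimal sketch, assuming a hypothetical `invoke` callable standing in for the real service calls:

```python
import statistics
import time

def run_single_thread(invoke, steps, iterations=100):
    """Drive one business thread in isolation and collect per-iteration latency.

    invoke  -- callable that executes one named service step (stubbed below)
    steps   -- ordered list of step names making up the business event
    """
    latencies = []
    for _ in range(iterations):
        start = time.perf_counter()
        for step in steps:
            invoke(step)  # one call per service in the thread
        latencies.append(time.perf_counter() - start)
    return {
        "iterations": iterations,
        "mean_s": statistics.mean(latencies),
        "max_s": max(latencies),
    }

# Usage with a stub in place of real services:
stats = run_single_thread(lambda step: None,
                          ["login_member", "browse_catalogue"],
                          iterations=10)
```

Because the scenario is just a function over a list of steps, the same runner can be pointed at any business thread, which is exactly the modularity the toolbox approach calls for.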
SOA -- Multi-thread
Once the performance/stress testing team has validated a series of related single threads and any required tuning of the application landscape has occurred, multi-thread performance/stress testing can begin. The multi-thread approach assumes that some level of single-thread testing has occurred, that appropriate tooling from both a monitoring and a load perspective is now available, and that related business threads have been identified. The team selects business threads that cross common services but perform loosely coupled activities.
For example, the customer history service tracks both casual catalogue browsing (a non-purchase event) and catalogue purchases (a purchase event). The team can now determine the cross-impacts of loosely coupled activities on the SOA application landscape and support any required tuning. Once again, treat each scenario as a standalone component in what is gradually becoming an effective toolbox that can be quickly adapted to measure any combination of business events, both anticipated and unanticipated.
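The standalone scenarios can then be combined. The sketch below, assuming hypothetical scenario callables, runs two loosely coupled business threads concurrently so their cross-impact on shared services (such as customer history) can be observed:

```python
import threading

def run_concurrently(scenarios):
    """Run several standalone scenario callables at the same time.

    scenarios -- dict mapping a name to a zero-argument scenario function.
    Returns each scenario's result, keyed by name.
    """
    results = {}

    def worker(name, fn):
        results[name] = fn()  # each scenario records its own measurements

    threads = [threading.Thread(target=worker, args=(name, fn))
               for name, fn in scenarios.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Two loosely coupled threads that both touch the customer history service:
outcome = run_concurrently({
    "browse": lambda: "browse done",
    "purchase": lambda: "purchase done",
})
```

Because `run_concurrently` takes any mapping of scenarios, the same harness covers any combination of business events as the landscape evolves.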
SOA -- Day in the life
This is the typical performance/stress testing exercise: project the expected business load against the architecture and then simulate that load. During the simulation, the performance/stress testing team -- with the assistance of several stakeholders -- measures the performance of the architecture under load.
Determining the source of performance failures within an SOA solution can become very difficult; in fact, if earlier single-thread and multi-thread performance testing has not occurred, it can be impossible. By building up a modular set of tools (scenarios), the team can more easily adapt to the quickly changing aspects of the SOA solution. The "all-in-one" solutions that have been applied to centralized applications in the past can quickly become unworkable within the context of a rapidly evolving SOA application landscape.
SOA -- Third-party services
One of the greatest challenges in the SOA application space is the ability to use -- and therefore be vulnerable to -- third-party services. From a performance/stress testing perspective these can be the most problematic areas to test. How does one test a service that can be accessed only via production in a 24/7 world? The short answer is that you cannot, but the organization can take steps to reduce or isolate the impact of performance failures on the part of third-party services.
First, third-party services should not be mission-critical. If they are, they must be covered by a service-level agreement (SLA) backed by a well-documented set of performance/stress tests. Non-critical third-party services should be accessed in such a manner that failure to respond in a timely fashion results in a trivial failure that does not significantly impact the user experience. This ability should be tested by a performance scenario that dummies out that service.
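Degrading gracefully when a third-party service is slow can be rehearsed by dummying the service out behind a timeout. A minimal sketch, assuming a hypothetical third-party call and fallback content:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

def call_with_fallback(service_call, timeout_s, fallback):
    """Invoke a third-party service but fall back if it is too slow.

    A slow or dead dependency then causes only a trivial failure
    (the fallback result) instead of blocking the user experience.
    """
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(service_call)
        try:
            return future.result(timeout=timeout_s)
        except FutureTimeout:
            return fallback

# "Dummy" third-party services for the performance scenario:
fast = lambda: "live recommendation"
slow = lambda: (time.sleep(1.0), "too late")[1]

ok = call_with_fallback(fast, timeout_s=0.5, fallback="default content")
degraded = call_with_fallback(slow, timeout_s=0.1, fallback="default content")
```

Running the same scenario with the fast and the deliberately slow dummy verifies that the timeout path, not just the happy path, meets the performance requirement.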
Finally, even though you cannot test a third-party service on your own, you can test it with the cooperation of the service provider. If the provider is unwilling or unable to cooperate, your organization should consider finding an alternate provider.
SOA -- Conclusion
The constantly evolving SOA application landscape helps meet the needs of today's complex business solutions and competitive time-to-market demands. It also presents a new challenge for professional testers. Although there is no single solution to that challenge, there are practices that will help.
The days of putting off the testing challenge until the last minute and then simply throwing untrained or junior bodies at it need to end. Instead, a disciplined (methodical), requirements-centric, and tool-enabled approach to testing needs to be adopted -- one that supports both testing flexibility and test artifact reuse.
About the author: David W. Johnson is a senior computer systems analyst with over 20 years of experience in information technology across several industries. He has played key roles in business needs analysis, software design, software development, testing, training, implementation, organizational assessments and support of business solutions. David has developed specific expertise over the past 12 years on implementing "Testware," including test strategies, test planning, test automation and test management solutions. You may contact David at DavidWJohnson@Eastlink.ca.