What a simple question! I even have a simple answer.
"To determine or estimate various performance characteristics under various conditions."
The problem is that this answer is virtually useless unless we also know which performance characteristics are interesting, to whom, and for what purpose. Worse, more often than not, the folks who ask us to do the performance testing fundamentally don't know what they want to know and don't know what we can reasonably provide. They also don't understand that how the results are going to be used significantly impacts which tests we run and how we design them.
In my experience, when I ask stakeholders what the goals of the performance testing effort are, I generally get one of three answers:
- You're the performance tester, you tell me.
- Tell me how many users/orders/customers we can handle in production.
- Make sure it will be fast enough.
Needless to say, not only are these answers as useless as my response about determining or estimating performance characteristics, but the second two are practically impossible, since we almost never have either the data or the equipment available to accomplish those missions reliably. Somewhere between "virtually useless" and "practically impossible" there must be some reasons for testing performance that are both useful and possible. If there weren't useful and possible reasons for testing performance, we wouldn't still be doing it. (I hope!)
As it turns out, the key to my coming up with a model to explore that middle ground was to stop thinking about performance testing as a "testing effort" and start thinking about it as a "business effort." Once I made that shift, I was quickly able to identify four groups of "for whom" with common "for what purposes." Since then, I've found that using this model to frame conversations about prioritizing performance testing objectives fundamentally changes the discussion and increases the value of the performance testing effort. It also reduces wasted effort: without it, we often design tests to collect one class of results, only to discover that an entirely different test was needed to provide the results the stakeholders actually wanted but couldn't articulate until they were presented with the less valuable ones.
The four most common groups of "for whom" are business stakeholders, developers, end users, and regulatory/compliance inspectors. While it's certainly common to conduct performance testing that serves more than one of those stakeholders at a time, let's look at each individually to see how their objectives differ.
High-priority performance testing objectives that support business stakeholders include the following:
- Capacity and scalability estimates
- Comparisons vs. competitors or current systems
- Anticipated degree of user satisfaction
- Likelihood of expensive or embarrassing failures (with mitigation recommendations)
- Compliance assessments
- Other information that improves go-live decisions
High-priority performance testing objectives to aid developers include the following:
- Build-to-build performance trends
- Architecture and design model validation
- Configuration option comparisons
- Resource utilization patterns under load
- Other information that helps developers assess and improve performance as they develop software
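To make the first of those developer objectives concrete, here is a minimal sketch of build-to-build trend tracking. All names and the workload are hypothetical; a real harness would persist medians per build and compare against a stored baseline, but the core idea is just "time it, then flag builds that exceed the baseline by some tolerance."

```python
import time
import statistics


def time_operation(operation, runs=5):
    """Time an operation several times; return the median duration in seconds.

    The median is less sensitive to one-off spikes than the mean.
    """
    durations = []
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        durations.append(time.perf_counter() - start)
    return statistics.median(durations)


def is_regression(baseline_s, current_s, tolerance=0.20):
    """Flag a regression if the current build is more than `tolerance` slower."""
    return current_s > baseline_s * (1 + tolerance)


# Hypothetical workload standing in for the real operation under test.
def sample_workload():
    sum(i * i for i in range(10_000))


if __name__ == "__main__":
    median = time_operation(sample_workload)
    print(f"median duration: {median:.6f}s")
    # Compare against a (hypothetical) stored baseline from the previous build.
    print("regression!" if is_regression(0.010, median) else "within tolerance")
```

Run nightly against each build, a script like this turns "performance trends" from a vague aspiration into a chart developers can actually watch.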
High-priority performance testing objectives on behalf of end users include the following:
- Assessment of acceptability and consistency of response times independent of load
- Assessment of stability and functional integrity independent of load
- Assessment of performance acceptability of recommended client hardware/software/configurations
- Assessment of other performance characteristics likely to reduce quality in the eyes of the end user
High-priority performance testing objectives when preparing for regulatory/compliance inspections include the following:
- Testing and documenting in compliance with relevant standards
- Determining compliance with relevant criteria
- Building and executing tests that replicate regulatory/compliance tests
- Supporting tuning to achieve regulatory/compliance criteria
The next time you talk with stakeholders about their objectives for the performance testing effort, use something similar to this list as a reference; it will lead to a better conversation than you're probably used to.
At the end of the day, there really is only one reason to test for performance, and that reason is to provide value to our stakeholders. Recognizing that and shifting our discussions accordingly can only increase the value we provide through our performance testing efforts.
About the author: Scott Barber is the chief technologist of PerfTestPlus, vice president of operations and executive director of the Association for Software Testing, and co-founder of the Workshop on Performance and Reliability.