I'm sometimes criticized for taking a heuristic approach to performance testing rather than a more mathematical one. My response has always been that I'm biased toward commercially driven environments, where even heuristic approaches frequently take more time, require more information and demand specialized expertise the team doesn't have. Adding complex mathematical equations that require more time, tolerate fewer unknowns and (in many cases) exceed the mathematical training of every performance tester on the team simply isn't likely to make matters better.
Even so, I think I have finally come up with a suitably complex mathematical formula for my critics:
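One possible rendering of the formula in LaTeX, reconstructed from the verbal description that follows (the original appeared as an image; all symbol names here are my own labels, not the author's):

```latex
S \;=\; \int_{0}^{D}
  \frac{\left( B_{\mathrm{lead}}^{\,E_{\mathrm{lead}}}
        \;+\; \overline{B_i^{\,E_i}} \right)
        \cdot A \cdot T}
       {U \cdot K \cdot P \cdot M \cdot V}
  \, dt \;\pm\; X
```

where, under this reading, S is the success potential (0 to 100), B and E are brain power and experience (for the lead tester and, averaged, for the remaining team members), A and T are factors for the availability and usefulness of the application under test and of the tools, U is the average availability of users, K is the team's business knowledge, P is the usefulness of policies and procedures, M is the usefulness of project management, V is how similar the test environment is to production, D is the project duration, and X is whatever matters but isn't in the formula.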
For those of you who have forgotten (or never had a need to learn) integral calculus, the formula roughly says this:
"The potential for a successful performance testing project is a function of the lead performance tester's brain power, raised to his/her experience, plus an average of the remaining team members' brain power, raised to their experience, multiplied by a factor of the application under test's availability and usefulness, multiplied by a factor of tool availability and usefulness over the average availability of users, the team's business knowledge, the usefulness of policies and procedures, the usefulness of project management, and how similar the test environment is to the production environment all integrated over the duration of the project, plus or minus whatever matters but isn't in this formula."
The maximum value of the formula is 100; the minimum is 0. Based on my experience, a good score would be 50, and a typical score would be 10. The "+/- X" term acknowledges that whatever the function predicts could be wrong.
OK, I admit it. I'd never use this formula. In fact, the entire notion of a magic formula is intended as a parody. I came up with this while reviewing a paper from an IEEE conference. The paper is now several years old and was related to the performance of software systems. I'm not going to name the specific paper because it's not important, and I don't want anyone to think I'm trying to attack it. I'm not.
Formulas that look (and, to me, feel) like the one above are all over books and papers that claim to be written for performance testing practitioners. While I know some performance testers work in environments where such formulas are not only valuable but essential to their jobs, my experience suggests they are the minority.
That said, the formula is neither shallow nor entirely without merit. If we look closely, it calls out many of the reasons that performance testing projects aren't as successful as we would like them to be. Consider the following points the formula makes:
- The single most effective way to improve your chances of success is with a knowledgeable and experienced team.
- Consistency in the team, tools, processes, etc. over the life cycle of the project (especially if they are consistently high) is crucial.
- Having no application to test (or limited availability of the application, or the application being so broken that most performance testing is either impossible or irresponsible) can destroy efficiency.
- Good tools that the team knows how to use can make a significant positive difference.
- Bad tools that the team is forced to use but doesn't know how to use are worse than no tools at all.
- Management, processes, business knowledge, environments and real end users can all help or hurt some by themselves, but none of them can make a huge impact either way on their own.
- No matter what we do, there will always be an unexpected "X factor" that crops up during the project that, at least temporarily, can dramatically impact our previously estimated success potential.
Feel free to use my formula if it helps you communicate the relative importance of some aspect of performance testing to your team. If you use it, remember that every input value (with the possible exception of time and the number of people on the team) is an estimate that you created, making the formula no more accurate than your input data. I suspect that you'll get more value out of using the formula to demonstrate that even the most complicated-looking formulas won't relieve you of the need to think.
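If you do want to play with the formula, here is a minimal, tongue-in-cheek Python sketch. Everything in it is my own simplification: the parameter names, the crude "integration" (assuming the instantaneous value holds for the whole project) and the 0-to-100 clamp are assumptions layered on top of the verbal description, so the output is exactly as trustworthy as the guesses you feed in.

```python
def success_potential(
    lead_brain, lead_exp,    # lead tester: brain power, experience
    team,                    # list of (brain_power, experience) for the other members
    app_factor, tool_factor, # availability/usefulness of the app under test and tools
    drag,                    # combined users/knowledge/process/PM/environment term
    duration,                # project duration; the "integral" is just a multiply here
    x=0.0,                   # the unavoidable "+/- X" factor
):
    """Toy estimate of performance-test success potential (0..100).

    Every input is an estimate you made up, and so is the result.
    """
    # Average of the remaining team members' brain power raised to experience.
    team_avg = sum(b ** e for b, e in team) / len(team) if team else 0.0
    # Instantaneous value: (lead + team average) boosted by app/tool factors,
    # dragged down by everything in the denominator.
    instant = (lead_brain ** lead_exp + team_avg) * app_factor * tool_factor / drag
    # "Integrate" by assuming the value is constant over the project, then add X.
    raw = instant * duration + x
    # Clamp to the stated range: maximum 100, minimum 0.
    return max(0.0, min(100.0, raw))
```

For example, `success_potential(2, 3, [(2, 2)], 1, 1, 1, 1)` yields 12.0, which lands close to the "typical" score of 10 mentioned above.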
About the author: Scott Barber is the chief technologist of PerfTestPlus, vice president of operations and executive director of the Association for Software Testing and co-founder of the Workshop on Performance and Reliability.