The only really good reason to be in this position is that all available tools were considered, none supported the application under test (even with customizations and extensions), and you lack access to the skills (internal or external) to build a tool of your own. That combination seems unlikely to me.
The simple truth is this: if a company is building an application realistically expected to attract enough users to justify the expense of performance testing, even if that expense is just an employee's time, then that company ought to be projecting enough revenue from the application, or stand to lose enough credibility from a poorly performing application, to justify either the cost of a tool or the risk (in some companies' eyes) of using a free or open source tool.
Now, after saying all of that, I must admit that the vast majority of the value gained from quality performance testing comes from outside the load-generation tool. Some of my favorite techniques (assuming you are testing websites):
- Odds are that you can slice response times in half (sometimes more) by performance testing the front end. For more on how to do that, see the article "Right Click -> View Source and other Tips for Performance Testing the Front End" in the December 2007 issue of AST Update magazine (PDF).
- Use browser plug-ins or online tools to capture page load times. Several such tools are referenced in the article above. (A minimal scripted version of this idea is sketched after this list.)
- Ask functional testers and/or user acceptance testers to record their opinion about
performance while doing their testing. It may be useful to give them a scale to use, such as
"fast, acceptable, tolerable, annoying, unusable."
- Have the developers put timers in their unit tests. These won't tell you anything about user-perceived response times, but developers will be able to see whether their objects/modules/classes/functions/etc. take more or less time to execute from build to build (see the timer sketch after this list). The same idea can be applied to various resource-utilization measures (such as memory and CPU), depending on the skills and/or tools available to the development team.
- Employ what I frequently refer to as the "hire a bunch of interns" method. Basically, get increasing numbers of your co-workers to use the application during a specified period of time and ask them to note both the response time (easiest to capture with the aforementioned browser plug-ins) and their opinion of the application's performance. (Give them the same scale used by the functional and/or user acceptance testers.)
- Have special performance builds made with timestamps strategically written to log files. Analyze the log files build after build and track the trends. (A simple log-analysis sketch follows this list.)
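
For the page-load-time bullet above, here is a minimal sketch of how the same measurement can be scripted rather than read off a plug-in by hand. It assumes Python with the Selenium package and a Chrome driver available; the URL is a hypothetical stand-in for your application under test.

```python
# Minimal sketch: read page load time from the browser's Navigation
# Timing data. Assumes the selenium package and a Chrome driver on
# PATH; the URL below is a hypothetical stand-in.
from selenium import webdriver

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/")  # hypothetical application under test
    timing = driver.execute_script(
        "var t = window.performance.timing;"
        "return {ttfb: t.responseStart - t.navigationStart,"
        "        load: t.loadEventEnd - t.navigationStart};"
    )
    print(f"Time to first byte: {timing['ttfb']} ms")
    print(f"Full page load:     {timing['load']} ms")
finally:
    driver.quit()
```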
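To make the unit-test timer idea concrete, here is a minimal sketch using Python's unittest; process_order() and the 0.5-second budget are hypothetical examples, not anything from a real project. The point is simply that the timing gets recorded every build, so the trend is visible.

```python
# Minimal sketch: a timer inside a unit test so execution time can be
# compared build to build. process_order() and the 0.5 s budget are
# hypothetical stand-ins.
import time
import unittest

def process_order(order_id):
    # Placeholder for the real object/module/function under test.
    time.sleep(0.01)
    return order_id

class OrderTimingTest(unittest.TestCase):
    def test_process_order_timing(self):
        start = time.perf_counter()
        process_order(42)
        elapsed = time.perf_counter() - start
        # Print (or log) the timing so the trend is visible per build.
        print(f"process_order took {elapsed:.4f} s")
        self.assertLess(elapsed, 0.5)  # hypothetical build-to-build budget

if __name__ == "__main__":
    unittest.main()
```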
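And for the log-timestamp bullet, here is a minimal sketch of the analysis side. It assumes a hypothetical log format in which each operation writes a START and an END line, such as `2008-12-01 10:15:03,251 START checkout`; the real format and file name would be whatever the development team chooses.

```python
# Minimal sketch: compute elapsed times from START/END timestamps in a
# performance build's log. The log format and file name are
# hypothetical examples, not prescribed by the article.
from datetime import datetime

TIMESTAMP_FORMAT = "%Y-%m-%d %H:%M:%S,%f"

def elapsed_times(path):
    """Yield (operation, seconds) for each START/END pair in the log."""
    starts = {}
    with open(path) as log:
        for line in log:
            date, clock, marker, operation = line.split()
            stamp = datetime.strptime(f"{date} {clock}", TIMESTAMP_FORMAT)
            if marker == "START":
                starts[operation] = stamp
            elif marker == "END" and operation in starts:
                yield operation, (stamp - starts.pop(operation)).total_seconds()

# Run against each build's log and compare the numbers build to build.
for operation, seconds in elapsed_times("perf_build.log"):
    print(f"{operation}: {seconds:.3f} s")
```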
This was first published in December 2008