What are the different types of performance tests that should be considered with Web applications?
The challenging part about answering questions sent by email or entered in a Web page's "Ask your question" box is clarifying what the person is looking for, so that the answer is not painfully generic. When asked about "performance testing" for any environment, I generally try to make sure we are talking about the same thing. Often, we are not.
My working definitions are as follows:
- Performance testing looks to validate speed, scalability and/or stability.
- Load testing looks to validate behavior under normal and peak load conditions.
- Stress testing looks to validate the behavior when the application is pushed beyond normal load conditions.
- Capacity testing looks to find the number of users or transactions the application can support and still meet published or promised targets.
Given these as definitions, I recommend that each type be considered for any application, no matter the environment. Having said that, the need for each and every type of testing will almost certainly change based on the nature of the application.
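To make the distinction between the definitions above concrete, here is a minimal sketch of a load test driver. It is an illustration, not a recommended tool: `handle_request` is a hypothetical stand-in for a real HTTP call, and the concurrency numbers are arbitrary. Raising `concurrency` toward normal and peak levels gives a load test; pushing it well beyond that turns the same harness into a stress test.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request() -> None:
    """Hypothetical stand-in for a real request to the application under test.

    In a real test this would be, for example, an HTTP call to the Web
    application; here it just simulates a small amount of server work.
    """
    time.sleep(0.01)

def run_load_test(concurrency: int, requests_per_user: int) -> dict:
    """Drive the target with `concurrency` simulated users and summarize latency."""
    latencies: list[float] = []

    def user_session() -> None:
        for _ in range(requests_per_user):
            start = time.perf_counter()
            handle_request()
            # list.append is thread-safe in CPython, so no lock is needed here.
            latencies.append(time.perf_counter() - start)

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        for _ in range(concurrency):
            pool.submit(user_session)
        # Leaving the `with` block waits for all simulated users to finish.

    return {
        "requests": len(latencies),
        "mean_s": statistics.mean(latencies),
        "p95_s": statistics.quantiles(latencies, n=20)[-1],  # ~95th percentile
    }

if __name__ == "__main__":
    normal = run_load_test(concurrency=5, requests_per_user=10)   # "normal" load
    peak = run_load_test(concurrency=50, requests_per_user=10)    # "peak" load
    print("normal:", normal)
    print("peak:", peak)
```

Comparing the mean and 95th-percentile numbers between the normal and peak runs is exactly the kind of evidence the definitions above ask for: does behavior hold up as load grows, and at what point does it stop holding up?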
For example, an online ticket outlet has a far greater need to know its maximum load and stress capacities, whereas an online sheet music publisher has a less critical need for them. To determine these breaking points, and to plan for scaling the application's resources, capacity testing can help establish the size and nature of the expansion that may be needed, say, before the next concert tour of a television singing competition's winners.
The hard part about this testing is convincing people of the need for it. Stress tests tend to be unrealistic by their very nature, so you will sometimes run into people arguing that they are unneeded and a waste of precious resources. In some instances they may be right. In others, they are probably not.
Another caveat: without running on platforms representative of the production environment, such tests give only a "snapshot" of overall performance. While some learned experts have explained to me how, in certain circumstances, you can extrapolate results, I have not encountered those conditions, nor have I been able to apply them successfully to the systems I did test.
Finally, as with all testing, it is impossible to test a system "completely." The impossibility of testing all of the reasonable combinations of variables, scenarios and situations leads some stakeholders, if not entire organizations, to question the value of performance testing at all. However, if you can report some basic information and trends around things such as response time and the system's ability to handle various loads, and show trends between versions or releases, you will be able to give a reasonable answer to the general question, "Is the performance better than it was before?" My experience has been that making a reasonable effort will expose weak points in the system and reduce the chances of catastrophic performance failures.
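Answering "is it better than before?" can be as simple as comparing summary statistics between releases. The sketch below assumes you have collected raw response times (in seconds) from two releases; the sample values here are invented for illustration.

```python
import statistics

def summarize(samples: list[float]) -> dict:
    """Reduce raw response times (seconds) to headline numbers stakeholders ask about."""
    return {
        "mean": statistics.mean(samples),
        "p95": statistics.quantiles(samples, n=20)[-1],  # ~95th percentile
    }

def compare_releases(baseline: list[float], current: list[float]) -> dict:
    """Report the per-metric change between two releases; negative means faster."""
    base, curr = summarize(baseline), summarize(current)
    return {metric: curr[metric] - base[metric] for metric in base}

# Hypothetical measured response times from two releases, in seconds.
v1 = [0.21, 0.25, 0.22, 0.30, 0.24, 0.23, 0.26, 0.29, 0.22, 0.25]
v2 = [0.18, 0.20, 0.19, 0.26, 0.21, 0.19, 0.22, 0.24, 0.18, 0.20]

deltas = compare_releases(v1, v2)
# Negative deltas mean the new release responds faster than the old one.
```

Tracking these deltas release over release is the "trend" evidence described above: it will not prove the system is fully tested, but it turns "performance testing" from an abstract argument into a concrete, comparable number.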