You're a dedicated performance tester on a project. Whether the performance requirements are given to you or you must come up with your own, you are in for some major challenges. If you're at a loss for how to come up with high-quality performance requirements, or even how to begin, read on.
In the case of predefined requirements, my experience has been mixed to poor. Performance requirements are difficult to determine, especially on new projects. They are usually based on guesswork and estimates that reflect fantasy more than reality or common sense, and they are often found in requirements documents written at a point in time when it isn't even clear how the project should work.
It takes time, patience and research to come up with figures and data that are sensible and meaningful. Most requirements are determined by stakeholders within minutes, and it shows in the lack of quality. Common obstacles are contradictions, gaps in definition, unquantifiable requirements, missing references to legal requirements, monitoring issues and a lack of understanding of how outside systems are to be dealt with. On top of all that, the language used to describe performance is fraught with misunderstandings and misinterpretations.
1. Don't copy requirements from a different source
One thing I can safely say is that you shouldn't copy anything from somewhere else. The search for (or validation of) requirements is a major step in getting to know your application and what is important to test.
2. Use common terminology
One of the first things I try to establish is a common understanding of the terms used to describe performance. What does performance mean in this project's context? What is stress? What is concurrency? What is a transaction? Agreeing on terms helps align mutual expectations. Don't be shy about starting discussions of requirements with a language-definition session: it is vital to defining requirements successfully. Put those definitions in your test plan, strategy or other documentation you generate, and go through this exercise for each project. Don't just copy your text, because the context changes and your use of language might not be the same.
3. Communicate with stakeholders
The next important step is to communicate with stakeholders and other oracles and ask them what they think performance is. If you already have requirements, take them along and let the stakeholders explain what the requirements mean to them; don't be surprised if interpretations differ wildly. Otherwise, ask questions like "What are the common business cases?", "What is the core functionality?", "What needs to be fast from a business perspective?" and "What must never fail?" Also ask what statistics are already known: the number of users, the number of users who might use the application at any point in time, or logs and usage patterns from previous projects. All these facts start to paint a picture that the requirements should reflect.
4. Research project history and architecture
Have a look at the project history and documents. Key things to examine are the infrastructure design and architecture documents; they can tell you what's involved. Talk to the architect and discuss why certain products were chosen and whether any decisions were based on performance.
Get the specifications for the hardware components and software used. Do some research on these to see whether they have known limitations and whether there are tuning guides; nearly every enterprise software application, and some hardware devices, come with one. Have a close look at network speeds, firewalls and load balancers and their theoretical maximum capacities. Then sit down with a calculator and work out whether any of these clash with the requirements you already have, or whether they place technical limits on your requirements.
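To make the "sit down with a calculator" step concrete, here is a minimal back-of-envelope sketch in Python. All figures (link speed, response size, user counts, the 75% headroom margin) are invented for illustration; substitute the numbers from your own specifications.

```python
# Hypothetical capacity check: does the expected peak load fit within
# the theoretical capacity of a network link? All numbers below are
# illustrative assumptions, not real measurements.

def peak_bandwidth_mbps(concurrent_users: int,
                        avg_response_kb: float,
                        requests_per_user_per_min: float) -> float:
    """Rough sustained bandwidth needed at peak, in megabits per second."""
    kb_per_second = concurrent_users * avg_response_kb * requests_per_user_per_min / 60
    return kb_per_second * 8 / 1000  # KB/s -> Mbit/s

link_capacity_mbps = 100  # assumed 100 Mbit/s link
needed = peak_bandwidth_mbps(concurrent_users=500,
                             avg_response_kb=250,
                             requests_per_user_per_min=6)
print(f"Peak demand: {needed:.1f} Mbit/s of {link_capacity_mbps} Mbit/s")
if needed > link_capacity_mbps * 0.75:  # keep some headroom
    print("Warning: this load may clash with the link capacity")
```

With these assumed figures the demand works out to the full link capacity, exactly the kind of clash between requirements and infrastructure that this exercise is meant to surface early.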
You should now have a huge amount of data from which to either start defining your requirements or add detail to the existing ones.
5. Write quantifiable, detailed requirements
A good requirement is quantifiable and defines at least the context, the expected throughput, the response time (preferably as a percentile), the maximum error rate, the sustained amount of time and whether the requirement relates to load, stress or performance. Repeat this until the stakeholders' expectations are sufficiently described by the requirements.
Make sure the context describes such things as functions executed, time of day (if things like batch processing, backups or other services can interrupt), environment, configuration or settings, number of users, scalability, expected maximum load of systems, and anything else that can influence your results.
A simple example for one requirement:
"The logon to the application must have a 90th percentile of no more than 3000ms when executing five concurrent logons. This must be sustainable for 15 minutes where the load on server resources may not be more than 75% on average."
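A requirement written this precisely can be checked mechanically. The sketch below, using only Python's standard library, evaluates the sample requirement against hypothetical measurements; a real test would pull the response times and CPU samples from your load-testing tool rather than hard-code them.

```python
# Check the sample logon requirement against made-up measurements.
# The response times and CPU figures are invented for illustration.
import statistics

logon_times_ms = [1200, 1450, 1800, 2100, 2300, 2500, 2700, 2900, 3100, 3400,
                  1300, 1600, 1900, 2200, 2400, 2600, 2800, 2950, 3050, 3200]
cpu_samples_pct = [55, 60, 70, 72, 68, 74, 71, 66]

# quantiles(n=10) returns the nine decile cut points; the last is the
# 90th percentile.
p90 = statistics.quantiles(logon_times_ms, n=10)[-1]
avg_cpu = statistics.mean(cpu_samples_pct)

print(f"90th percentile logon: {p90:.0f} ms (limit 3000 ms)")
print(f"Average server CPU:   {avg_cpu:.1f} % (limit 75 %)")
passed = p90 <= 3000 and avg_cpu <= 75
print("Requirement met" if passed else "Requirement NOT met")
```

Note that the requirement fails here on response time alone even though the CPU limit is met, which is exactly why each clause of a requirement needs to be checked separately.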
Requirements that say "All Web pages..." or "This application will..." should be treated with care: to prove such a requirement, the entire application would need to be tested. Since you are detailing a requirement, check whether you can make it more specific.
Unquantifiable requirements can also be troublesome. Examples are "must work faster than product X" or "must perform at least as well as the previous product." These requirements sound easy but can be hard to prove, especially when there are no reliable figures for the "other" product or the previous version.
6. Start with a baseline
As you can see, the above can get complex very quickly. This is the Achilles' heel of defining performance requirements early in a project: most of the details are still unknown. The effort required to write good, solid requirements can therefore be immense, raising the question of whether the effort invested in defining them is worth it.
I usually deal with medium-sized projects where budgets are tight, so I offer another option that I call performance investigation rather than performance testing. The difference is that the objective of a performance investigation is to find out what the performance is and define requirements from there. The point is that you cannot have requirements until you've tested something; the only work needed up front is to determine which parts of your application are performance-relevant.
One premise of this approach is that there must be a first iteration that has no performance requirements, or only rough performance expectations.
Once this iteration is done, the measured performance forms the baseline for future releases. From the baseline we can derive requirements, which we can extrapolate to the stakeholders' expectations for the final product. We can now plan much more easily and precisely.
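One possible way to turn a baseline into a requirement is to take a percentile from the first iteration's measurements and add headroom for future releases. In the sketch below, both the baseline figures and the 20% margin are invented assumptions, not recommendations.

```python
# Derive a requirement from baseline measurements of a first iteration.
# The baseline figures and the 20% headroom are illustrative only.
import statistics

baseline_ms = [850, 920, 1010, 1100, 1200, 980, 1050, 1150, 1300, 1250]

# 90th percentile of the baseline (last decile cut point)
baseline_p90 = statistics.quantiles(baseline_ms, n=10)[-1]
headroom = 1.2  # allow 20% regression before the requirement fails

requirement_ms = baseline_p90 * headroom
print(f"Baseline 90th percentile: {baseline_p90:.0f} ms")
print(f"Derived requirement:      {requirement_ms:.0f} ms")
```

The headroom factor is where the stakeholders' expectations come back in: a tight margin keeps future releases honest, while a generous one absorbs normal release-to-release variation.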
Even with this approach, defining requirements should still include all of the above, but it becomes much easier because there is now a point of reference that makes everything more tangible.
If you include some tests with users in this iteration, you can get feedback on what they expect and how the iteration measures up. This may influence the requirements, which in turn lowers the risk of failing acceptance.
The downside of this approach is that it opens the door to re-negotiation of project scope and cost; it can increase risk and will have an impact on the project timeline. The upside is that time is saved by not defining requirements up front and that the requirements generally become easier to define later.
7. Re-calibrate requirements based on your testing
Don't forget to re-calibrate your requirements and your testing against reality. Once real-life log data is available, spend a little effort comparing it to your tests. This further grounds the requirements in reality and can help you determine proper performance monitoring for production. Any future performance benchmarking exercises will benefit, and you may also be able to use production monitoring to assess your results.
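A simple way to make that comparison is to contrast summary statistics from the test run and from production logs. The figures below are invented, and extracting real response times would depend on your server's log format; the sketch only illustrates the comparison itself and an assumed 25% drift threshold.

```python
# Compare test results with production log data. All figures are
# invented; real numbers would come from parsed server logs.
import statistics

test_ms = [300, 350, 420, 500, 610, 330, 470, 540]  # from the load test
prod_ms = [450, 520, 610, 700, 830, 490, 660, 750]  # from production logs

test_median = statistics.median(test_ms)
prod_median = statistics.median(prod_ms)
drift = (prod_median - test_median) / test_median * 100

print(f"Test median: {test_median:.0f} ms, production median: {prod_median:.0f} ms")
print(f"Production is {drift:+.0f}% off the test baseline")
if abs(drift) > 25:  # assumed tolerance before re-calibrating
    print("Consider re-calibrating the load model and requirements")
```

A drift this large usually means the load model, not the application, is what needs revisiting first: production users rarely behave the way the test scripts assumed.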
This is a high-level text on performance requirements. I have made some assumptions and certainly left out areas of defining requirements, but the above has proven helpful for the majority of projects out there and will get you most of the way to what you need. I'd also love to hear from you with feedback and any further questions you might have.
Oliver Erlewein has been in testing for over a decade and is currently a test manager for DATACOM New Zealand. He specializes in performance testing and is active in the testing community, sharing his know-how and experience.