Frankly, there is no way to estimate this that is more accurate than testing the production environment itself.
One highly theoretical approach to sizing, which I have been involved with in the past, uses the concept of CPU cycles per action to calculate total performance capability. The idea is that each action on a site consumes a consistent number of CPU cycles in each tier. If you can isolate an action and record its cycle count, then extrapolate that across a set of common usage patterns, you arrive at the number of cycles needed for a given time frame. Theoretically, that cycle count can then be mapped to processor, RAM and other hardware constraints. In my opinion, the time and effort required to gather this level of detail is greater than simply testing the site.
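To make the arithmetic concrete, here is a minimal sketch of the cycles-per-action math. Every number in it -- the per-action cycle costs, the usage mix, the clock speed and the utilization target -- is a hypothetical placeholder, not a measurement from any real site:

```python
# Hypothetical illustration of cycles-per-action sizing.
# All figures below are invented for illustration.

# Measured CPU cycles consumed by each action on one tier (assumed values)
cycles_per_action = {
    "login": 40_000_000,
    "search": 120_000_000,
    "checkout": 250_000_000,
}

# Expected actions in the peak hour, from a common usage pattern (assumed)
peak_actions_per_hour = {
    "login": 5_000,
    "search": 20_000,
    "checkout": 1_000,
}

# Total cycles needed during the peak hour
total_cycles = sum(cycles_per_action[a] * peak_actions_per_hour[a]
                   for a in cycles_per_action)

# Map cycles to hardware: a 3 GHz core delivers 3e9 cycles per second,
# i.e. 3e9 * 3600 cycles per hour. Plan for headroom rather than 100% load.
cycles_per_core_hour = 3_000_000_000 * 3600
target_utilization = 0.6

cores_needed = total_cycles / (cycles_per_core_hour * target_utilization)
print(f"Peak-hour cycles: {total_cycles:.3e}")
print(f"Cores needed at 60% utilization: {cores_needed:.2f}")
```

The sketch shows why the approach is so labor-intensive in practice: every entry in both tables must be measured per tier and kept current, and the final mapping to RAM, disk and network is still unaccounted for.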
A second approach is to use a guesstimate: if your production site is 2x your test site, you can assume it will handle 2x the load. This, however, is also highly theoretical, because so many variables are at play. First of all, what does 2x mean? Twice the number of servers? Twice the RAM per server? Twice the CPUs? Twice as many rack units? Production and test sites can also differ in numerous subtle ways, including network subnet traffic, machine services (test machines sometimes run different services, such as an antivirus or firewall application), machine configuration and so on.
The best approach is to schedule and carry out a baseline test: run similar performance tests in your test environment and in your production environment. This can be inconvenient for the users and testers involved, but it is the best way to establish a multiplication factor. On a Web application I used to own, we did all of our performance testing between 11:30 p.m. and 5:00 a.m. (our lowest-traffic service window); our firewall served up an "Unavailable" message and all traffic was shunted away. If you can carry this out, and schedule it at regular intervals (say, every six or 12 months), it will help you understand the relationship between the two environments as well as how that relationship changes over time.
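The multiplication factor itself is simple arithmetic once both baselines exist. This sketch uses invented throughput numbers purely to show the mechanics:

```python
# Hypothetical sketch of deriving a multiplication factor from a baseline
# test run in both environments. All throughput figures are invented.

# Peak sustainable throughput (requests/sec) observed under the same
# performance test in each environment (assumed measurements)
test_env_rps = 450.0
prod_env_rps = 1_350.0

# The multiplication factor relates test-environment results to production
factor = prod_env_rps / test_env_rps
print(f"Multiplication factor: {factor:.2f}")  # prints 3.00

# A later test-environment result can then be projected to production
new_test_result_rps = 500.0
projected_prod_rps = new_test_result_rps * factor
print(f"Projected production capacity: {projected_prod_rps:.0f} rps")
```

Re-running the baseline at each scheduled interval and tracking how the factor drifts is what reveals whether the two environments are diverging over time.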