Ask the Expert

Mapping results in test environment to production environment

We are currently executing tests in an environment that is approximately 50% of the size of production. We need to map these test results to production. As an example, if one of the processes is returning an average response time of 3 seconds with average hit rates of 100 per hour, how will the same process behave in the production environment?


Frankly, there is no way of estimating this that is more accurate than testing the production environment itself. One highly theoretical approach to sizing, which I have been involved with in the past, uses the concept of CPU cycles per action to calculate total performance capability. The idea is that each action on a site consumes a consistent number of CPU cycles in each tier. If you can isolate an action and record its cycle count, then extrapolate that across a set of common usage patterns, you arrive at the number of cycles needed for a given time frame. Theoretically, that cycle count can then be mapped to processor, RAM and other hardware constraints. In my opinion, though, the time and effort required to gather this level of detail is greater than simply testing the site.
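To make the arithmetic concrete, here is a minimal sketch of the cycles-per-action calculation. Every number in it (clock speed, core count, per-tier cycle costs) is an illustrative assumption, not a measurement:

```python
# Hypothetical sketch of the cycles-per-action capacity estimate.
# All figures below are assumed for illustration, not measured values.

CPU_HZ = 2.4e9   # cycles per second for one core (assumed)
CORES = 8        # cores available in the tier (assumed)

# Assumed cycles consumed by one user action in each tier
cycles_per_action = {
    "web": 40e6,
    "app": 120e6,
    "db": 200e6,
}

total_cycles = sum(cycles_per_action.values())  # cycles per action

# Cycles available per hour across all cores
cycles_per_hour = CPU_HZ * CORES * 3600

# Theoretical ceiling on actions/hour -- ignores RAM, I/O and contention,
# which is exactly why this approach stays theoretical
max_actions_per_hour = cycles_per_hour / total_cycles
print(f"Theoretical max: {max_actions_per_hour:,.0f} actions/hour")
```

Even this toy version shows the catch: the estimate is only as good as the per-tier cycle measurements, which are expensive to gather accurately.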

A second approach is a guesstimate: if your production site is 2x your test site, assume it will handle 2x the load. This, however, is also highly theoretical, because there are so many variables at play. First of all, what does 2x mean? Twice the number of servers? Twice the RAM per server? Twice the CPUs? Twice as many rack units (U) occupied? Production and test sites can also differ in numerous subtle ways, including network subnet traffic, machine services (test machines sometimes run different services, such as an antivirus or firewall application), machine configuration, and so on.
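Applied to the numbers in the question, the guesstimate looks like this. The sketch below is the naive version of the math, and its assumptions (linear throughput scaling, unchanged response time) are exactly the ones that tend to break in practice:

```python
# Naive linear extrapolation from test to production -- a guesstimate only.
# scale_factor is the assumed "production is 2x test" claim from the text.

test_hit_rate = 100      # hits/hour observed in test (from the question)
test_avg_response = 3.0  # average response time in seconds (from the question)
scale_factor = 2.0       # assumed -- but 2x of what, exactly?

# Optimistic assumptions: throughput scales linearly with size, and
# response time is unaffected by the extra load
est_prod_hit_rate = test_hit_rate * scale_factor
est_prod_response = test_avg_response

print(f"Guesstimate: {est_prod_hit_rate:.0f} hits/hour "
      f"at ~{est_prod_response:.1f} s average response")
```

Neither assumption is safe: contention, caching behavior and configuration differences all bend the real curve away from linear.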

The best approach is to schedule and carry out a baseline test: run similar performance tests in your test environment and your production environment. This can be inconvenient for users and the testers involved, but it is the best way to establish a multiplication factor. On a Web application I used to own, we did all of our performance testing between 11:30 p.m. and 5:00 a.m. (our lowest-traffic service window); our firewall served up an Unavailable message and all traffic was shunted away. If you can carry this out, and schedule it at regular intervals (say, every six or 12 months), it will help you understand the relationship between the two sites as well as how that relationship changes over time.
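Once you have baseline numbers from both environments, turning them into a multiplication factor is simple arithmetic. The capacities below are hypothetical placeholders for whatever your own baseline runs measure:

```python
# Sketch: deriving an empirical multiplication factor from baseline tests
# run in both environments. Capacity figures are hypothetical.

# Max sustainable throughput (hits/hour) found by identical load tests
test_env_capacity = 5000
prod_env_capacity = 11500

# Empirical factor -- note it need not be 2.0 even if production
# is nominally "2x" the test environment
factor = prod_env_capacity / test_env_capacity

def project_to_production(test_hits_per_hour):
    """Project a test-environment throughput figure onto production."""
    return test_hits_per_hour * factor

print(f"Multiplication factor: {factor:.2f}")
print(f"100 hits/hour in test maps to roughly "
      f"{project_to_production(100):.0f} in production")
```

Re-running the baseline at each scheduled interval lets you track how the factor drifts as either environment changes.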

This was first published in April 2009
