Q

Mapping results in test environment to production environment

How do you map test results from an environment that is half the size of production onto the production environment? Learn techniques for estimating production response times.

We are currently executing tests in an environment that is approximately 50% of the size of production. We need to map these test results to production. As an example, if one of the processes is returning an average response time of 3 seconds with average hit rates of 100 per hour, how will the same process behave in the production environment?

Frankly, there is no way of estimating this that is more accurate than testing the production environment itself. One highly theoretical approach to sizing, which I have been involved with in the past, uses the concept of CPU cycles per action to calculate total performance capacity. The idea is that each action on a site consumes a consistent number of CPU cycles in each tier. If you can isolate an action, record its cycle count, and then extrapolate that across a set of common usage patterns, you can arrive at the number of cycles needed for a given time frame. Theoretically, that cycle count can be mapped to processor, RAM and other hardware constraints. In my opinion, the time and effort required to gather this level of detail is greater than simply testing the site.
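To make the arithmetic concrete, here is a minimal sketch of the cycles-per-action extrapolation in Python. The action names, per-tier cycle counts and usage pattern are hypothetical illustrative numbers, not measurements from any real system.

```python
# Hypothetical cycles-per-action capacity estimate (illustrative numbers only).

# CPU cycles consumed by one execution of each action, per tier: (web, app, db).
cycles_per_action = {
    "login":  (2e8, 5e8, 1e8),
    "search": (1e8, 8e8, 6e8),
    "report": (3e8, 2e9, 4e9),
}

# Expected executions of each action per hour in a common usage pattern.
hourly_usage = {"login": 500, "search": 2000, "report": 100}

# Total cycles per hour demanded from each tier.
tiers = ("web", "app", "db")
demand = [0.0, 0.0, 0.0]
for action, counts in cycles_per_action.items():
    for i, cycles in enumerate(counts):
        demand[i] += cycles * hourly_usage[action]

# A 3 GHz core supplies roughly 3e9 cycles/second = 1.08e13 cycles/hour.
cycles_per_core_hour = 3e9 * 3600

for tier, d in zip(tiers, demand):
    print(f"{tier}: {d:.2e} cycles/hour ~= {d / cycles_per_core_hour:.4f} cores")
```

Gathering real per-tier cycle counts for every action is exactly the level of detail that makes this approach more expensive than simply testing the site.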

A second approach is to use a guesstimate: if your production site is 2x your test site, you can assume it will handle 2x the load. This, however, is also highly theoretical, because there are so many variables at play. First of all, what does 2x mean? Twice the number of servers? Twice the RAM per server? Twice the CPUs? Twice as many rack units? Production sites and test sites can also have numerous subtle differences, including network subnet traffic, machine services (test machines are sometimes running different services, such as an antivirus or firewall application), machine configuration and so on. A sketch of this naive extrapolation follows.
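Here is what that guesstimate looks like applied to the numbers from the question (3-second average response at 100 hits per hour in an environment that is 50% of production). It is a sketch of the naive linear-scaling assumption only; the code makes the flawed assumption explicit rather than validating it.

```python
# Naive linear extrapolation (illustrative only): assumes capacity scales
# linearly with hardware size. As the text notes, this frequently fails
# in practice because "2x hardware" is ambiguous and environments differ.

test_env_fraction = 0.5   # test environment is ~50% the size of production
test_avg_response = 3.0   # seconds, measured in the test environment
test_hit_rate = 100       # hits/hour driven against the test environment

# Under the guesstimate, production should absorb 1/0.5 = 2x the load
# at roughly the same response time...
prod_equivalent_load = test_hit_rate / test_env_fraction

print(f"Guesstimate: production handles ~{prod_equivalent_load:.0f} hits/hour "
      f"at ~{test_avg_response:.1f} s average response")
# ...but nothing in this arithmetic accounts for the variables listed above.
```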

The best approach is to schedule and carry out a baseline test, where you run similar performance tests in both your test environment and your production environment. This can be inconvenient for users and for the testers involved, but it is the best way to establish a multiplication factor. On a Web application I used to own, we did all of our performance testing between 11:30 p.m. and 5:00 a.m. (our lowest-traffic service window); during those runs, our firewall served up an "Unavailable" message and all customer traffic was shunted away. If you can carry this out, and schedule it at regular intervals (say, every six or 12 months), it will help you understand the relationship between the two sites, as well as how that relationship changes over time.
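As a rough illustration of how a baseline yields a multiplication factor, here is a minimal sketch; the paired measurements are hypothetical, and a real baseline should cover several load levels, since the factor is rarely constant across load.

```python
# Deriving and applying a multiplication factor from a baseline test.
# All figures are hypothetical examples.

# Paired measurements at the same load levels: hits/hour -> avg response (seconds).
test_results = {100: 3.0, 200: 4.2, 400: 7.5}
prod_results = {100: 1.9, 200: 2.6, 400: 4.4}

# Per-load-level factor: how much faster production is than test.
factors = {load: test_results[load] / prod_results[load] for load in test_results}
avg_factor = sum(factors.values()) / len(factors)

print("factor per load level:", {k: round(v, 2) for k, v in factors.items()})
print(f"average multiplication factor: {avg_factor:.2f}")

# Applying it: a new 3.0 s test-environment result maps to an estimated
# production response time of roughly 3.0 / avg_factor seconds.
new_test_time = 3.0
print(f"estimated production response: ~{new_test_time / avg_factor:.1f} s")
```

Re-running the baseline at regular intervals lets you track whether the factor drifts as the two environments diverge.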

This was first published in April 2009
