
What to do when the test environment doesn't match production

There's no way to extrapolate performance results from a test environment to predict production performance. But software testing expert Scott Barber says there are other testing techniques you can employ to ensure high performance in production environments.

How can I predict the performance of an application in a production environment that doesn't match the test environment? Is there some kind of extrapolation I can do or formula I can apply?

There is no formula or methodology that is any more accurate than a guess if you are starting from load simulations. In fact, an educated guess is probably the most accurate prediction in this case. For all practical purposes, most organizations have no way to extrapolate performance results from a test environment to predict production performance. Even the best performance testers in the field refer to this kind of extrapolation as "black magic" unless they've been trained by Connie Smith, Ph.D., or Daniel Menasce, Ph.D.

That said, I am a proponent of not doing all of your performance testing in the production environment, because it is the most complex environment, which obscures bottlenecks and makes them more difficult to pinpoint and resolve. I recommend starting in the simplest, most isolated environment possible and adding components only once the performance of the previous configuration is understood. This progressive approach saves a tremendous amount of time and effort every time you detect a performance issue, because the bottleneck can only be in the component you just added.
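As a sketch of that progressive approach, the snippet below measures one isolated component at increasing concurrency levels, moving up only after recording the previous level, and flags the first level where latency degrades badly against the single-user baseline. The `component_under_test` stub, the concurrency steps, and the "latency doubled" threshold are all illustrative assumptions, not prescriptions.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def component_under_test():
    # Stand-in for a call to the one isolated component being tested
    # (e.g., a single service endpoint or database query).
    time.sleep(0.01)

def measure(concurrency, requests_per_worker=5):
    """Drive the component at a given concurrency; return average latency (s)."""
    latencies = []  # list.append is thread-safe in CPython

    def worker():
        for _ in range(requests_per_worker):
            start = time.perf_counter()
            component_under_test()
            latencies.append(time.perf_counter() - start)

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        for _ in range(concurrency):
            pool.submit(worker)
    return sum(latencies) / len(latencies)

# Step up the load only after the previous level is understood.
baseline = measure(1)
for concurrency in (2, 4, 8):
    avg = measure(concurrency)
    if avg > 2 * baseline:  # arbitrary threshold: average latency doubled
        print(f"Investigate concurrency {concurrency}: "
              f"{avg:.4f}s vs baseline {baseline:.4f}s")
        break
```

Because only one component is in play, any degradation the loop flags can be attributed to that component rather than untangled from a full production stack.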

Now, all this is not to say that I am against testing in the production environment, or an environment that mirrors production. What I am saying, however, is that it should account for something like 10% of your total test time, which often means it can actually be done in production during scheduled maintenance periods, thus saving the money of building a test environment that mirrors production and mitigating the risk of performance testing in production.

The fact is that extrapolation, no matter how scientific or accurate in a controlled, sterile environment, cannot account for someone installing a hard drive with a slower seek speed than planned for in the model, or for a mouse-chewed network cable, or for that one configuration setting that didn't get changed to allow the application to make use of the hyper-threading technology on the new hardware that the model had accounted for. Even Connie Smith and Daniel Menasce caution that their highly scientific and precise techniques are only as accurate as their models and that their predictions need to be validated via simulation before being viewed as anything other than best case scenarios.



Good tips. I had never thought of testing in production during scheduled maintenance times. As I think more about that, I don't think that it would work for us in most instances. Nevertheless, it's definitely something I'll keep in mind so that I can try to identify opportunities where it could help us.
"All models are wrong, but some models are useful." In the same way, a replica of an environment will still not be an exact copy, because you can't replicate the world. What's important is to simulate the risk areas.
Also, instead of extrapolating from the test environment, why not use monitoring in production to gather the actual usage statistics? Then you can extrapolate those toward approaching thresholds, and if your test environment isn't good enough to simulate such conditions, you have a backup in the production data.
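The commenter's idea of projecting from production monitoring data toward a threshold can be sketched as a simple headroom calculation. The function below, the metric names, and the linearity assumption are all illustrative: it assumes CPU utilization scales linearly with request rate, which real systems often violate near saturation, so the result is at best an optimistic upper bound.

```python
def projected_headroom(current_rps, cpu_utilization, saturation_threshold=0.8):
    """Naive linear projection from production monitoring data.

    Given an observed request rate (`current_rps`) at an observed CPU
    utilization (0..1), estimate how many additional requests per second
    fit before CPU reaches `saturation_threshold`. Assumes utilization
    grows linearly with load -- treat the result as an upper bound.
    """
    if not 0 < cpu_utilization <= 1:
        raise ValueError("cpu_utilization must be in (0, 1]")
    rps_at_threshold = current_rps * (saturation_threshold / cpu_utilization)
    return rps_at_threshold - current_rps

# Example: 120 req/s observed at 45% CPU, alerting threshold at 80% CPU.
print(projected_headroom(120, 0.45))  # roughly 93 req/s of headroom
```

A projection like this is a triage tool for deciding when to test or scale, not a substitute for the validation-by-simulation that the article recommends.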