Could you please tell me how to relate results from the performance test environment to the production environment? In other words, how do we scale findings from the performance environment to production? What factors should we keep in mind for the performance environment?
It can be both inaccurate and dangerous to compare performance results obtained in the test environment to the production environment. The two most likely differences between the environments are system architecture and volume of data. Other differences might include: class of machines (app and Web servers), load balancers, report servers, and network configurations. This is why making a comparison can be inaccurate. It becomes dangerous when production capacity planning is based on transaction timings obtained in an environment that may be very different.
Review the system diagram for both environments and see if there are additional differences you can identify. Be sure to communicate these differences clearly if anyone suggests using results from the test environment to imply the performance timings would be the same in production.
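One lightweight way to keep those differences visible is to record an environment snapshot alongside every set of test results and diff it against production. The sketch below is a minimal, hypothetical example (the field names and sample values are assumptions, not from any particular system):

```python
import json
import os
import platform


def environment_snapshot():
    """Collect basic host characteristics worth recording with every
    performance test run, so environment differences stay explicit."""
    return {
        "hostname": platform.node(),
        "os": platform.platform(),
        "cpu_count": os.cpu_count(),
    }


def diff_environments(test_env, prod_env):
    """Return {key: (test_value, prod_value)} for every key whose
    value differs between the two environment descriptions."""
    return {
        key: (test_env.get(key), prod_env.get(key))
        for key in set(test_env) | set(prod_env)
        if test_env.get(key) != prod_env.get(key)
    }


if __name__ == "__main__":
    # Hypothetical environment descriptions for illustration only.
    test = {"cpu_count": 4, "ram_gb": 16, "load_balancer": None}
    prod = {"cpu_count": 32, "ram_gb": 256, "load_balancer": "active pair"}
    print(json.dumps(diff_environments(test, prod), indent=2, default=str))
```

Attaching the diff to each test report makes it harder for anyone to quietly assume the two environments are equivalent.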
I advocate testing in production whenever possible. In order to execute performance tests in production, I've typically worked in the middle of the night -- from 2am to 5am, for example -- while a production outage is taken. I've also worked the middle of the night on holiday weekends to gain test time in production. If you can't execute tests in production and are left to execute performance tests in the test environment, then I recommend learning the performance behavior of your test environment and communicating test results in terms of performance characteristics rather than transaction timings.
Performance characteristics might include knowledge of CPU usage or performance degradations. For instance, you might discover that report performance exceeds the acceptable range when generating a report over some specified amount of data (such as two months of accounting numbers). Or you might learn that search performance begins to degrade once a certain number of users are logged into the system and a certain number are executing searches at the same time. This high-level information about overall performance characteristics can be helpful, but it won't provide performance timings that should be used to presume production will behave the same way.
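Finding a degradation point like the one described above is straightforward once you have response-time measurements at several load levels. A minimal sketch, using hypothetical measurements (the numbers and the three-second threshold are invented for illustration):

```python
def find_degradation_point(timings_by_load, acceptable_seconds):
    """Given median response times (seconds) keyed by concurrent-user
    count, return the smallest load at which the acceptable range is
    exceeded, or None if every measured load stays within range."""
    for load in sorted(timings_by_load):
        if timings_by_load[load] > acceptable_seconds:
            return load
    return None


if __name__ == "__main__":
    # Hypothetical: median search time per number of concurrent users.
    measured = {10: 0.8, 25: 1.1, 50: 1.9, 100: 4.7, 200: 12.3}
    print(find_degradation_point(measured, acceptable_seconds=3.0))  # -> 100
```

Reporting "search degrades somewhere between 50 and 100 concurrent users in the test environment" is exactly the kind of characteristic finding that travels better than raw transaction timings.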