Should any testing be performed post-production? If so, which types of tests and what are the risks?
Like many questions in testing, the answer begins with, “It depends.” For some applications and environments, post-production testing makes little sense. In some instances, it may not even be desirable. And in other circumstances, it may make perfect sense to test after the application moves into production.
Let’s consider some of those possibilities.
Applications that run in very specific environments are often developed and tested against emulators or simulators, or interface with them, which means the tests never exercise the actual application in its real context. You won’t know how the software truly behaves until it runs on, or interfaces with, the actual devices and environments.
Similarly, unless you can execute performance tests on the actual devices, with the actual configuration and actual users (not virtual users or emulators), the best you can reasonably do is establish models or baselines for performance. You can set reasonable expectations and model how the system should behave in given situations, but you cannot produce solid measurements of what the system actually does, only predictive models.
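One simple way to connect a lab-built baseline to production reality is to compare real-user timings against the model once the application is live. The sketch below is a minimal, hypothetical illustration of that idea; the baseline numbers, function name and the three-sigma threshold are all assumptions, not anything prescribed in the column.

```python
import statistics

def check_against_baseline(measured_ms, baseline_mean_ms, baseline_stdev_ms,
                           threshold=3.0):
    """Return True when production measurements deviate from the
    pre-production baseline model by more than `threshold` standard
    deviations -- a hint that the lab model no longer holds."""
    mean = statistics.mean(measured_ms)
    deviation = abs(mean - baseline_mean_ms) / baseline_stdev_ms
    return deviation > threshold

# Hypothetical lab baseline: 120 ms mean response time, 15 ms stdev.
production_samples = [200, 210, 220]  # real-user timings in milliseconds
print(check_against_baseline(production_samples, 120.0, 15.0))
```

The point is not the statistics; it is that the baseline only becomes a measurement once actual production data flows through it.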
Why do I say this? In both scenarios, even a rigorous test effort is likely to do little more than scratch the surface. We can give it a solid effort, but that will reveal only the problems that produce errors immediately.
As time goes on, in situations like those described above, I find myself accepting that some things simply cannot be tested meaningfully in a test lab. This is part of the appeal of crowdsourcing as a testing option. Still, if your applications do not lend themselves even to that, there is little choice but to “test in production.”
In many installations, “testing in production” means migrating the changes to the production environment and then carefully watching the system’s logs and performance monitors for spikes and unusual activity while the system is in use. For me, that careful watching is the most effective way to test in production.
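Watching logs for spikes can be as simple as bucketing error-level entries into time windows and flagging windows whose counts jump. The sketch below assumes a made-up log format (a leading epoch-seconds timestamp followed by the message) and an arbitrary threshold; real deployments would use their monitoring stack rather than a script like this.

```python
from collections import Counter
import re

# Assumed severity markers; adjust to whatever your logs actually emit.
ERROR_RE = re.compile(r"\b(ERROR|FATAL)\b")

def error_spikes(log_lines, window_size=60, spike_threshold=5):
    """Bucket error-level log lines into fixed time windows (seconds,
    parsed from a leading integer timestamp) and return the windows
    whose error count exceeds the threshold."""
    counts = Counter()
    for line in log_lines:
        ts_str, _, message = line.partition(" ")
        if ERROR_RE.search(message):
            counts[int(ts_str) // window_size] += 1
    return [w for w, c in sorted(counts.items()) if c > spike_threshold]

# Seven errors in the first minute, one in the second: only window 0 spikes.
sample = [f"{i} ERROR timeout" for i in range(7)] + ["61 ERROR timeout",
                                                     "62 INFO ok"]
print(error_spikes(sample))
```

A dashboard or alerting tool does the same thing at scale; the value is in defining, before the migration, what “unusual activity” will look like so you recognize it when it appears.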