Change is a constant in Web and mobile operating systems, and that poses an ongoing problem for quality assurance (QA) managers and analysts like myself. We always have to be proactive when testing, thinking ahead instead of reacting, to ensure that our Web and mobile applications can go live on their target operating systems on the intended release dates.
More and more corporate business and mobile services depend upon Web- and mobile-based operating systems. Meanwhile, the constant evolution of mobile and computer technologies calls for constant, unavoidable upgrades to operating systems, Web browsers and application interfaces -- all referred to in this article as "clients."
QA analysts who work at companies with very complex and dynamic client-dependent systems must be armed and ready to test these changing environments in beta prior to their official launches. More often than not, I've found, business executives are not ready or willing to take the appropriate steps needed to manage such change.
There are a slew of Web and mobile operating systems out there -- Apple's Mac OS X, Linux, the upcoming Google Chrome OS, etc. -- and no two of them work exactly alike. Companies need to use multiple test environments to ensure that their applications will work on upgraded devices and on diverse browser-based interfaces. Testing should be conducted against both 32-bit and 64-bit processors, and against memory, processor speed and storage performance.
User environments form a dynamic grid of test specifications, requiring the tester to cover a seemingly endless number of permutations. For this reason, investing in a solid virtual environment, or in a server with multiple hot-swappable partitions or drives, may be a viable solution for some businesses.
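To see how quickly those permutations multiply, consider a minimal sketch that enumerates candidate test environments from just three dimensions. The platform names and dimensions here are illustrative assumptions, not a real support matrix; a real list would come from your product's supported-platform requirements.

```python
from itertools import product

# Hypothetical example dimensions -- a real matrix would come from
# the project's own supported-platform list.
operating_systems = ["Windows XP", "Windows Vista", "Mac OS X", "Linux"]
browsers = ["Internet Explorer 8", "Firefox 3.5", "Safari 4", "Chrome"]
architectures = ["32-bit", "64-bit"]

# Every combination is a candidate test environment.
permutations = list(product(operating_systems, browsers, architectures))
print(len(permutations))  # prints 32: 4 * 4 * 2 environments from three dimensions
```

Add a few more dimensions -- OS service-pack levels, screen resolutions, locales -- and the count grows multiplicatively, which is exactly why virtualization or hot-swappable drives pay off.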
Having a test environment within easy reach helps QA analysts and QA management stay a step ahead by being able to anticipate future problems -- based on what the test specifications are -- and mock-test them before they happen. This reduces the risk of exposing end users to faulty hardware or software. The key is to always conduct more testing than is required, in more environments than required.
Customers want the comfort and safety of knowing that they can choose any environment at any time and the system will work. It is often good practice to create an environment-requirements matrix with the operating systems listed as columns and the browsers as rows. Plug in "X's" and color-code the matrix to reflect high- and low-impact test environments.

Example of a Test Environment Matrix.
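The matrix described above can be sketched in a few lines of code. The browser and OS names and the impact ratings below are assumptions for illustration only; "HI" and "lo" stand in for the high- and low-impact color coding, and "-" marks an untested or unsupported combination.

```python
# A minimal sketch of an environment-requirements matrix:
# operating systems as columns, browsers as rows, per the article.
browsers = ["Internet Explorer 8", "Firefox 3.5", "Safari 4"]
operating_systems = ["Windows XP", "Mac OS X", "Linux"]

# Hypothetical impact ratings: "HI" = high-impact (large user base),
# "lo" = low-impact; combinations not listed are untested/unsupported.
matrix = {
    ("Internet Explorer 8", "Windows XP"): "HI",
    ("Firefox 3.5", "Windows XP"): "HI",
    ("Firefox 3.5", "Linux"): "lo",
    ("Safari 4", "Mac OS X"): "HI",
}

# Print the matrix as a simple text table.
header = "Browser".ljust(22) + "".join(o.ljust(12) for o in operating_systems)
print(header)
for browser in browsers:
    row = browser.ljust(22)
    for os_name in operating_systems:
        row += matrix.get((browser, os_name), "-").ljust(12)
    print(row)
```

In practice a spreadsheet serves the same purpose; the point is that the high-impact cells tell you where to spend your testing budget first.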
It also makes sense to work closely with the R&D team at the operating system's parent company, keeping in regular contact whenever possible. Doing so will ease the process of testing the applications you intend to run on their system and help you become more knowledgeable about the changes your company needs to make to comply with the upgraded system's requirements, thereby reducing your share of the risk associated with the change.
If your company has a database containing user-growth and client-activity statistics, another good practice is to review it carefully prior to testing. This gives you an idea of the efficacy of the current system compared to the new one. And test not only functionality but also performance and security -- they are equally key components.
The extra work involved in being prepared to deal with change will pay off more often than not. Customers ultimately expect your software and services to provide the accurate information they need to run a business. Why lose customers over an issue that could have been avoided by being a little more comprehensive in testing the changing environments -- operating systems, Web browsers and client applications -- with which your product interfaces?
About the author: John Scarpino is director of quality assurance and a university instructor in Pittsburgh. You may contact him at Scarpino@RMU.edu.
This was first published in August 2009