As mobile devices become prevalent in the consumer and enterprise space, testing organizations are being asked to take on the mobile device testing challenge. Managing a mobile device testing program brings a unique set of challenges. We will identify these challenges and present approaches to addressing, or at least mitigating, the impact they have on your testing organization. The most significant challenges are:
- Number of mobile operating systems (OS) and OS versions.
- Number of mobile devices and device configurations.
- Dependencies on local in-network service providers.
- Enterprise-level security concerns.
- Deployment velocity (applications, OS’s, and devices).
While these challenges may seem overwhelming, there are steps a test manager can take to mitigate their impact. Often this will require a conversation with your enterprise partners and a pragmatic re-assessment of how you engage and consume your testing assets.
Number of operating systems: Set scope limits
The number of OS’s and OS versions continues to grow. As a test manager, you should lead a conversation within your organization and across your enterprise about which OS’s truly impact your user base and which versions of those OS’s are being used. This really becomes a risk/benefit analysis; a simple set of questions should provide the data required to make an informed decision:
- Which OS’s are being used?
- What percentage of the user base is using each OS?
- Which OS versions are being used?
- What percentage of the user base is using each OS version?
Focus your testing efforts on the OS’s and OS versions that represent the bulk of your user community. Even more important, formulate a policy on OS and OS version testing scope based on the current percentage of your user base. As an example: “Any OS or OS version that represents less than 5% of our user base shall be considered out-of-scope.” If these hard choices are not made, the testing challenge will continue to grow until the overall risk to the user and your enterprise becomes unacceptable.
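A scope policy like the one above can be expressed as a simple filter over usage-share data. The following sketch is illustrative only; the OS names, usage figures, and the 5% threshold are hypothetical examples, not real market data:

```python
# Illustrative sketch: apply an OS-version scope policy based on user-base share.
# All usage figures and the 5% threshold are hypothetical examples.
THRESHOLD = 0.05  # any OS version below 5% of the user base is out of scope

os_usage_share = {
    ("Android", "13"): 0.41,
    ("Android", "12"): 0.22,
    ("iOS", "17"): 0.28,
    ("iOS", "15"): 0.06,
    ("Android", "10"): 0.03,  # below threshold -> out of scope
}

def in_scope(usage_share, threshold=THRESHOLD):
    """Return the OS/version pairs that meet the scope policy."""
    return sorted(k for k, share in usage_share.items() if share >= threshold)

print(in_scope(os_usage_share))
```

The point of codifying the policy, even informally, is that the scope decision becomes mechanical and repeatable as the usage data changes from release to release.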
Number of mobile devices and device configurations: Set scope limits
The number and variety of mobile devices continues to grow. This presents the same type of challenge as the “number of operating systems” challenge presented earlier, with the additional burden of obtaining, securing, and managing the devices themselves. Once again, as the test manager you should lead a conversation within your organization and across your enterprise about which devices impact your user base and how much you are willing to invest in those devices. Your risk/benefit analysis breaks down to:
- Which devices are being used?
- What percentage of the user base does each device represent?
- How much does each device cost?
- How much are we willing to invest in a device?
Formulate a policy on devices and device testing scope based on current percentage of your user base. As an example: “Any device that represents less than 5% of our user base shall be considered out-of-scope.” You may also need to formulate a policy that addresses the actual cost of these devices.
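A combined device policy, user-base share plus a per-device cost ceiling, can be sketched the same way. Every device name, share, price, and limit below is a hypothetical example:

```python
# Illustrative sketch: device scope policy combining user-base share and unit cost.
# All device names, shares, prices, and limits are hypothetical examples.
MIN_SHARE = 0.05   # out of scope below 5% of the user base
MAX_COST = 1200.0  # out of scope above the per-device investment ceiling

devices = [
    {"name": "Phone A", "share": 0.35, "cost": 900.0},
    {"name": "Phone B", "share": 0.04, "cost": 700.0},    # share too low
    {"name": "Tablet C", "share": 0.12, "cost": 1500.0},  # too expensive
    {"name": "Phone D", "share": 0.20, "cost": 1100.0},
]

def device_in_scope(d, min_share=MIN_SHARE, max_cost=MAX_COST):
    """A device is in scope only if it is both widely used and affordable."""
    return d["share"] >= min_share and d["cost"] <= max_cost

in_scope_devices = [d["name"] for d in devices if device_in_scope(d)]
print(in_scope_devices)  # Phone A and Phone D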
Dependencies on local service providers: Identify and leverage 3rd party testing
One of the constraints around mobile applications is that they often need to be tested within the context of the user’s service providers. This means that test managers are often required to use on-shore resources to perform manual testing. These on-shore resources are expensive, difficult to retain, and often difficult to obtain. As the test manager you will have to constantly be right-sizing your team, seeking testing efficiencies (perhaps by introducing automation), and determining if some of your testing activities can be push to a near-shore or off-shore partner.
Enterprise level security concerns: When appropriate, challenge misconceptions
Enterprise level security concerns around mobile devices are still evolving. While there are certainly fundamental security issues in the mobile space, many of these policies could arguably be deemed insufficient, inappropriate, or simply outmoded. The test manager needs to determine which security protocols are impacting the overall testing investment, whether the security protocols really present a significant security risk, and if the test manager should challenge these security protocols because they represent a significant impact to the testing investment. One common security protocol is “thou shall not jailbreak” a smart phone to enable testing tools – but if you look at the users that actually leverage smart phone applications, a large percentage actually jailbreak their own phones.
Deployment velocity: Right-size the team, set appropriate test targets, implement test automation
Test managers in the smart phone testing space are constantly being challenged by the deployment velocity of applications, devices, and OS’s. In these circumstances, a test manager would normally look to enlarge their team, introduce test automation, reduce scope, or some combination of all three. We have already seen several ways to reduce scope based on a simple pragmatic approach based on user exposure. Enlarging your team is an option if you have the budget and executive backing to engage new testing staff. Finally you can explore available smart phone test automation solutions - in terms of a “smart phone test automation program,” the target needs to be high-yield elements of your application space that the current set of automation tools can address.
Smart phone testing challenges may seem overwhelming but the test manager can break the challenge down into a set of smaller challenges. These smaller challenges, as we illustrated above, can be addressed given an appropriate relationship with your enterprise partners and a rational fact-based assessment of the risks. Select sustainable testing approaches that mitigate these risks without incurring unreasonable costs.
About the author:
David W Johnson “DJ”, A Senior Test Architect with over 25 years of experience in Information Technology, across several business verticals, having played key roles in business analysis, software design, software development, testing, disaster recovery and post implementation support. Over the past 20 years developed specific expertise in testing and leading QA/Test team transformations – Delivered Test: Architectures, Strategies, Plans, Management, Functional Automation, Performance Automation, Mentoring Programs and Organizational Assessments.