
Mobile application testing: Cost-effective strategies

This second installment on mobile app testing will outline an initial cost-effective strategy for shrinking the problem space and establishing feedback mechanisms to change scope over time.

A budget-friendly five-point strategy to shrink the mobile problem space

When a large company takes on an initiative to deliver its first mobile app to its users in the field, the testing problem space can get large in a hurry. There are also testing considerations for the infrastructure supporting the mobile app. How do you manage this much testing demand with limited resources? This three-part series outlines the mountain of testing that was requested and the cost-effective, flexible strategy that was developed in the face of constrained resources.

In the first installment, we considered the problem space. This second installment will outline an initial cost-effective strategy for shrinking the problem space and establishing feedback mechanisms to change scope over time. The third part will build out a system to monitor test triggers for the application and its supporting systems.

In Part 1 we looked at the problem space of supporting "all mobile devices" for any credentialed user that wanted to access the company's nifty new Web application, even though it was initially designed and tested on just one platform. (The budget, it turns out, was designed for a single platform as well.)

Testing "all mobile devices" was a non-starter. But we're going to test as much as we can with the time and resources we do have. Our challenge was to draw a box around what to test and when to test it with a focus on keeping the size of the effort as small as possible. The question was if there was a way to do that while still delivering value.

To reduce the problem space to a manageable size and give ourselves room to respond to actual use in the field, we developed a five-point strategy based on these principles:

  • A "Consumerized" IT model
  • Designations for "Certified" and "Supported" environments
  • Testing scope that starts small (only growing as needed over time)
  • A list of specific thresholds and events that trigger testing
  • A quarterly review process to adjust the strategy as needed

Let's look at each of these in turn.

A "Consumerized" IT model

While I was discussing the problem with co-worker Mike Kelly, he suggested that one good way to set a boundary on the problem space would be to establish what he labeled a "consumerized" model. The model called for eliminating combinatorial complexity in favor of testing latest-version combinations only.

For example, the Web application would be tested when a new version of iOS or iPad hardware was released. When a new version of the Web app's code was released, it would be tested on the latest hardware and OS, not on prior versions of the iPad hardware or OS. This approach shrinks the problem space considerably.
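To make the model concrete, here is a minimal sketch in Python of how a consumerized test matrix stays small. The device names, version strings and trigger types are hypothetical examples, not our actual inventory:

# Illustrative sketch of a "consumerized" test matrix.
# Device names and versions are hypothetical.

LATEST = {
    "hardware": "iPad (3rd gen)",
    "os": "iOS 5.1",
    "app": "WebApp 2.4",
}

def consumerized_test_matrix(changed, new_value):
    """Return the single combination to test for a given change."""
    combo = dict(LATEST)          # start from the latest of everything
    combo[changed] = new_value    # swap in the new release under test
    return [combo]                # one combination, not a cross-product

# A new OS release is tested only on the latest hardware with the latest app:
print(consumerized_test_matrix("os", "iOS 5.2"))

The point is that each release pairs only the changed component with the latest of everything else, so the number of combinations grows linearly with releases rather than multiplying.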

Designations for "certified" and "supported" environments

We still needed to somehow allow for the untested combinations, however. We didn't want to ignore issues reported in the field because they happened in a device/OS combination we hadn't tested, particularly when the initial mandate was to support all mobile devices.

Here we decided to incorporate a concept into our strategy that I'd cribbed from my brother Jim Grey (a test manager himself): the concept of "certified" and "supported" combinations. Simply put, things we'd tested would be considered certified, while everything else would be supported. "Supported" in our context meant that we would log issues people reported from the field on non-certified combinations or devices. Those issues would be put into a backlog of fixes and enhancement requests, and given consideration as scope was set for each release.

Note that designating untested combinations as still supported accomplishes two things:

  1. It keeps users from being totally out in the cold if they happen to be running a non-certified environment. While not guaranteeing issue resolution, it still gives them a chance to get help.
  2. More importantly, it gives us feedback on actual usage in the live environment that might otherwise be lost. If there were a groundswell of issues reported from a combination we hadn't tested, or even from a platform we hadn't tested at all (e.g., a tablet from a different manufacturer), that would be feedback we could incorporate into decisions to change or grow the list of tested devices and combinations.

As mentioned in Part 1, this was supportable in the context of our project and the Web application in particular. See that article's sidebar, "Risk and Mission Criticality When Setting Device Scope," for more.
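As an illustration of how that triage might work in practice, here is a small sketch; the certified list and the routing labels are placeholders rather than our actual tooling:

# Hypothetical triage of field-reported issues by certified vs. supported
# environment; the certified list and routing labels are illustrative only.

CERTIFIED = {
    ("iPad", "iOS 5.1"),
    ("iPhone", "iOS 5.1"),
}

def triage_issue(device, os_version):
    """Route a field-reported issue by whether its environment was tested."""
    if (device, os_version) in CERTIFIED:
        # Certified: we tested this combination, so investigate it as a
        # defect against the current release.
        return "investigate"
    # Supported: log it to the backlog of fixes and enhancement requests,
    # to be weighed as scope is set for each release.
    return "backlog"

print(triage_issue("iPad", "iOS 5.1"))          # investigate
print(triage_issue("GalaxyTab", "Android 3.2")) # backlog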

Testing scope that starts small (only growing as needed over time)

For our initial project, the only devices and OS versions in scope were the latest iPad and iPhone running the latest version of iOS. It was a reasonable starting point for our environment, and it met the requirement of getting a testing model set up in an inexpensive and reasonably quick fashion.

It would not end there, however. To keep the feedback flowing, we planned additional mechanisms to let us know when we needed to grow the scope of testing. One proactive measure was reviewing Web server logs to determine which devices were actually using the site, and in what numbers. This would give us a more complete picture by showing us device usage trends as well as a measure of success with non-certified combinations.

For example, if we saw heavy usage from a device we hadn't certified but had no complaints logged against it, that would indicate we already had reasonable compatibility (or perhaps incredibly fault-tolerant users). That would give us data to weigh when revisiting the strategy: device usage statistics and reported issues could be viewed together to make testing scope decisions.
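As a rough sketch of that log review, a simple tally of requests per device family is enough to spot a groundswell of uncertified usage. The log path and user-agent patterns below are placeholders, and real user-agent parsing is messier than this:

import re
from collections import Counter

# Placeholder patterns; a real user-agent parser or analytics tool
# would be more robust.
DEVICE_PATTERNS = {
    "iPad": re.compile(r"iPad"),
    "iPhone": re.compile(r"iPhone"),
    "Android": re.compile(r"Android"),
}

def device_usage(log_path):
    """Count requests per device family in a Web server access log."""
    counts = Counter()
    with open(log_path) as log:
        for line in log:
            for device, pattern in DEVICE_PATTERNS.items():
                if pattern.search(line):
                    counts[device] += 1
                    break
    return counts

# Heavy traffic from a device we never certified, with few complaints logged
# against it, suggests compatibility there is already reasonable.
print(device_usage("access.log").most_common())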

A list of specific thresholds and events that trigger testing

Instead of running tests off ad-hoc requests, we would manage the process by establishing predefined thresholds that trigger a test cycle. We created a risk-adjusted list to start, and would adapt it based on feedback over time.

How was the list risk-adjusted? One example was the decision to not test every new OS release for iOS devices. Heuristically we felt that only full releases (1.x to 2.0) and "dot" releases (1.1 to 1.2) probably presented enough risk to merit a test cycle. We removed "dot-dot" releases (e.g., an update from 1.1.1 to 1.1.2) from scope to start. With feedback mechanisms in place, we were prepared to adjust that standard over time as we accumulated data. This same approach of limiting the changes and updates that would trigger testing was applied across the scope of the Web application, devices, and supporting systems.
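That threshold is simple enough to express as a sketch; the version strings below are illustrative, and the rule itself was only our starting point:

# Sketch of the initial OS-release trigger rule: full releases (1.x to 2.0)
# and "dot" releases (1.1 to 1.2) trigger a test cycle; "dot-dot" releases
# (1.1.1 to 1.1.2) do not. The rule is adjustable as field feedback accumulates.

def triggers_test_cycle(current, released):
    """Return True if the new OS version should trigger a test cycle."""
    cur = current.split(".")
    new = released.split(".")
    # Compare only the major and minor components; ignore anything deeper.
    return cur[:2] != new[:2]

print(triggers_test_cycle("5.1", "6.0"))      # True  -- full release
print(triggers_test_cycle("5.1", "5.2"))      # True  -- dot release
print(triggers_test_cycle("5.1.1", "5.1.2"))  # False -- dot-dot release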

This is a cost-effective approach in two ways. First, it saves the cost of reflexively running a test cycle for every release without understanding the value delivered. Second, it eliminates the time needed to investigate and perform a risk analysis as each small release comes out. (There will be more discussion of triggers and how they work in Part 3.)

Quarterly review process to adjust the strategy

We deliberately started with a very lean strategy. To be truly effective at delivering a reliable solution, and to avoid the opposing traps of management-by-crisis and perma-strategy, we built a quarterly review process into the strategy. All of the feedback from the mechanisms described earlier could be viewed holistically with key stakeholders present, so the business could make decisions about the mix of testing and resource usage, adjusting testing scope and managing costs based on current conditions.

Our five-point, high-level strategy has started to turn our insurmountable testing-on-a-budget problem into something much more manageable, but we've only looked at mobile devices and their operating systems. We still have to consider the supporting systems for the application itself and build out a system to monitor test triggers. We'll take a closer look at those in Part 3.

 
