The mobile problem space is budget-unfriendly
When a large company takes on an initiative to deliver its first mobile app to its users in the field, the testing problem space can get large in a hurry. There are also testing considerations for the infrastructure supporting the mobile app. How do you manage this much testing demand with limited resources? This three-part series outlines the mountain of testing that was requested and the cost-effective, flexible strategy that was developed in the face of constrained resources.
In this first installment, we consider the problem space. The second and third installments will outline an initial cost-effective strategy and build out a system to monitor test triggers for the application and its supporting systems.
Tablet computing is getting a lot of ink these days, and businesses everywhere are exploring ways to get value from the platform. One recent client took the plunge and developed a Web application specifically for Apple's iPad, outfitting field reps with the device to take advantage of it. The productivity lift for the client-facing employees was considerable.
In fact, the initial launch may have proved to be a little too successful. Seeing the benefit the Web application provided, an executive decision was made to authorize access from all mobile devices. A request came down to the test organization to make sure the web app would work on all mobile platforms so that company employees could access it even if they weren't in the group that received an iPad.
Stop me if you've heard this one, but this mandate had not been factored into the budget and resources for the original tablet project. Development and testing on other mobile devices and platforms had not been scoped or planned. Though it was not part of the initial application build and test cycle, we were tasked with figuring out how to test and support "all mobile devices" with one eye on costs.
Risk in the mobile space
"All mobile devices" covers a lot of territory in the mobile space today. While a relatively small number of brands are at the top of the list when it comes to mobile web, the number of mobile devices that are web-enabled is huge and grows constantly.
Based on anecdotal client usage and the relative newness of the decree to support all mobile devices, we quickly determined that "all" in our context meant Android phones, iPhone, and possibly BlackBerry to start. The client maintained a list of authorized BlackBerry models and versions that could be ordered through the company for employee use in the field, which promised to simplify our research. The good news was short-lived, though. Even covering "only" Android, iPhone, and a subset of BlackBerrys still created a huge problem space with plenty of risks. A brief list includes:
- Different physical devices: BlackBerry and Android have numerous manufacturers and different physical hardware
- User input: Touch screen or touch-pad? Virtual keyboard or physical keyboard?
- Screen size and resolution: This varies widely across devices. Will it impact the usability of the web application?
- Frequency of hardware updates: On one side are Apple's roughly annual hardware refreshes; on the other are Android and RIM, with hardware releases throughout the year.
- WiFi: Support for WiFi is common on newer devices, but some of your users will be hanging on to older devices. Are slower cellular speeds supported?
- Age: Where do you draw the line on support for older devices still in circulation?
- OS: Device operating systems vary widely: different cell carriers get different versions of essentially the same operating system, tweaked according to the needs or policies of the carrier and released on varying schedules. And how many OS versions back will need to be tested (major and minor releases)?
- OS/Device compatibility: Eventually, a device can't support the latest shipping version of an OS. But while they still do, how do you deal with testing each device with every possible supported OS version each test cycle?
- Browsers: Even if you only consider the native browsers, there are differences in the way they render pages. The risk increases if you include browsers that can be added to the device by the user.
- Frequency of software updates: Updates every month or two are not uncommon on iPhone and Android.
- Multi-tasking: How does the application or web app respond if suspended?
- Interrupts: How does the application or web app respond to being suspended or closed when a phone call comes in? Or when the OS powers itself off when the battery is too low? How about a manual power-off or restart? How about modal dialog interrupts from other system processes or applications? What if a background process or application temporarily hangs the system?
- Emulators: Are emulators "real enough" for testing purposes to eliminate some physical devices? Or do they make the testing space bigger by requiring some testing duplication to verify behaviors?
- Speed: If WiFi is unavailable, is the cellular network fast enough to move files or data of the size you're considering? Do you even support this?
- Network coverage: What happens when coverage switches to a slower, older network mid-process? Or drops completely?
Wow. It just screams testing-on-a-budget, doesn't it?
Even with our scenario's limited "all" of iOS, Android, and possibly RIM devices, the combinatorial explosion of devices, OSs, browsers, and networks makes anything like "full" coverage an almost insurmountable challenge. Just building a lab with all the devices (or emulators, where reasonable to use them) and maintaining it would be beyond our limited resources.
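To get a feel for that explosion, a quick back-of-the-envelope sketch helps. The counts below are hypothetical, not the client's actual inventory; the point is only how fast the full cross-product of the dimensions from the risk list above grows:

```python
# Back-of-the-envelope test matrix sketch. All dimension sizes here are
# assumptions for illustration, not real device inventories.
from itertools import product

dimensions = {
    # 2 Apple devices + 8 hypothetical Android models + 4 BlackBerry models
    "device": ["iPhone", "iPad"]
              + [f"android_model_{i}" for i in range(8)]
              + [f"blackberry_model_{i}" for i in range(4)],
    "os_version": [f"v{i}" for i in range(5)],   # current release + 4 prior
    "browser": ["native", "user_installed"],
    "network": ["wifi", "3g", "2g"],
}

# Full cross-product: every device with every OS, browser, and network.
full_matrix = list(product(*dimensions.values()))
print(len(full_matrix))  # 14 * 5 * 2 * 3 = 420 combinations
```

Even these modest made-up numbers yield 420 configurations per test cycle, before factoring in interrupts, input methods, or screen sizes, and many device/OS pairs in the cross-product don't even exist in the real world. That is the gap a reduction strategy has to close.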
In Part 2 we'll develop a strategy for reducing the problem space to a manageable, economical size and give ourselves room to respond to actual use in the field.