It happens all the time with mobile apps. One minute they're up; the next, they're down.
As testers, we can't prevent the Wi-Fi connections our apps depend on from going down. But we can take steps to define how the app should behave when the connection gets flaky, or drops off altogether.
What happens if the mobile user is in a coffee shop and steps away? What if the user is on a 3G network and switches cell towers, goes roaming or gets out of range? Answering these questions turns out to be non-trivial.
That is exactly what I intend to do in this article.
If I move my device beyond the coffee shop's Wi-Fi connection just as I click the submit button, it is possible the message I'm sending will never get to the server (and my book order, for example, will not go through). If the message does get to the server, it's possible I won't get a response.
Most of the time, our on-site testing efforts assume that the Wi-Fi connection is up. And that assumption leads to the first problem that needs to be solved: Your team may not have defined what the software should do when the connection goes down.
Step one: Just test it
Last week, I put an Android phone in one hand, an iPod in the other, and I literally walked around the building, looking for a wireless dead spot. We needed the connection to die completely, with no partial signal, so I picked an iPod on a Wi-Fi connection, not an iPhone. Eventually I found that the elevator made a decent Faraday cage, at least when the doors were closed. My test plan went like this: I opened the application, walked into the elevator, rode it to the basement, then closed the doors again and tried to use the application. The new feature I was testing was a special offline mode. Designed to recognize when the application loses a connection, it collects server requests, resending them when the connection comes back. (The application has significant offline functionality.)
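The offline mode described above can be sketched roughly like this. This is my own illustration, not the app's actual code; the names OfflineQueue, send and on_reconnect are invented, and the "transport" is whatever function actually puts a request on the wire:

```python
import collections

class OfflineQueue:
    """Sketch of an offline mode: hold server requests while the
    connection is down, then replay them in order when it returns."""

    def __init__(self, transport):
        self.transport = transport           # callable that sends one request
        self.pending = collections.deque()   # requests waiting for a connection

    def send(self, request):
        try:
            return self.transport(request)
        except ConnectionError:
            # Connection lost: queue the request instead of losing it.
            self.pending.append(request)
            return None

    def on_reconnect(self):
        # Replay queued requests in order; stop if the connection drops again.
        while self.pending:
            request = self.pending[0]
            try:
                self.transport(request)
            except ConnectionError:
                break
            self.pending.popleft()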
Step two: Simulate a weak network
In addition to finding out what happened when the app lost Wi-Fi connectivity, I wanted to test low-bandwidth and dropped-packet situations. My first experiment was on a Macintosh desktop, limiting the bandwidth using a built-in operating system firewall called IPFW; it turns out you can use it to reduce bandwidth and drop packets. From there, I used the iPhone simulator that comes with XCode, Apple's free developer toolset. (Simulators for the BlackBerry, Android and other devices are only a Google search away.)
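If you can't throttle at the operating-system level, the same idea can be approximated inside automated tests by wrapping the transport with a fault injector. This is a sketch under my own assumptions -- make_flaky, the drop rate and the delay range are invented for illustration:

```python
import random
import time

def make_flaky(transport, drop_rate=0.2, max_delay=1.5, seed=None):
    """Wrap a transport so some calls fail outright and the rest are
    slowed down, roughly imitating a weak network in automated tests."""
    rng = random.Random(seed)  # seed it for reproducible test runs

    def flaky(request):
        if rng.random() < drop_rate:
            raise ConnectionError("simulated dropped packet")
        time.sleep(rng.uniform(0, max_delay))  # simulated latency
        return transport(request)

    return flaky
```

Setting drop_rate to 1.0 gives you the dead-elevator scenario on demand, without leaving your desk.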
Testing from a desktop provides a fair bit of power, but nothing beats a real device. So I plugged the iPhone in while running XCode; XCode immediately popped up the organizer window and asked if I wanted to install XCode developer tools. Once I clicked yes and went into settings for the iPhone, I saw a new option called Developer, with a sub-option called Network Link Conditioner.
Most of our testing was under the Very Bad Network setting, where our application did surprisingly well. Note: These tools require iOS 6 and OSX 10.8, the newest Apple operating system versions.
Before this work, the team simulated network loss by switching on airplane mode, which, it turns out, sends an entirely different set of codes to the browser to indicate that the connection is down. If your team hasn't tested how the app performs in airplane mode, it's a simple, cheap test -- and it might be insightful.
Step three: Talk to management
The result of all this testing was mixed news about the status of the software. Instead of filing a lot of bugs that might not ever get fixed (or that executives might decide are expected behavior), we scheduled a meeting with management to talk about the status. I brought my bullet-pointed list of issues, which was modest. The conversation brought up a new question: Did these changes improve offline behavior at all from the previous build?
Within a few hours of testing production and the test environment side by side (and a lot of time in the elevator), we had an answer.
My point here is that setting expectations about the application performance under network stress is multidimensional. A right answer probably involves tradeoffs in time, money, architecture and risk. For example, in our testing the Chrome browser recovered when the network was up and down. Instead of making massive investments in infrastructure, we decided to offer some advice: If your network is a problem on the desktop, consider the Chrome browser.
Instead of requirements, we started with "desirements" and a conversation. That's a perfectly valid way to make an investment decision -- and to hit deadlines as well.
Step four: Miles to go before we sleep
There's plenty more to talk about with regard to connectivity; here are a few more tests to try. First, use the application continuously (and also with long pauses) while a vehicle is moving, which will force handoffs between cell towers. For a lightweight approach, move between wireless access points in a large building.
Another test is to use a network connection that goes through a proxy -- that is, create a situation where, when the Internet connection goes down, your device still connects to the proxy (which is essentially a bridge to the Internet). In that case, the application may start to receive 500-series error messages, which are different from timeouts and regular failures.
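Because timeouts, dead connections and 500-series responses usually deserve different handling, a test harness can classify the failure mode before asserting on app behavior. A sketch, with classify_failure and the category names being my own invention:

```python
def classify_failure(fetch, request):
    """Run one request and report which failure mode occurred.
    A 5xx status means something answered (perhaps only the proxy);
    a timeout or connection error means nothing answered at all."""
    try:
        status = fetch(request)   # fetch returns an HTTP status code
    except TimeoutError:
        return "timeout"
    except ConnectionError:
        return "no-connection"
    if 500 <= status <= 599:
        return "server-error"     # e.g. a 502 or 504 from the proxy
    return "ok"
```

The point is that "the network is down" is not one condition: an app that handles a timeout gracefully may still do something ugly with a 502 from the proxy.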
Finally, when the programmers ask for network status from the browser (and they will), you can get that information from both an iOS 6 device and an Android device; the Android debugger even works on Windows. With these tools you can get a quick visual on which requests timed out, which got an error and what the error code was. For still more ideas, you might consider getting a copy of Jon Kohl's Tap Into Mobile Application Testing, which has an entire chapter on connectivity.
The challenge with mobile app connectivity, as you can probably tell by now, is not finding test ideas, but knowing which tests to run first, when to stop and how to decide whether good enough is good enough.