Learn what to test in a mobile application

A lot goes into making sure an end user can interact with a mobile app seamlessly. Here are four ways to make sure a mobile app functions properly, no matter the code or device challenges.

Users expect more from their mobile applications than they did in the past. Mobile apps are the preferred interface for a broad user base, and businesses are investing in them accordingly.

The default test mode for mobile apps has moved from browser to native, and back ends have shifted to microservices. Cellular network coverage has expanded, and diverse tools now compile and deliver applications to both the Android and iOS operating systems. Even the ways in which applications break have changed across generations of mobile computing. And when the types of defects change, it's important to reconsider what to test in a mobile application.

In some respects, feature testing is the same on mobile applications as it is for those on a desktop. Testers create a list of functions, weave them into realistic scenarios, execute them and report results. But mobile applications are interconnected and prone to failure in unique ways.

Testers must perform effective QA while navigating mobile applications' unique technological and usability factors. Learn what to test in a mobile application, how to monitor and respond to defects, how to choose which tests to script and more.

1. Test component reliability

Mobile apps don't exist in closed ecosystems. An app, for example, will call microservices that validate login credentials. Then, those microservices pass the request on to a subsystem, such as catalog, inventory or checkout. QA professionals can vet each request independently with unit tests or service-level tests. If individual dependencies are slow or unreliable, the system will be too. So, first, check the components to make sure they're reliable, as in the sketch below.
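
For example, a service-level test can call one dependency directly and assert on both correctness and speed. The following Python sketch is hypothetical -- the endpoint, payload and one-second latency budget are stand-ins for whatever the real service's contract specifies:

    # Minimal service-level check for one dependency in the chain.
    # The URL, credentials and latency budget below are hypothetical.
    import time

    import requests

    def test_login_service_responds_quickly():
        start = time.monotonic()
        response = requests.post(
            "https://api.example.com/auth/login",
            json={"username": "qa-user", "password": "not-a-real-secret"},
            timeout=5,
        )
        elapsed = time.monotonic() - start

        assert response.status_code == 200  # the service answers
        assert "token" in response.json()   # and returns a credential
        assert elapsed < 1.0                # within a one-second budget

Run a check like this for each dependency in the chain before you blame the app itself.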

To ensure component reliability, turn to traditional software development methods like code coverage and inspection. Put together a map that illustrates who owns each piece of an application, to diagnose and debug problems more easily.

Sometimes a development team believes the software behaves correctly, but another team says it is wrong. Contract testing, or consumer-driven microservices testing, solves this problem; the method uses the way a system calls a service to create tests for the provider. Contract tests can ensure that the function signatures match. When these tests run continuously in a CI/CD loop, they catch problems early enough that it's difficult for a changed microservice to break production. Versioning microservices is another way to reduce this risk.
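
Frameworks such as Pact formalize consumer-driven contracts, but the core idea fits in a few lines. In this hypothetical Python sketch, the consumer team records the fields and types the mobile app actually reads, and the provider's CI replays the check on every build; the endpoint and field names are invented for illustration:

    # Hand-rolled consumer-driven contract check. The mobile app (the
    # consumer) declares the response shape it depends on; the provider
    # runs this test in CI so any breaking change fails the build.
    import requests

    # Fields and types the app actually reads from the catalog service.
    CONSUMER_CONTRACT = {
        "id": int,
        "name": str,
        "price": float,
        "in_stock": bool,
    }

    def test_catalog_honors_consumer_contract():
        response = requests.get(
            "https://api.example.com/catalog/items/42", timeout=5
        )
        assert response.status_code == 200
        item = response.json()
        for field, expected_type in CONSUMER_CONTRACT.items():
            assert field in item, f"provider dropped field: {field}"
            assert isinstance(item[field], expected_type), f"type changed: {field}"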

After you improve the reliability of each component, test the app on the device as a complete system.

2. Mobile failure modes

Mobile devices present different issues than desktop computers and laptops do. For example, tilting a mobile device could cause the app to render in landscape orientation and look odd -- something that won't happen on a laptop. A user can briefly lose the network connection, which causes state problems. And, in some cases, notifications from other applications can interrupt the app. Anyone on a mobile device could experience these issues during everyday use.

These problems might be impossible to simulate with a test automation tool, and automated mobile test scripts don't offer enough value to justify the time necessary to write them for every possible condition. Testers can be more successful if they follow the 80/20 rule: assume 80% of the failures stem from 20% of the test cases, and script that 20%. When those scripts break, something is likely genuinely broken in the application.

Check for these kinds of issues when the team rewrites the UI or brings in a new GUI library or component. Test the software as a system when it first comes together and, before major releases, under challenging conditions.

The first few times QA professionals field test an app -- i.e., take a mobile device on a long car ride, or swap between cellular data and Wi-Fi -- it might take a few days. After those first few all-out explorations of usability and functionality, however, testing becomes more of a maintenance effort. This type of failure-mode testing can occur during regular shakedown or regression testing. Mature teams might not need dedicated shakedown or regression passes at all, and can instead look for ways to test continuously, perhaps with canary testing or feature flags.

Canary tests deploy an update to a small segment of users to see how it performs in real conditions. Feature flags are written into the software code, enabling teams to turn a feature on or off as needed. However, both canary testing and feature flags present challenges. For example, the app change must deploy to a large enough segment of the user population to generate meaningful real-world results. Those users must also provide feedback for the team to act on. Otherwise, canary testing and feature flags just delay the full production rollout.
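
To make the mechanism concrete, here is a minimal feature-flag gate in Python. Real teams usually pull flag state from a service or config endpoint rather than a hard-coded dictionary; the flag name and rollout percentage are illustrative:

    # A minimal feature flag with a percentage rollout, which doubles as
    # a crude canary: a stable 10% slice of users sees the new flow.
    # Flag names, storage and percentages here are assumptions.
    import hashlib

    FLAGS = {"new_checkout_flow": {"enabled": True, "rollout_percent": 10}}

    def flag_is_on(flag_name: str, user_id: str) -> bool:
        flag = FLAGS.get(flag_name)
        if not flag or not flag["enabled"]:
            return False
        # Hash the user ID into a stable bucket from 0-99 so the same
        # user always gets the same experience during the rollout.
        bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
        return bucket < flag["rollout_percent"]

    user = "user-8675309"
    flow = "new" if flag_is_on("new_checkout_flow", user) else "current"
    print(f"{user} gets the {flow} checkout flow")

Turning the rollout percentage up or down changes the canary population without a redeploy.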

3. App and device simulation

Once you identify what to test in a mobile application or device, assemble the tools to help you do so. Tooling can help testers mimic processes that are hard to reproduce by hand. Tools can, for example, simulate a slow network, make the device appear to be in a different location for cell service, inject a different value into the GPS or introduce packet loss on the network.
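
As a rough illustration of network-condition simulation, the Python shim below wraps an HTTP call with artificial latency and a random drop rate. Dedicated tools -- device emulators, browser throttling, network link conditioners -- do this more faithfully; the function name, delay and drop rate are invented for the sketch:

    # Toy network-condition shim: adds latency and random "packet loss"
    # around a request so testers can watch how the app handles both.
    import random
    import time

    import requests

    def flaky_get(url: str, latency_s: float = 2.0, drop_rate: float = 0.2):
        time.sleep(latency_s)            # simulate a slow cellular link
        if random.random() < drop_rate:  # simulate a dropped connection
            raise TimeoutError(f"simulated network drop for {url}")
        return requests.get(url, timeout=10)

    try:
        flaky_get("https://api.example.com/catalog/search?q=widget")
    except TimeoutError as err:
        print(f"the app should show a retry prompt here: {err}")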

The Chrome and Firefox web browsers both have mobile layout simulators that testers can use to view an iPhone-sized rendering on their desktop. Real iPhone simulators and Android emulators also exist, along with tools that drive the UI of an emulated device, generally through code.
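
GPS injection, for instance, takes one console command on the Android emulator. This sketch drives it from Python; the coordinates are arbitrary, and it assumes adb is on the path with an emulator running:

    # Inject a fake GPS position into a running Android emulator.
    # 'adb emu geo fix <longitude> <latitude>' forwards the geo fix
    # command to the emulator console.
    import subprocess

    def set_emulator_location(longitude: float, latitude: float) -> None:
        subprocess.run(
            ["adb", "emu", "geo", "fix", str(longitude), str(latitude)],
            check=True,
        )

    set_emulator_location(-122.084, 37.422)  # "move" the device to Mountain View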

4. Continuous and synthetic monitoring

Defects do make it to production. Mean time to identify and mean time to recover are two temporal metrics that reflect how long it takes a team to respond to a defect. Conventionally, these KPIs are used to measure how well IT operations handles tasks in production, but mobile app teams should respond to defects in a DevOps fashion, where programmers, architects and testers collaborate to fix them.

The DevOps way to respond to defects can involve continuous monitoring, with reporting and alerts on errors in the system. Another approach is to reuse the simulators from development to run synthetic transactions: actual end-to-end, user-like actions executed in production, such as log in, search, add to cart and check out.

When a synthetic transaction fails, the tool can block that activity for a portion of the user base and notify the development team about the issue. Track the duration of these synthetic transactions on a line graph to catch slowdowns before they turn into outages.
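
A skeletal synthetic transaction might look like the Python sketch below: it walks the log in, search, add to cart and checkout flow against hypothetical endpoints and returns the total duration. In practice, a scheduler runs it every few minutes and feeds the timing into the team's monitoring system:

    # Skeletal synthetic checkout transaction. Endpoints, payloads and
    # the bot credentials are assumptions; swap in your own flow.
    import time

    import requests

    BASE = "https://api.example.com"

    def run_synthetic_checkout() -> float:
        start = time.monotonic()
        session = requests.Session()

        # Each step raises on a non-2xx response, failing the transaction.
        session.post(f"{BASE}/auth/login",
                     json={"username": "synthetic-bot", "password": "test"},
                     timeout=10).raise_for_status()
        session.get(f"{BASE}/catalog/search",
                    params={"q": "widget"}, timeout=10).raise_for_status()
        session.post(f"{BASE}/cart/items",
                     json={"item_id": 42, "qty": 1}, timeout=10).raise_for_status()
        session.post(f"{BASE}/checkout", timeout=10).raise_for_status()

        return time.monotonic() - start  # the value to plot over time

    if __name__ == "__main__":
        print(f"synthetic checkout took {run_synthetic_checkout():.2f}s")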
