When used correctly, automated software testing tools can help mobile application development teams speed up testing cycles and potentially increase software quality by reducing the chance of human error.
Two veteran software testers face off on the pros and cons of maximizing test automation for mobile application testing efforts.
Plan. Plan. Plan your tests.
Jean Ann Harrison,
software testing and services consultant, Project Realms Inc.
Automating as much testing as possible is a priority for Okta, an identity management company based in San Francisco that provides an enterprise-grade single-sign-on service. "At Okta, we're dedicated to automation in testing, deployment, operations and every aspect of our application lifecycle," said Denali Lumma, Okta's technical lead and a veteran software tester. Because the service must be highly scalable and available, single sign-on cannot be built and tested manually, she said; there are too many tests involved and not enough time to run them all. "We focus most of our efforts on automation and almost none on manual testing."
Not all mobile application testers are as dedicated to mobile software test automation as Lumma and her team, though. Jean Ann Harrison, a software testing and services consultant at Project Realms in Bethel, Minn., believes automated testing tools are sometimes overused. "People think that automation can handle anything, but it can't," she said. "People think you can do 90% automation, but that's ridiculous." Talking about functional testing, Harrison said her own experience puts automated tests at about 20% on mobile projects. If testing tools and automation techniques continue to improve, she said, she could see automation reaching the 40% mark, but 50% would be a bit of a reach.
In Harrison's view, many forms of testing are still much more exploratory on mobile devices than they are with Web or desktop applications. "We're not in our infancy with mobile," she said, "but we are still toddlers." There are going to be a lot of changes as the field of mobile application development matures.
Lumma's team takes a heavily automated testing approach
"Our website is critical, but we also have native clients that run on any device that you can run," Lumma said. "If we don't have it yet, we will soon." The website is well covered and the testing there is mature, but they're still building out a lot of new tests with some of the native mobile platforms, she said. To support the push to be there for mobile devices as well as for Web and desktop applications, her team depends heavily on mobile software test automation tools, she added.
Okta went with an automated testing platform from open source startup Sauce Labs. The company invested in an automation tool chiefly to reduce the time its developers spend debugging code, Lumma said. "Our developers' time is the most expensive cost for us," she said. Features such as video logging, screenshots, breakpoints and the ability to interact with the browser cut developers' debugging time from three to five days down to three to five hours, she added.
No one tool can ever cover everyone's needs, though. Sauce Labs has clearly focused on the Apple platform, and its Windows device support lags, Lumma said. "Automated testing of Safari on Mac was a huge win for us. But with Windows testing, we have a real need to support specific OS versions and service pack combos." Sauce Labs may grow into that role, but right now its mobile software test automation service doesn't offer much for testing native code on Windows devices. For that, she said, her team uses a separate cloud-based development and testing tool called Skytap.
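As a rough illustration of the kind of setup Lumma's team relies on, the sketch below assembles desired capabilities for a Selenium test aimed at a hosted grid such as Sauce Labs. The capability names, browser versions and endpoint shown are illustrative assumptions for this article, not Okta's actual configuration.

```python
# A minimal sketch of configuring a remote Selenium session for a hosted
# testing grid. Capability names here follow the legacy Sauce Labs style,
# but the specific values are assumptions, not a real team's setup.
import os


def build_capabilities(browser, platform, version):
    """Assemble desired capabilities for a remote browser session."""
    return {
        "browserName": browser,
        "platform": platform,
        "version": version,
        # Flags for the debugging aids Lumma describes:
        "recordVideo": True,        # video logging of the test run
        "recordScreenshots": True,  # step-by-step screenshots
    }


caps = build_capabilities("safari", "macOS 10.15", "13")

# Actually connecting requires a grid account, so the session itself is
# only sketched here in comments:
# from selenium import webdriver
# driver = webdriver.Remote(
#     command_executor="https://%s:%s@ondemand.saucelabs.com/wd/hub"
#     % (os.environ["SAUCE_USERNAME"], os.environ["SAUCE_ACCESS_KEY"]),
#     desired_capabilities=caps,
# )

print(caps["browserName"])  # -> safari
```

Running the same capability-building function across a matrix of browsers and OS versions is what lets a small team cover the platform combinations Lumma mentions without hand-running each one.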
Now that their testing is mostly automated, some of the focus has shifted from new tests to the existing regression tests, Lumma said. "Test maintenance is an increasing cost over time as applications grow and change," she said, "so it's the biggest area in terms of challenge and reward."
Harrison says mobile testers should plan first, automate later
In Harrison's experience, mobile application development projects go through frequent periods of drastic change. Most changes arrive in small, iterative fashion, but the total change from version 2.0 to version 3.0 is generally dramatic, she said. Desktop applications are relatively stable and benefit greatly from long-term regression test automation, but mobile applications can undergo sweeping changes and may even be rebuilt from scratch, she said. When the codebase changes that dramatically, few if any automated regression tests can be kept and run against the new application, she added.
Harrison is also concerned about the interdependencies of functions within mobile devices. The change that breaks an application isn't always part of the application itself. For example, the notification capabilities of some phones are closely tied to their operating systems. If developers add notification features to an existing application, the QA team may suddenly need to test mobile operating system functions that were never part of the application before. "That would change the way you test," she said, "and completely change your automated tests as well."
In Harrison's view, these changes are important to consider because speed is a major factor with mobile applications. "If you can test it manually faster than you can write and run the script," she said, "you have to ask if it's worth it to go through the process of automating."
Harrison said she is often asked which mobile software test automation tool set is best. Her answer: it depends on what the team is going to test, why they're testing it, and what sorts of tests they'll run. A solid test plan is the only testing tool she recommends across the board. "Plan. Plan. Plan your tests," she emphasized.