Mobile software testing automation tools can do a lot to speed up the quality assurance process. However, mobile testing expert Jean Ann Harrison says mobile application testing requires more than automation tools alone can provide. In her view, several types of testing that are important for mobile application software quality are ill-suited to automation: trainability testing, configuration testing, mobile device performance testing and usability testing. These tests, according to Harrison, are faster, easier and less expensive to run manually than to automate.
In general, the reasons hinge on the number of tests that would have to be written and maintained, as well as on the frequency with which mobile application code bases change. Each time there are significant changes in the code base, she argues, any automated tests would have to be reconsidered and rewritten. In contrast, Harrison suggests that manual testers can handle such changes intuitively, with relatively little difference in the overall script used to test multiple devices.
Trainability and configuration testing are issues that intertwine in mobile devices. Many Web application developers have grown used to overlooking both of these factors, according to Harrison, because the Web browser takes care of them to a certain extent. The same holds for completely Web-based mobile applications, but for those with native functionality built in, both configuration and trainability concerns come to the surface.
About the expert:
Jean Ann Harrison is a software testing and services consultant at Project Realms Inc., and a partner and senior technical consultant at Perfect Pitch Marketing. She has more than 13 years of experience in the field of software testing and quality assurance. Her focus is on mobile devices, particularly those used in the medical profession; however, Harrison has also worked with multi-tiered system environments, including client/server and Web applications, as well as standalone software applications.
Consider the difference between a tablet and phone app, Harrison suggests. "When we look at Facebook on a tablet and on a mobile phone, side by side, we see two very different user interfaces," Harrison said. The Facebook app not only displays differently on a tablet than it does on a phone, but it has differences in functions such as searching and newsfeed updates. Harrison explains that this difference presents a configuration testing issue because when those configurations behave differently, it becomes necessary to test each configuration separately.
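To make the configuration-testing point concrete, here is a minimal sketch of how the same checklist might be driven across multiple device configurations. The device profiles and feature sets below are illustrative assumptions for this example, not Facebook's actual configurations:

```python
# Hypothetical device profiles -- illustrative assumptions only.
# Each configuration carries the UI features a tester must verify on it.
DEVICE_CONFIGS = {
    "phone":  {"screen": (375, 667),  "ui_features": {"search", "newsfeed", "bottom_nav"}},
    "tablet": {"screen": (1024, 768), "ui_features": {"search", "newsfeed", "side_panel"}},
}

def checks_for(config_name):
    """Return the UI features that must be verified on this configuration."""
    return DEVICE_CONFIGS[config_name]["ui_features"]

def run_configuration_tests():
    """Run the shared checklist once per configuration and report what each covers."""
    return {name: sorted(checks_for(name)) for name in DEVICE_CONFIGS}
```

The point of the sketch is Harrison's: because the phone and tablet feature sets differ, the checklist cannot simply be written once and reused, and every configuration added multiplies the scripts to maintain.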
Harrison says it's also a trainability issue, because end users have to be able to operate both versions without confusion. End users interact with different devices at different times and with potentially different expectations. "How do you train someone that this icon on the tablet is the same as that icon on the phone?" she asked. The answer to this question is as important to consider as the need to test separate configurations.
Performance testing on mobile devices is very different from performance testing for Web applications, according to Harrison. "With Web apps, performance testing generally involves providing for a number of people who are going to be hitting the website at a given point in time to buy tickets or something like that," she said. "That's not how performance testing works on mobile devices." Testing the device performance of native mobile apps has to take into account the hardware limitations of each device the application supports.
For example, most mobile devices have a temperature above which they cease to function. To keep the device from getting so hot that its components melt, an iPhone may simply shut down completely during heavy use -- especially if it's being charged at the same time. Testers have to think about the operating temperature of the device while it's in use because that can be a major concern.
The final test type Harrison says to avoid automating is usability tests. When testing the look and feel of the application, testers have to consider such things as font size. Text should be large enough that it can easily be read, but still small enough to fit the space available. "You can test that with your eye pretty easily," Harrison said, "but how would you test it with automated scripts?" Again, Harrison said this is a place where software testing automation requires separate scripts: one for the tablet and another for the phone. Increasing the number of scripts makes maintaining your tests more difficult and costly over time.
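A rough automated version of that font-size check shows why Harrison finds it awkward to script. This sketch estimates rendered text width from the font size; the minimum readable size, character-width factor and pixel conversion are illustrative assumptions, and a real check would have to query each device's rendering engine:

```python
# Illustrative assumptions -- not values from any real platform.
MIN_READABLE_PT = 11          # assumed smallest legible font size
AVG_CHAR_WIDTH_FACTOR = 0.6   # rough width of one character relative to font size

def text_fits(text, font_pt, available_px, px_per_pt=1.33):
    """Return True if the text is both legible and fits the available width."""
    if font_pt < MIN_READABLE_PT:
        return False  # large enough to read?
    estimated_width = len(text) * font_pt * AVG_CHAR_WIDTH_FACTOR * px_per_pt
    return estimated_width <= available_px  # small enough to fit?
```

Even this crude check needs per-device screen widths and font metrics, which is exactly the script-per-configuration maintenance burden Harrison describes, whereas a human tester judges "readable and fits" at a glance.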
Harrison suggests reducing test time and complexity by combining many of the above tests in scripts that can quickly be run manually and can test for several aspects of the user interface at the same time. "If you combine your tests," she said, "you shrink your tests out, and what ends up happening is that you can cover more." Combined testing can be done with automated tests; however, designing and maintaining those complex automated scripts can be more difficult than manual, intuitive tests.
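Harrison's combined-test idea can be sketched as a single pass that evaluates several independent UI checks at once. The screen model and the individual checks here are illustrative assumptions:

```python
# Hypothetical screen model and checks -- illustrative assumptions only.
def check_font(screen):
    """Font is at least an assumed minimum legible size."""
    return screen["font_pt"] >= 11

def check_labels_fit(screen):
    """Every label's rendered width fits within the screen width."""
    return all(w <= screen["width_px"] for w in screen["label_widths_px"])

def check_icons_present(screen):
    """The expected core icons are on screen."""
    return {"search", "newsfeed"} <= set(screen["icons"])

def combined_ui_check(screen):
    """Run several independent checks in a single pass over one screen."""
    checks = {"font": check_font, "labels": check_labels_fit, "icons": check_icons_present}
    return {name: fn(screen) for name, fn in checks.items()}
```

One walk through a screen answers several questions at once, which is the coverage gain Harrison describes; her caveat is that keeping such combined scripts automated and current is harder than having a tester run the same combined pass by eye.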
Planning, Harrison said, is the most important part. "If you plan it out right, then you're golden. That's the key." While test designers are planning out tests, it is important that they keep their eyes open and do a certain amount of exploratory testing. "There are times where I run the same test over and over again, and I will always see something different. An automated test is not going to capture that," she said.
This doesn't mean that automation has no place in mobile software testing, but in Harrison's view, it can't yet hit all the corner cases. The few types of testing discussed here just aren't a good fit for software testing automation yet, she said. "Keep the automated tests for the rest, but with specific functionality-combination-type tests, that's where you'll need to spend your time in observation and exploratory testing."