Testing mobile apps: Tips on manual, automated, cloud QA

Testing mobile apps takes a multi-pronged approach. Savvy mobile developers are using cloud tools and automated tests in addition to manual QA.

Mobile application users don’t want to fuss with getting fixes for faulty or slow applications. Instead, they delete the app and download a replacement. Obviously, this common practice in personal use would wreak havoc in enterprise mobile application deployments, which puts software test and quality assurance directors on the spot, according to Eran Kinsbruner, director of product strategy at Perfecto Mobile, a software test ISV.

“The tolerance of the users is way lower than in the desktop era,” said Kinsbruner. “End users who adopt mobile applications have high expectations with regard to quality, usability and, most importantly, performance.”


In this Q&A, he discusses trends in mobile application development and testing, including when manual testing is needed, cloud providers’ mobile test responsibilities and more.

What are the most common mistakes you’ve seen in testing whether an app will perform on real devices in multiple scenarios?

Eran Kinsbruner: Not focusing on poor application performance caused by a lack of application optimization, and not taking into account the behavior of different devices running different mobile operating systems (OSes) under different network conditions. [You must be able to] simulate real native apps running on real mobile devices with specific mobile OSes.

What situations require manual mobile application performance testing, and why?

Kinsbruner: Including manual testing scenarios and interoperability testing is very important in a mobile application testing strategy. When testing a mobile application, the testing team ought to test the various events that may occur while the application is executing: incoming calls, text messages, low battery, alerts such as emails, and roaming. Lately, along with growing technologies, mobile users also take advantage of location-based and voice-related apps. These rely on much more sophisticated use cases, which can also be tested manually (e.g., applications like Google Maps get input from end users about their locations and use that data for navigation).
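
To make the interruption idea concrete, here is a minimal sketch that injects call and SMS events by driving the Android emulator console through adb. It assumes the Android SDK's adb tool is on the PATH and the app under test is running on an emulator; these console commands do not work on real handsets, and the phone number and message text are placeholders:

    import java.io.IOException;

    public class InterruptionEvents {

        // Runs "adb emu <args>", which forwards a command to the emulator console.
        private static void adbEmu(String... args) throws IOException, InterruptedException {
            String[] cmd = new String[args.length + 2];
            cmd[0] = "adb";
            cmd[1] = "emu";
            System.arraycopy(args, 0, cmd, 2, args.length);
            new ProcessBuilder(cmd).inheritIO().start().waitFor();
        }

        public static void main(String[] args) throws Exception {
            // Simulate an incoming voice call while the app under test is running...
            adbEmu("gsm", "call", "5551234");
            Thread.sleep(5000);                  // give the tester/app time to react
            adbEmu("gsm", "cancel", "5551234");  // caller hangs up

            // ...and an incoming text message.
            adbEmu("sms", "send", "5551234", "Interruption test message");
        }
    }

Driving interruptions this way lets a tester watch how the app recovers: does it pause correctly, preserve form state and resume cleanly after the call ends?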

What are some guidelines for making the manual-versus-automated test decision?

Kinsbruner: With regard to the differentiation between manual and automated test cases, the rules from the non-mobile space apply here as well:

  • Automate the most frequently tested cases;
  • Automate cases which have predictable results;
  • For strong return on investment, automate the cases which are easy to automate;
  • Automate the most tedious manual test cases.

Beyond those rules, take into account that your automation must run across devices and platforms, so make sure that your scripts are maintainable; a minimal sketch of that idea follows below. Also, in mobile test automation, the solution you choose must support all types of objects. Finally, think about future regression cycles on older devices and mobile OSes as you move forward.
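
To illustrate the maintainability point, here is a minimal sketch using the open source Appium Java client (7.x-era API; Appium is my example, not a tool Kinsbruner names). Device-specific details are read from system properties rather than hard-coded, so the same script can run unchanged across a device pool; the accessibility IDs are hypothetical:

    import java.net.URL;
    import io.appium.java_client.MobileBy;
    import io.appium.java_client.MobileElement;
    import io.appium.java_client.android.AndroidDriver;
    import org.openqa.selenium.remote.DesiredCapabilities;

    public class CrossDeviceLoginTest {
        public static void main(String[] args) throws Exception {
            // Device-specific values come from outside the script, so the same
            // test runs unchanged against any device in the pool.
            DesiredCapabilities caps = new DesiredCapabilities();
            caps.setCapability("platformName", "Android");
            caps.setCapability("deviceName", System.getProperty("deviceName", "emulator-5554"));
            caps.setCapability("platformVersion", System.getProperty("platformVersion", "13"));
            caps.setCapability("app", System.getProperty("appPath", "/path/to/app.apk"));

            AndroidDriver<MobileElement> driver =
                    new AndroidDriver<>(new URL("http://127.0.0.1:4723/wd/hub"), caps);
            try {
                // Locate elements by accessibility ID rather than brittle XPath,
                // so the script survives layout changes across screen sizes.
                driver.findElement(MobileBy.AccessibilityId("username")).sendKeys("qa-user");
                driver.findElement(MobileBy.AccessibilityId("password")).sendKeys("secret");
                driver.findElement(MobileBy.AccessibilityId("login")).click();
            } finally {
                driver.quit();
            }
        }
    }

Running the same class with -DdeviceName=Pixel_6 -DplatformVersion=14 retargets the test without touching the script, which is what keeps regression cycles on older devices cheap.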

What common performance-related mistakes do organizations make when bringing mobile applications into the enterprise ecosystem?

Kinsbruner: The two big ones are not having rendering performed in the background, and not using application caching.
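
As one way to act on both points, here is a minimal Android sketch (my illustration, not Perfecto’s guidance): the bitmap is decoded on a background thread so the work stays off the UI thread, and the result is kept in an in-memory LruCache so repeated displays avoid disk reads:

    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;
    import android.os.Handler;
    import android.os.Looper;
    import android.util.LruCache;
    import android.widget.ImageView;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ImageLoader {
        // Cache roughly 1/8 of the app's available heap, measured in kilobytes.
        private final LruCache<String, Bitmap> cache =
                new LruCache<String, Bitmap>((int) (Runtime.getRuntime().maxMemory() / 1024 / 8)) {
                    @Override
                    protected int sizeOf(String key, Bitmap value) {
                        return value.getByteCount() / 1024;
                    }
                };

        private final ExecutorService executor = Executors.newSingleThreadExecutor();
        private final Handler mainThread = new Handler(Looper.getMainLooper());

        public void display(final String path, final ImageView view) {
            Bitmap cached = cache.get(path);
            if (cached != null) {        // cache hit: no disk I/O, no decode work
                view.setImageBitmap(cached);
                return;
            }
            executor.execute(new Runnable() {
                @Override
                public void run() {
                    // Decode on a background thread; the UI thread stays responsive.
                    final Bitmap bitmap = BitmapFactory.decodeFile(path);
                    if (bitmap == null) return;   // unreadable file: nothing to show
                    cache.put(path, bitmap);
                    mainThread.post(new Runnable() {
                        @Override
                        public void run() {
                            view.setImageBitmap(bitmap);  // UI updates stay on the main thread
                        }
                    });
                }
            });
        }
    }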

Are there particular device-related challenges?

Kinsbruner: Note that mobile devices have fundamentally different network performance characteristics. Device-based pitfalls developers ought to take into account include:

  • CPU and battery constraints affect mobile performance;
  • Device memory: repeatedly paging sections of application code and data in and out incurs heavy memory operations, reducing performance and increasing power consumption, e.g., displaying an image on the screen that is stored on disk;
  • Device form factors and screen sizes can affect application behavior, performance and more;
  • JavaScript execution takes longer on mobile devices, so defer parsing until the application needs it;
  • DOM objects are more complex on mobile; and
  • Network constraints: bandwidth on mobile ranges from 3G and slower up to 4G and Wi-Fi (see the sketch after this list).
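
On the network-constraints point, a common tactic is to adapt payloads to the connection actually in use. Here is a minimal Android sketch using the platform’s ConnectivityManager (an older but widely available API; treating Wi-Fi as the only “fast” network is my simplification):

    import android.content.Context;
    import android.net.ConnectivityManager;
    import android.net.NetworkInfo;

    public class NetworkAwareness {
        // Returns true when the device is on Wi-Fi, where it is reasonable to
        // fetch full-resolution assets; on cellular (or offline) the app should
        // fall back to smaller payloads and heavier caching.
        public static boolean onFastNetwork(Context context) {
            ConnectivityManager cm =
                    (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
            NetworkInfo info = cm.getActiveNetworkInfo();
            return info != null
                    && info.isConnected()
                    && info.getType() == ConnectivityManager.TYPE_WIFI;
        }
    }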

Let’s say a business deploys mobile applications in the cloud or uses a cloud provider’s application. What is the cloud provider's responsibility in reporting on and maintaining mobile application performance?

Kinsbruner: The cloud provider’s responsibility is to enable sufficient coverage for the most critical end-user aspects: devices, locations, networks and the top user scenarios. Beyond these aspects, the cloud provider must also provide interoperability capabilities to ensure the end-user experience is maintained even when calls, text messages and other interruptions come in.

Cloud providers give secure, enterprise-grade cloud infrastructure and enable performance testing that covers the areas just mentioned. Also, once the customer has initiated load testing, the cloud provider needs to supply all the information and insights, such as network traffic, device vitals and more, that allow the customer to perform optimizations and improve application performance. The customer has to define the right key performance indicators (KPIs) for its key transactions and, of course, the transactions themselves.
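
In practice, defining a KPI for a key transaction means timing the transaction in the test itself and failing the run when the threshold is exceeded. A minimal sketch; the “login” transaction, its three-second threshold and the performLogin() helper are all hypothetical:

    public class TransactionKpi {
        // Hypothetical KPI: the "login" transaction must finish within 3 seconds.
        private static final long LOGIN_KPI_MILLIS = 3000;

        public static void main(String[] args) {
            long start = System.currentTimeMillis();
            performLogin();                         // the transaction under measurement
            long elapsed = System.currentTimeMillis() - start;

            if (elapsed > LOGIN_KPI_MILLIS) {
                throw new AssertionError(
                        "login took " + elapsed + " ms, KPI is " + LOGIN_KPI_MILLIS + " ms");
            }
            System.out.println("login met its KPI: " + elapsed + " ms");
        }

        // Stand-in for the real user transaction (e.g., the Appium steps shown earlier).
        private static void performLogin() {
            try {
                Thread.sleep(500);  // placeholder work
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }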
 

Is your team moving into mobile applications? Let us know what questions you have and what challenges you face. We're here to find you answers. 

This was first published in August 2013
