This content is part of the Essential Guide: STPCon Fall 2013 calls all software test professionals to Phoenix

Testing mobile apps: Strategies for monitoring and user emulation

Rebecca Clinard, performance engineer with Mentora Group, offers insights on testing mobile apps using monitoring and user emulation techniques.

In these days when there's a mobile app for everything, performance has taken on a whole new meaning and the demands of speed and functionality are at an all-time high. "With mobile applications, typically you get one chance," explained Rebecca Clinard, performance engineer at Mentora Group, Inc. "The abandonment rate of mobile apps due to poor performance is huge."

Responsibility falls to performance engineers to maintain the quality of experience that users increasingly demand, all while navigating different platforms, rapid device innovation and accelerated time to market. Unlike on the Web, a single delay or blip in mobile performance can mean life or death for an application. Users are difficult to satisfy, and apps are easy to uninstall.

A longtime performance engineer, Clinard has been in the Web application performance industry since the dot-com boom and is well versed in the challenges facing testers in the mobile space. She will be sharing her expertise at STP Con this year, and her talk will focus on this very issue. She sat down with me to discuss the difficulties of maintaining high performance when it comes to mobile, such as emulating a realistic user, monitoring for bottlenecks and managing scalability issues. More importantly, she offers advice on what to do about them.

Emulating the user

According to Clinard, one way mobile applications differ from Web applications is their specificity of purpose. Because each app is built around a narrowly defined task, testing it requires more upfront planning to capture that added complexity. Moreover, user expectations vary from application to application, so each one will be built and tested differently.


Clinard gave the example of a weather app that pushes information in regular intervals, or a gaming application, in which users need to react quickly. There are also more nuanced scenarios, such as the difference between searching for a service and paying for a service. "The challenge with mobile applications is first to understand the logic, to emulate a realistic user and then build that logic into your script."
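Building that logic into a script can be sketched in miniature. The following is a hypothetical example, not from Clinard's own tooling: a virtual user whose next step is drawn from weighted behaviors, with the search-versus-pay distinction enforced so that a payment never precedes a search. The action names and weights are invented for illustration.

```python
import random

# Hypothetical user profile: action names and weights are illustrative,
# not measurements from any real application.
USER_PROFILE = {
    "search_service": 0.7,   # most users browse or search
    "pay_for_service": 0.2,  # fewer complete a purchase
    "view_account": 0.1,
}

def next_action(rng: random.Random) -> str:
    """Pick the virtual user's next action from the weighted profile."""
    actions, weights = zip(*USER_PROFILE.items())
    return rng.choices(actions, weights=weights, k=1)[0]

def user_session(rng: random.Random, steps: int = 5) -> list[str]:
    """Emulate one realistic user session as a sequence of actions.

    A payment is only attempted after at least one search, mirroring the
    search-versus-pay distinction in real user flows.
    """
    session, searched = [], False
    for _ in range(steps):
        action = next_action(rng)
        if action == "pay_for_service" and not searched:
            action = "search_service"  # realistic users search first
        searched = searched or action == "search_service"
        session.append(action)
    return session
```

A real load-test script would replay the app's actual requests for each action; the point here is that the *logic* of a realistic user, not just the raw requests, is what the script encodes.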

The complexity doesn't stop there, however.

The challenge is not only in creating realistic testing scenarios for apps that differ from each other but also in taking into account how user activity fluctuates within the app itself. Take Viggle. Clinard explained that this mobile game application was based around primetime television shows, so user participation spiked whenever a show premiered: yet another eventuality to consider when creating load tests that match real-world scenarios.
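A load profile that models this kind of primetime spike can be expressed as a simple step function. The numbers below are illustrative assumptions, not measurements from Viggle or any real app:

```python
# Sketch of a spiky load profile: steady baseline traffic all day,
# with a surge during a premiere window. All figures are hypothetical.
def target_users(minute_of_day: int,
                 baseline: int = 200,
                 spike: int = 5000,
                 premiere_start: int = 20 * 60,  # 8:00 p.m.
                 premiere_end: int = 21 * 60) -> int:
    """Return how many virtual users the load test should run
    at a given minute of the day."""
    if premiere_start <= minute_of_day < premiere_end:
        return baseline + spike
    return baseline
```

A load-test controller would evaluate this profile each minute and ramp virtual users up or down to match, so the test reproduces the real-world surge rather than a flat, unrealistic load.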

It's not all bad news. For one thing, mobile apps communicate less over the network than Web applications do. This means that, while testing scripts are more complex, they are also shorter. Clinard also pointed out that "[mobile apps] are all extranet so they're easier to test in some ways. You can use the cloud, emulate different geographical locations, emulate different users in different regions, emulate different networks, emulate the bandwidth." In other words, the technologies around mobile apps and testing are simplifying the process.


Monitoring for bottlenecks

Clinard believes that monitoring mobile apps is an essential way to examine their performance comprehensively. "You need to monitor continuously, especially during performance testing, so you can isolate trends and differentiate between symptoms versus root causes."

She went on to explain that this was important for ascertaining a system's utilization of resources. By extension, monitoring plays a crucial role in understanding where bottlenecks lie and where scalability issues occur. "You need to be able to monitor the entire infrastructure and look for trends and say, 'OK, I see what happened before this happened.'" It's a way to get the whole story so you're not just treating symptoms but curing the disease too.
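The "see what happened before this happened" idea can be sketched as a small analysis over timestamped metrics: whichever resource crossed its threshold first is the likelier root cause, and later breaches are symptoms. The metric names and thresholds below are hypothetical:

```python
# Sketch of root-cause ordering: given timestamped samples for several
# resources, report which one breached its threshold earliest.
# Metric names and thresholds are illustrative assumptions.
def first_to_saturate(samples, thresholds):
    """samples: {metric: [(t, value), ...]}; thresholds: {metric: limit}.

    Returns (metric, t) for the earliest threshold breach across all
    monitored resources, or None if nothing breached.
    """
    breaches = []
    for metric, series in samples.items():
        for t, value in series:
            if value >= thresholds[metric]:
                breaches.append((t, metric))
                break  # only the first breach per metric matters
    if not breaches:
        return None
    t, metric = min(breaches)
    return metric, t
```

For example, if the database connection pool fills up at minute 10 and response times degrade at minute 20, the pool breach comes first: the slow responses are the symptom, and the pool is where to look for the disease.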

One resource that testers use to provide this performance data is real user monitoring (RUM), a technology that can track and analyze mobile app performance in real time. The upside of RUM is that it can report exactly how long it took to get the full response to a mobile device. The downside is that, in doing so, it adds overhead and slows down response time. However, Clinard believes that this is a small price to pay for vital information. "Even though it's kind of contradictory to add to the response time by adding more bytes, it's getting you information that you absolutely need." Since response time is one of the few variables under an engineer's control, Clinard believes that collecting and analyzing that data comes first.
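The core of the RUM trade-off can be sketched in a few lines: instrument each request to record the full response time as the device experiences it, accepting that the instrumentation itself adds a little overhead. This is a minimal illustration, not any vendor's actual RUM agent, and the fetch function is a stand-in:

```python
import time

# Measurements recorded by the instrumentation; a real RUM agent would
# beacon these back to a collection service instead of a local list.
MEASUREMENTS: list[float] = []

def measured(fn):
    """Wrap a request function so each call logs its full response time."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        MEASUREMENTS.append(time.perf_counter() - start)  # the extra work
        return result
    return wrapper

@measured
def fetch_weather():
    time.sleep(0.01)  # stand-in for a real network round trip
    return "sunny"
```

The wrapper's bookkeeping is the "more bytes" Clinard mentions: a small, deliberate cost paid on every request in exchange for knowing exactly what users experienced.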

What's next?

Mobile testing tools evolve at the pace of mobile technology -- that is to say, quickly. Clinard predicts that development stacks and frameworks are going to become lighter and more efficient. Meanwhile, as technologies move toward the cloud, today's on-premises mobile testing infrastructure will eventually become obsolete.

Even today, engineers can generate their load from the cloud through the console of whatever tool they use; for instance, JMeter can reside on a BlazeMeter cloud. "So the whole infrastructure now, your software, your load generator, now everything's in the cloud and you're just bringing up a browser, creating new scripts and driving it and that's going to be huge, especially in the services department."

As these tools change and become more efficient, performance engineers will have to adapt their skill sets. It would serve engineers well to push beyond the task of creating scripts to emulate users. While that is valuable expertise, Clinard believes there are benefits to understanding the architecture on a more holistic level.

"Digging into the architecture -- understanding how one transaction might affect another transaction that uses the same resources -- and being able to plan and tune capacity on that end is a skill set that's going to be needed."
