QUEST speakers confirm mobile test automation vital for performance

Caroline de Lacvivier

At the QUEST 2014 conference, three speakers agreed on one point: Testing in the mobile space demands a level of complexity, urgency and continued excellence that simply doesn't exist in traditional software testing. "People are becoming harder to please. If the app crashes or is slow, they're going to move on [and] buy a similar app," said Fred Beringer, SOASTA's VP of product management. Couple these hard-to-please users with very public user ratings, and the performance bar goes way up for mobile testers. "Expectations are very high and therefore testing is even more important," Beringer explained.

"People are on the go on their mobile device, and so they become a lot less patient," said Darren Madonick, project manager at Keynote Systems. The challenge doesn't stop there, however. Madonick went on to discuss the intricacies of testing for variable network conditions, diverse devices, ever-upgrading operating systems, ever-shortening release cycles, all while accounting for user behavior and, most of all, user satisfaction. It is a tall order that demands an entirely different approach to software testing.

Michael Yudanin, CEO of Conflair, an Atlanta-based IT solutions company, said automation is a key component in speeding up the testing process without diminishing its reliability. "Automation is very important in the PC space. In the mobile space, it is a must."

All three of these mobile experts hosted mobile testing sessions at QUEST this year. Here, they share some of their sessions' insights on the state of mobile testing and offer advice on how to make test automation a strategy that works for, and not against, the mobile testing team.

Automation: The backbone of mobile testing

Mobile apps are on tight deadlines. They need to be built, validated and launched at warp speed. Beringer uses the term "automating the automation" to describe a testing strategy that is particularly suited for these tight release cycles. "Essentially, you want to automate everything that you can so you compress the whole process. That way, you collect feedback for developers as soon as possible."
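
As a rough illustration of that idea, the sketch below chains the build, deploy and smoke-test steps so automated feedback reaches developers the moment a step breaks. The shell scripts it calls (build_app.sh, deploy_to_device.sh, run_smoke_tests.sh) are hypothetical placeholders, not tools named by Beringer.

    import subprocess
    import sys

    # "Automating the automation": chain build, deploy and smoke tests so a
    # failure is reported the moment it happens. The scripts named below are
    # hypothetical placeholders for whatever a team's toolchain provides.
    PIPELINE = [
        ["./build_app.sh"],         # compile and package the mobile app
        ["./deploy_to_device.sh"],  # push the build to a test device or emulator
        ["./run_smoke_tests.sh"],   # run the automated functional checks
    ]

    def run_pipeline() -> int:
        for step in PIPELINE:
            result = subprocess.run(step)
            if result.returncode != 0:
                # Fail fast: feedback goes to developers now, not at the end
                # of a long manual test cycle.
                print(f"Step {' '.join(step)} failed with code {result.returncode}.")
                return result.returncode
        print("All automated steps passed.")
        return 0

    if __name__ == "__main__":
        sys.exit(run_pipeline())

The point is the fail-fast structure rather than the particular commands: each automated stage reports back immediately instead of waiting for a manual pass at the end of the release cycle.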

Another reason automation is so crucial in the mobile space is that certain app functionalities are unpredictable and time-consuming to test manually. Take Yudanin's example of a company testing a free application. Free applications depend financially on the advertisements that display periodically. "It was crucial for this company to know how many ads were [displaying] and how many had failed. Now there is no kind of manual test for this. You would need to sit there for 12 or 24 hours to make sure a new ad appears. This is certainly a fine candidate for automation."
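
A minimal sketch of that kind of long-running check might look like the following, assuming the test harness exposes some way to detect a rendered ad. The ad_is_displayed() function here is a hypothetical stand-in for that hook (a UI automation query, a log scrape or similar), not an API from any particular tool.

    import random
    import time

    def ad_is_displayed() -> bool:
        # Hypothetical stand-in: a real check would query the UI automation
        # framework or the app's logs for a rendered ad view.
        return random.random() > 0.1

    def monitor_ads(duration_hours: float, interval_seconds: int = 60) -> dict:
        # Poll for the configured duration and count impressions versus failures.
        shown, failed = 0, 0
        deadline = time.time() + duration_hours * 3600
        while time.time() < deadline:
            if ad_is_displayed():
                shown += 1
            else:
                failed += 1
            time.sleep(interval_seconds)
        return {"shown": shown, "failed": failed}

    if __name__ == "__main__":
        # A 12-hour soak run that no one has to sit and watch manually.
        print(monitor_ads(duration_hours=12))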

That said, there are certain mobile tests that can't -- or at least shouldn't -- be automated. These tests primarily involve usability and behavioral testing, the ones that require real devices and critical thinkers. "You need to have the visual feedback; someone to give an opinion on the actual app. It's very difficult to automate an opinion," Beringer explained.

Yudanin added that these behavioral tests provide some of the strongest insights into software weaknesses. "People test them while they're traveling. People test when they're sitting at a traffic light. They test with one hand. They always come back with some important insights."

Automation advice for mobile testers

According to Beringer, one of the strongest skills a mobile tester can have is the ability to identify the tests that would bring the strongest return on investment (ROI) if automated. Conversely, one of the biggest mistakes a mobile testing team can make is rushing into an automation project without enough prior analysis. "They don't do an analysis of everything they need to test and which ones need to be automated first; which ones will bring the best ROI," Beringer said. "They just do automation because everybody's doing it, but they don't really lay down actual objectives they can track. They don't lay down metrics to understand where they are in their automation."
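
One way to make that analysis concrete, offered purely as an illustrative heuristic rather than anything Beringer prescribed, is to score each automation candidate by the manual effort it would save against the effort needed to automate it. The test names and numbers below are invented for the example.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str
        runs_per_cycle: int      # how many times the test runs per release cycle
        manual_minutes: float    # manual effort for one run
        automation_hours: float  # one-off effort to automate the test

        def roi_score(self) -> float:
            # Rough payback ratio: manual minutes saved per minute spent automating.
            return (self.runs_per_cycle * self.manual_minutes) / (self.automation_hours * 60)

    candidates = [
        Candidate("login smoke test", runs_per_cycle=40, manual_minutes=5, automation_hours=4),
        Candidate("ad impression soak", runs_per_cycle=2, manual_minutes=720, automation_hours=16),
        Candidate("usability walkthrough", runs_per_cycle=1, manual_minutes=60, automation_hours=40),
    ]

    for c in sorted(candidates, key=lambda c: c.roi_score(), reverse=True):
        print(f"{c.name}: ROI score {c.roi_score():.2f}")

A low score, such as the usability walkthrough gets here, also lines up with the earlier point that opinion-driven tests are poor automation candidates.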

Beringer advised testing teams to understand their goals early in the process and to attach metrics to those objectives so they can track their progress. For example, if a team's objective is to reduce the number of bugs going into production, it should track that number. "That way, if you're not successful, at least you can step back, understand where you are, and make changes."
