In order for imo to grow out of its infancy as a purely Web-based messaging application and spread its wings as a mobile messaging application, the development team had to overcome a number of challenges. In part one, Erdal Tuleu, director of engineering at imo, discussed the biggest challenges of moving from Web-based to mobile applications. In part two, Tuleu digs deeper into the differences between mobile and Web-based applications and gives advice on testing mobile applications.
What makes developing and testing mobile apps really different from traditional Web applications?
Erdal Tuleu: In the past it wasn't too common to have more than one station. People usually logged into their laptop or desktop and closed it before going home and logging in again there. Now it's fairly common for people to have more than one station. For example, you can have it open on your laptop and again on your phone. I don't think it's that common anymore to sign out and sign back in all the time. People want to always be online and always reachable.
With that came the challenges of maintaining all those stations. How do you know which station you want to deliver the message to? For example, with our very first mobile version, if you were chatting on your laptop, every incoming message would also beep on your phone, which is pretty annoying. You can imagine an office where everyone's phone is beeping all the time. That was one thing we wanted to improve. Now we do a pretty good job at being smart about which station the user is chatting from and where they're active. We are smarter about where we think you want to receive your messages.
We know which client users are sending messages from and also to some extent we can tell which client they are reading messages on. So from that data we have logic on the back end that can determine where the most important place to send a new message will be.
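The routing decision Tuleu describes can be sketched roughly as follows. This is an illustrative guess at the approach, not imo's actual implementation; the station records, the `last_active` field, and the two-minute activity window are all assumptions.

```python
import time

ACTIVE_WINDOW = 120  # seconds; assumed threshold for "recently active"

def pick_stations(stations, now=None):
    """Given a user's signed-in stations, decide which should get a noisy
    notification. Stations the user touched recently are preferred; if none
    look active, fall back to notifying everything."""
    now = now if now is not None else time.time()
    recent = [s for s in stations if now - s["last_active"] <= ACTIVE_WINDOW]
    # If the user is clearly chatting somewhere, only that station beeps;
    # the others can receive the message silently.
    return recent if recent else stations

stations = [
    {"name": "laptop", "last_active": time.time() - 30},    # typing here now
    {"name": "phone", "last_active": time.time() - 3600},   # idle for an hour
]
```

With this data, only the laptop would beep; the phone stays quiet, which is exactly the office-full-of-beeping-phones problem the team set out to fix.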
Another thing is that if users are always signed in, we don't want to bug people at night unless it is really important. I think we can still improve this, but we have a restrict mode feature which lets you pick a time -- you might say I sleep from 11:00 p.m. to 6:00 a.m. -- and then it just turns on by itself every night and doesn't make as much noise when someone messages you. But there are also emergencies, so we need a way to be able to break out of that. That's one thing we added to make customers' lives a little bit easier.
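A quiet-hours check like the one Tuleu outlines might look like the sketch below. The function name, the `urgent` flag, and the fixed window are hypothetical stand-ins for imo's restrict mode; the one real subtlety is that a window spanning midnight cannot be tested as a simple range.

```python
from datetime import time as clock

QUIET_START = clock(23, 0)   # 11:00 p.m. -- user-chosen, illustrative
QUIET_END = clock(6, 0)      # 6:00 a.m.

def should_make_noise(now, urgent=False):
    """Suppress notification sounds during the quiet window, unless the
    message is marked urgent (the 'break out' escape hatch for emergencies)."""
    if urgent:
        return True
    # The window wraps past midnight, so membership is an OR of the two
    # halves (late evening OR early morning), not a single range check.
    in_quiet = now >= QUIET_START or now < QUIET_END
    return not in_quiet
```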
What is your testing process like? How do you go about testing mobile applications?
Tuleu: Well, we have a lot of devices in the office and we have people using them all the time. Not just the developers, we have pretty much everyone working at imo actually using imo and testing it, too. A lot of us are running around with a whole bag of mobile devices. Right now, I personally have a Galaxy Nexus, an iPhone, an iPad Mini and a Nexus 7 tablet.
We basically all use iPhones and iPads at home and we have a lot of devices in the office. iOS tends to be easier to test because there is less variation. For Android, we look at market stats to pick the most popular models, buy them for the office and make sure people use them, too. We also put out a beta version on Android to get more user testing.
But it's not all crowdsourced testing. We also do a lot of code testing before it gets released to any sort of user testing. Sometimes we program in pairs and we always require all commits to be code reviewed. So I think it's a good mix between traditional testing methods and crowdsourced usability and bug testing.
After we design and commit a new feature, we'll give it to the employees to test it for a week and give us their feedback before we submit it to the app markets. For example, when we launched voice calls we had everyone call their families with it. That was good because it covered different carriers and different countries, so we got a lot of great feedback from that testing round.
When we first started testing mobile applications, we were less focused on particular features and more focused on reliability. We needed to make sure the app doesn't lose messages, because that is the most important feature for a messaging application. Initially, we had problems with people losing connections because that wasn't a problem in the Web version. So we knew we had to adjust some things to account for frequent lost connections and for long lapses in a connection like when the user turns on airplane mode.
It's a hard thing to test for, because if you miss a message, you don't always know. It was a really good thing that we had the Web client as well so we could see if there was a message lost there. All the conversations get saved to the user's chat history on our servers, so each client can pull them from there. If a client does go buggy and doesn't receive messages, the messages are still there and the user can get them from a client that's working right.
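The recovery path Tuleu mentions -- a buggy client catching up from server-side chat history -- amounts to asking for everything newer than the last message the client has. A minimal sketch, assuming per-conversation message ids that increase monotonically (the field names are illustrative):

```python
def pull_missed(server_history, last_seen_id):
    """A client that lost messages recovers by asking the server for every
    message with an id greater than the last one it successfully stored."""
    return [m for m in server_history if m["id"] > last_seen_id]

# Server-side history for one conversation (illustrative data).
history = [
    {"id": 1, "text": "hi"},
    {"id": 2, "text": "you there?"},
    {"id": 3, "text": "ping"},
]
```

A client that last saw message 1 would pull messages 2 and 3 and be back in sync, regardless of which client originally dropped them.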
On the client side, if you send a message while you're in a tunnel, for instance, we have the client hold onto that message and try to deliver it once you reconnect. That was a tricky process to get right, but it's very important because users on the train don't want to wait until they get a connection back to write out their message and they should definitely have confidence that the message will be sent eventually. We have a pretty basic protocol that makes sure messages are delivered -- something like a call and response.
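The "call and response" pattern Tuleu describes is a store-and-forward outbox: the client queues the message, retries on reconnect, and only drops it once the server acknowledges it. The class below is a minimal sketch of that idea, not imo's protocol; the `send_fn` transport callback returning `True` on ack is an assumption.

```python
class Outbox:
    """Client-side queue: messages written while offline (in a tunnel, say)
    are held and re-sent on reconnect until the server acks each one."""

    def __init__(self, send_fn):
        self.send_fn = send_fn   # transport call; assumed to return True on ack
        self.pending = []        # messages awaiting an ack, oldest first

    def send(self, message):
        self.pending.append(message)
        self.flush()

    def flush(self):
        """Attempt delivery of everything queued; called on each send and
        again whenever connectivity returns."""
        still_pending = []
        for msg in self.pending:
            if not self.send_fn(msg):     # no ack -- keep it for the next retry
                still_pending.append(msg)
        self.pending = still_pending
```

The user gets to type and hit send immediately; the queue, not the user, waits for the network to come back.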