Using their devices' cellular networks, enterprise employees can connect to business applications from almost anywhere.
Unfortunately, enterprise IT has little control over mobile and cellular networks' performance. Mobile applications can be developed to withstand cellular network performance shortcomings, but doing so requires understanding bandwidth limitations, packet loss risks and other weaknesses, according to Dave Berg, vice president of product strategy at network virtualization ISV Shunra.
Testing application performance over mobile networks is often overlooked by enterprises. "Many mobile applications are developed and tested to perform well across broadband networks but not mobile networks," said Berg.
Cellular providers -- such as AT&T and Verizon -- monitor and maintain their own mobile networks. Enterprise IT doesn't have access to AT&T's network to manipulate settings or increase bandwidth, said Berg. The next best thing is recreating actual conditions in a testing lab to accurately assess how applications will behave over real-world networks.
In this interview, Berg offers advice on mobile network concerns, including mobile application development and testing and DevOps' responsibilities.
How does mobile networks' impact on application performance differ from that of broadband networks?
Dave Berg: Mobile networks are inherently different from broadband, with unique characteristics that are more prone to contributing to application performance issues. With mobile you have decreased bandwidth, increased latency, greater chance of packet loss, and jitter.
These four factors affect applications by corrupting data, forcing timeouts and retransmissions, and generally causing unpredictable delays in communication with the end user and throughout the application.
What are some causes of these problems in mobile network performance?
Berg: The physics of mobile networks cause all of these issues. Mobile is a 'through-the-air' connection, which results in slower data transmission rates. Data also has to pass through impediments -- cars, buildings, trees, etc. -- unlike broadband, which transmits packets of data on a closed circuit of cables at nearly the speed of light.
The protocols and algorithms mobile network providers employ to route traffic can also affect these conditions.
There are some best practices for how you develop mobile applications as opposed to web-based apps. Have the app make fewer data calls. Make calls in parallel rather than serially. Test throughout the development lifecycle. However, proper testing with consideration for the mobile network is the only tried-and-true best practice for mitigating the risk of limited bandwidth, latency, jitter and packet loss.
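Berg's parallel-versus-serial point can be illustrated with a minimal Python sketch. The `fetch` function and resource names below are hypothetical stand-ins for real network calls, with `time.sleep` simulating round-trip latency; the timings are illustrative, not measurements of any real network.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(resource, latency=0.05):
    """Hypothetical stand-in for a network call; real code would use an HTTP client."""
    time.sleep(latency)  # simulated round-trip delay on a mobile link
    return f"data:{resource}"

resources = ["profile", "orders", "notifications", "settings"]

# Serial: total time is roughly the SUM of every round trip.
start = time.perf_counter()
serial = [fetch(r) for r in resources]
serial_time = time.perf_counter() - start

# Parallel: total time is roughly ONE round trip, not four.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(resources)) as pool:
    parallel = list(pool.map(fetch, resources))
parallel_time = time.perf_counter() - start

print(f"serial: {serial_time:.2f}s  parallel: {parallel_time:.2f}s")
```

On a high-latency mobile link the gap widens further: each serial call pays the full round-trip cost, while parallel calls overlap that cost.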
In what other ways can organizations build higher-performing mobile applications?
Berg: Developing and deploying better applications means organizations need better approaches to testing. Most organizations are used to testing in a lab across relatively standard and stable broadband network conditions. As a result, we see many mobile applications developed and tested to perform well across broadband networks but not mobile networks. Organizations need to test for performance on virtualized networks that accurately represent the conditions end users experience.
Performance also must shift left and be incorporated in all stages of the development lifecycle. This improves the quality of applications by catching, resolving, or completely avoiding issues sooner. It is also more cost-effective and efficient to resolve issues earlier in the development lifecycle.
For example, Agile development teams can run automated tests on snippets of code on a desktop before they are combined for larger unit or whole application tests. It is faster and cheaper to remediate performance issues this way. And it prevents poor performing applications from getting anywhere near deployment and into the hands of end users.
Who in IT is responsible for mobile application performance?
Berg: It's no longer good enough for just the QA team to test or operations to monitor for performance. Performance should be a priority and responsibility for all members of an IT organization. This change is needed because it's not just IT units that are charged with applications and their success. Business units and managers are ultimately responsible for the performance or failure of mobile apps.
Poor performance drives users away. Exposure to performance issues causes user abandonment and financial damage in terms of lost revenue or productivity. This is why organizations need to find and fix performance issues in the lab before deployment and before users are affected.
What are some key considerations when testing mobile network performance issues in the lab?
Berg: The combination of virtualized load, services, and network forms the foundation for an accurate and reliable performance test. Unless load and services are tested across a virtualized network, you don't see performance as an end user does. You only see it as it performs under pristine, laboratory conditions.
Since both users -- i.e., load -- and services are affected by mobile network conditions, you need to account for these fluctuations and variations when performance testing. Wi-Fi-only tests deliver an inaccurate assessment of user and services impact. They don't give an accurate picture of how an application will perform in the real world.
An end user switching between cell towers on a 3G network will not experience pristine network conditions. Neither will an end user on an LTE network walking into an airport. Those are the conditions you need to test over for an accurate diagnosis of performance issues.
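A minimal Python sketch of injecting degraded conditions into a test run. The delay, jitter, and loss figures are illustrative assumptions, not measurements of any real 3G or LTE network, and `degraded_call` is a hypothetical test helper, not Shunra's product:

```python
import random
import time

class MobileNetworkError(Exception):
    """Raised when the simulated link drops a request."""

def degraded_call(fn, *args, base_latency=0.001, jitter=0.001, loss_rate=0.1, rng=None):
    """Invoke fn as if over a congested mobile link: added delay, jitter, random loss."""
    rng = rng or random.Random()
    if rng.random() < loss_rate:
        raise MobileNetworkError("simulated packet loss / timeout")
    time.sleep(base_latency + rng.uniform(0, jitter))
    return fn(*args)

def get_account():
    return {"status": "ok"}  # stand-in for real application code under test

# Drive the application code through many degraded calls and record outcomes,
# so the test exercises the timeout/retry paths a pristine lab never would.
rng = random.Random(42)
failures = 0
for _ in range(50):
    try:
        degraded_call(get_account, rng=rng)
    except MobileNetworkError:
        failures += 1
print(f"{failures} of 50 calls failed under simulated loss")
```

A test built this way is repeatable: seeding the random generator lets the same "bad network" run be recreated exactly, which matches Berg's later point about verifiable, repeatable lab conditions.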
What's the connection between application monitoring and performance improvement?
Berg: Application monitoring is absolutely needed for performance management. It's important to know what customers are experiencing.
That information, paired with network conditions that can be captured in the real world and recreated in test labs, helps uncover problems not detected prior to deployment.
In this way, monitoring and network virtualization enable a DevOps approach to the development lifecycle that maintains a focus on end user experience. In this DevOps model, Operations can deliver real-world data to developers and testers, who use it for more accurate testing. This improves the quality of applications that then are deployed and managed by Operations.
It also helps to fine-tune or readjust performance service level agreements (SLAs) and monitoring alarms/thresholds. If an SLA or alarm was created without the benefit of testing over a virtualized network, chances are it was pure guesswork. The more real-world data you can incorporate into testing, the more accurate an SLA or alarm you can create. This helps set appropriate expectations with end users and can even protect an organization from lawsuits and financial penalties based on broken contractual agreements.
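Deriving a threshold from measured data rather than guesswork can be sketched in a few lines of Python. The response times below are hypothetical illustrative samples, and the 95th-percentile-plus-headroom rule is one common convention, not a prescription from the interview:

```python
import statistics

# Hypothetical response times (seconds) captured by monitoring over a mobile network.
samples = [0.8, 1.1, 0.9, 2.4, 1.0, 1.3, 3.1, 0.7, 1.2, 1.8,
           0.9, 1.0, 2.0, 1.4, 1.1, 0.8, 2.7, 1.0, 1.2, 1.5]

# quantiles(n=100) returns the 99 percentile cut points; index 94 is the 95th.
p95 = statistics.quantiles(samples, n=100)[94]
median = statistics.median(samples)
print(f"median={median:.2f}s  p95={p95:.2f}s")

# An alert threshold pegged to observed p95 plus headroom is grounded in
# evidence; a number picked without such data is guesswork.
sla_threshold = round(p95 * 1.2, 2)
print(f"suggested alert threshold: {sla_threshold}s")
```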
What advice do you have for developers and testers looking to improve application performance?
Berg: Always prepare for the worst. Whether it's a crush of users responding to the launch of a new product, the loss of a cloud server, or degraded mobile network conditions, always assume worst-case scenarios and test your application against them.
Be scientific about your testing. Have a test lab where conditions can be verified, recreated, and modified accurately and repeatedly.
No matter how much testing you do, chances are something will go wrong at some point. Design the application to fail gracefully and provide useful information to the end user. Notifying them of the issue goes a long way to upholding reputation and buying time to fix the issue.
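Failing gracefully can be sketched as retry-with-backoff plus a user-facing fallback. Everything here is an illustrative assumption: `flaky_request` stands in for a real network call, and the fallback message is one possible design, not the only correct one:

```python
import random
import time

class NetworkTimeout(Exception):
    """Raised by the stand-in call to simulate a dropped mobile connection."""

def flaky_request(rng):
    """Hypothetical network call that times out about half the time."""
    if rng.random() < 0.5:
        raise NetworkTimeout
    return {"status": "ok"}

def fetch_with_fallback(rng, retries=3, backoff=0.01):
    """Retry with exponential backoff, then degrade gracefully instead of crashing."""
    for attempt in range(retries):
        try:
            return flaky_request(rng)
        except NetworkTimeout:
            time.sleep(backoff * (2 ** attempt))  # back off before retrying
    # Graceful failure: tell the user what happened rather than hanging or crashing.
    return {"status": "offline",
            "message": "Connection is unstable. Showing cached data; retrying in background."}

result = fetch_with_fallback(random.Random(7))
print(result["status"])
```

Whatever the outcome, the caller always gets a well-formed response it can render, which is the "useful information to the end user" Berg describes.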