Improving software performance: Mobile, cloud computing demand APM

Learn strategies for developing and maintaining software performance that delivers business value and satisfies mobile and cloud computing users' demand for instant application access. Experts advise on requirements, testing and application lifecycle management.

Application performance still takes a back seat to speedy development these days, even though slow application performance is commonplace, according to software performance experts Scott Barber and Theresa Lanowitz. They see only a minority of organizations creating and maintaining cradle-to-grave application lifecycle performance strategies, even though most businesses today depend heavily on software.

That laissez-faire attitude could soon lead to a fall, due to the rise of cloud computing, enterprise-connected mobile devices and the consumerization of software applications, according to these experts. Users’ tolerance for slowdowns and outages is wearing thin.

“Now that people want to use IT apps -- like SAP -- from anywhere, anytime, on any device, performance becomes of utmost importance,” said Lanowitz, analyst and founder of voke inc. Already, she noted, there is dissatisfaction with cloud computing because service providers are finding it hard to establish and live up to service level agreements.

Accommodating growing demand for better application performance is very challenging, because software environments are more complex than ever before. “We’re not talking websites anymore. We’re talking Web application, multi-users on shared backend, many technologies put into client-side, server-side,” said Barber, president of PerfTestPlus, a software performance consultancy. “A speed-only focus won’t do it.”

In this article, Lanowitz and Barber analyze the current software performance scene and offer best practices for building strong performance into software and tuning performance to business goals.

Connecting software performance with business goals

“Business and development execs are still not connecting software performance to business value,” said consultant Scott Barber, who is co-author of Performance Testing Guidance for Web Applications. “They don’t really understand what is encompassed under the performance umbrella.”

Often, there is a disconnect between what customer service experiences and what is reported to IT and the business, said Lanowitz. Frequently, the performance metrics are set so that IT sees a problem only when it is severe: customer service is fielding complaints about a slowdown while all of IT’s metrics show green. And the metrics the business side sees often reveal less detail than what IT sees.

“Business and IT aren’t seeing reality in the metrics,” she said.
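One common culprit, though the experts don’t name a specific mechanism, is alerting on averages or on thresholds set far above what users will tolerate. A minimal sketch in Python, with hypothetical response times and a hypothetical alert threshold, shows how an average can stay “green” while a meaningful share of users suffers:

    # Hypothetical response times (seconds) for ten requests during a slowdown.
    # Most requests are fine, but three users are waiting seven seconds or more.
    response_times = [0.4, 0.5, 0.4, 0.6, 0.5, 7.2, 0.4, 8.1, 0.5, 6.9]

    average = sum(response_times) / len(response_times)

    # Crude 95th percentile: sort and take the highest value excluding the top 5%.
    ranked = sorted(response_times)
    p95 = ranked[int(0.95 * len(ranked)) - 1]

    ALERT_THRESHOLD = 3.0  # seconds; an assumed alert line tuned to the average

    print(f"average: {average:.2f}s -> {'RED' if average > ALERT_THRESHOLD else 'green'}")
    print(f"p95:     {p95:.2f}s -> {'RED' if p95 > ALERT_THRESHOLD else 'green'}")

Here the average works out to 2.55 seconds, comfortably under the threshold, while the 95th-percentile figure of 7.2 seconds flags exactly the slowdown customer service is hearing about.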

This common disconnect usually occurs because business and tech-side executives are not actually talking to each other about how performance relates to delivered value. Development and IT are setting metric targets that don’t relate to the desired business goal. So, executives are getting numbers and not answers. “If the numbers had answers buried in them, the answers were lost in translation,” Barber said.

Business executives usually will send down requests for application performance, saying it needs to be fast or support some ridiculous number of users, said Barber. By the time that request makes it down to people who are designing, building and testing the new application, the logic behind the request gets lost.

“Development teams I’ve worked with usually believe they were delivering what they were asked for,” said Barber. “If they’re not, it’s because top execs aren’t talking to them.”

To match up app performance delivery to business value, business executives need to meet with the development team and talk about concerns, rather than passing requirements down through channels, according to both Barber and Lanowitz.

“So many other problems go away when people building the application have a clear understanding of business goals and risks, and decision makers have a clear understanding of what is technically possible for developers to do,” said Barber. When business and development teams meet together to discuss software performance requirements, these questions are among those that should always be asked.

  • What is the core risk that software performance can mitigate?
  • What business goals are enabled by strong software performance?
  • How can we quantify and set targets for performance in terms of business value (e.g., views of informational pages, sales increases, time to complete a transaction, transaction volume, accommodating peaks, reduction in phone calls or emails)?
  • What investment is needed to get optimum application performance? What is the return-on-investment in terms of business value and dollars saved and/or made?
  • How can the company balance time-to-market and performance goals?
  • How do we address all the components of performance -- speed, reliability, stability, etc. -- in a coordinated effort to get all the pieces performing well?
  • How can we get everyone on the team to think of their daily tasks -- whether they’re writing code, configuring systems, testing, planning marketing, etc. -- in terms of meeting business goals and reducing risk?
  • Before purchasing performance tools, how can we test them to make sure they will deliver what’s needed to meet business goals?

Test application performance from the get-go

Testing application performance early in the development process is the best and easiest way to ensure that goals will be met in production, our experts agreed.

The notion that the only performance testing needed is load testing just before going into production is still prevalent, but just plain wrong, Lanowitz said. Waiting to test until the end of the development cycle almost ensures that it will be too late to fix defects, particularly architecture defects. As a result, the organization has to live with those defects through the application’s whole lifecycle.

“You can’t just run some scripts at the end,” said Lanowitz. “If that’s what you’re doing, how will you know if you can handle a burst in activity, like a spike in shopping or a big event?”
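Lanowitz doesn’t prescribe a tool, but the burst check she describes can be sketched in a few lines. This is an illustrative sketch only, not a production load test; the endpoint URL and user counts are placeholders, and a real effort would use a dedicated load-testing tool:

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "http://localhost:8080/checkout"  # placeholder endpoint
    BASELINE_USERS = 10
    SPIKE_USERS = 100  # simulated burst, e.g., a shopping spike or big event

    def timed_request(_):
        start = time.monotonic()
        with urlopen(URL, timeout=30) as resp:
            resp.read()
        return time.monotonic() - start

    def run_burst(concurrency):
        # One worker per simulated user approximates simultaneous traffic.
        with ThreadPoolExecutor(max_workers=concurrency) as pool:
            timings = sorted(pool.map(timed_request, range(concurrency)))
        p95 = timings[int(0.95 * len(timings)) - 1]
        print(f"{concurrency:>4} users: p95 = {p95:.2f}s")

    run_burst(BASELINE_USERS)  # normal traffic
    run_burst(SPIKE_USERS)     # does response time degrade gracefully?

Comparing the two runs shows whether response times hold up when traffic jumps tenfold, which is the question a last-minute script run never answers.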

Software architects, database administrators, testers and their peers should put timers on functions and procedures when doing unit testing, Barber said. This is a minutes-a-week way to check at the basic level to make sure that something isn’t fundamentally clogging performance.
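Barber doesn’t specify a mechanism, but the idea folds easily into an ordinary unit test. A minimal sketch, in which the function under test and the time budget are hypothetical:

    import time
    import unittest

    def lookup_customer(customer_id):
        """Stand-in for the routine under test."""
        ...  # real implementation here

    class LookupCustomerTest(unittest.TestCase):
        TIME_BUDGET = 0.05  # seconds; a rough early-warning ceiling, not an SLA

        def test_lookup_stays_within_budget(self):
            start = time.monotonic()
            lookup_customer(42)
            elapsed = time.monotonic() - start
            self.assertLess(
                elapsed, self.TIME_BUDGET,
                f"lookup took {elapsed:.3f}s; something may be clogging it")

    if __name__ == "__main__":
        unittest.main()

A budget like this is deliberately loose; its job is to catch a routine that suddenly takes ten times longer, not to enforce a service-level target.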

“Then, when you get to the end of development, and you run that big load test, you’re going to be dealing with real issues rather than what I call little oopsies, like license renewals,” said Barber. When you’re two weeks from delivery, you’ll be tracking and fixing real issues and not a bunch of little things.

Performance testing should be done regularly throughout the application lifecycle, right up to decommissioning. Consider, said Lanowitz, late-lifecycle testing that yields capacity-planning and performance-planning information for the next version. Naturally, this information should be evaluated by a joint business-tech team. Without cross-team coordination, you end up with a ton of improvements in the next version that might not make a difference to application performance as a whole.

While test experts should be handling the testing process, the entire team must set the parameters for tests, making sure that tests deliver information useful to all parties. For example, dev/test people may be focused on testing binary requirements or quality of code, and business people on retaining customers. Figure out how to relate the two, advised Barber.

Investing in superior software performance

Research by voke inc. has shown that companies think it’s too expensive to get top performance from software. They are not considering -- or, more importantly, quantifying -- the dollars, cents and customers lost as a result of not investing, Lanowitz said.

For example, Shopzilla recently increased sales by 10 percent by cutting page load times from seven seconds to two. Conversely, when the London 2012 Olympics ticket site failed, the result was a day of lost sales and angry customers who set Twitter afire.
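The Shopzilla figure suggests how simply the business case can be quantified. In this back-of-the-envelope sketch, only the 10 percent uplift comes from the article; the revenue and investment numbers are assumed:

    # Hypothetical baseline; only the 10% uplift figure comes from the article.
    annual_revenue = 50_000_000        # dollars, assumed
    uplift_from_faster_pages = 0.10    # Shopzilla's reported sales increase
    performance_investment = 500_000   # assumed cost of the tuning effort

    gained = annual_revenue * uplift_from_faster_pages
    roi = (gained - performance_investment) / performance_investment

    print(f"extra revenue: ${gained:,.0f}")           # $5,000,000
    print(f"net return: {roi:.1f}x the investment")   # 9.0x

Even with far more conservative assumptions, the arithmetic makes the cost of not investing concrete in exactly the way Lanowitz recommends.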

Putting an emphasis on application performance is not a costly, headcount-intensive process, said Lanowitz, and neither is creating meaningful performance checkpoints and assessments throughout the lifecycle. Barber agreed, noting that executive commitment is what counts, not spending money on performance tools.

“Figure out how to be proactive about software performance, because demand for better performance is going to increase,” said Barber.

This was first published in July 2011
