In this application performance testing tutorial, find tips, how-tos and screencasts on best practices from testers and experts in the software testing industry.
Software performance optimization techniques and strategies
Application performance testing challenges
Application performance testing tips and how-tos
Performance testers' application performance stories, case studies
Recently the word "app" entered our common lexicon as a proliferation of applications changed every aspect of our lives, from how and where we conduct business to how we consume our news. With so much of our lives riding on applications -- including business success -- getting strong performance from those apps has become a mission-critical concern.
"If 2009 was the year of the application, 2010 is bound to be the year of application performance," Theresa Lanowitz, founder of voke, told SearchSoftwareQuality recently. "Software runs the business, and software performance is critical to business success."
The traditional answer to poor application performance was to throw more hardware at the problem. But this is no longer viable as IT organizations must do more with fewer resources and market forces drive businesses to differentiate from their competitors. "It's easy to walk away from one application and go to another if the application doesn't suit your needs," said Lanowitz.
As a result, application performance testing is receiving more attention within IT and from lines of business. According to Lanowitz, performance testing skills are at a premium. This is fortunate for application performance testers, as it ensures a degree of job security. But even performance testers are tasked with doing more with less as an increasing number of applications and application changes come down the software development pipeline.
The goal of any software developer should be to optimize an application for performance before it goes into production. To do so, software developers and performance testers must learn how to incorporate performance considerations into the development lifecycle, starting with the requirements phase. They must also improve their testing and monitoring strategies.
Gathering solid requirements is always a challenge, and gathering performance requirements adds an extra level of complexity, according to Oliver Erlewein, test manager for Datacom, an IT services firm. Stakeholders and customers are often unable to articulate useful, testable requirements. In this article, Erlewein offers seven tips for determining appropriate performance requirements that can be tested throughout the development lifecycle. Knowing the right conversations to have with stakeholders and technical project team members leads to high-quality documentation of quantifiable, detailed performance requirements. With this in hand, the team can focus on the right tests and ensure the product will perform at its best, leaving customers smiling.
Performance is often based on trade-offs that occur throughout the application lifecycle. Sometimes simple application lifecycle performance testing and monitoring strategies, such as tying performance metrics back to business metrics, help software test and APM teams manage some of those tradeoffs.
For example, by considering how various consumers of a service will use it and when, each test scenario tells a story. "When you find a problem, you're not telling the project team, 'When we hit X concurrent transactions we slow down 20%.' You're instead saying, 'When client X logs in each day, all transactions currently being processed by the system will fall outside of tolerance, is that okay?'" said consultant Mike Kelly in a recent SearchSoftwareQuality interview.
In order to determine whether an application is "fast enough," you need performance metrics for how fast the system is currently. With these metrics you can have a conversation with stakeholders to define how fast is actually fast enough. Boutique tester Matt Heusser and Erlewein have worked on projects wherein they increased performance by aligning test metrics.
Performance seems to be intangible and difficult to measure, said Heusser, but it's not. In the tip above, he and Erlewein describe different, and not difficult, means for gathering and interpreting metrics, including using common graphs and terminology. Actually, you can use a browser, server logs, browser plugins, a calculator, and even a stopwatch to evaluate application performance without a performance testing tool. Doing so, you "can focus your testing or mitigate risks early on. It often precludes or skips the step of 'proof by metrics'," Heusser and Erlewein concluded.
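The stopwatch approach Heusser and Erlewein describe can be sketched in a few lines. This is a minimal illustration, not their method verbatim: it uses Python's standard library to time a stand-in transaction, the way a tester might time page loads by hand before reaching for a dedicated tool. The "login" transaction here is hypothetical.

```python
import time
from contextlib import contextmanager

@contextmanager
def stopwatch(label, results):
    """Time a block of code and record the elapsed seconds under a label."""
    start = time.perf_counter()
    yield
    results.setdefault(label, []).append(time.perf_counter() - start)

# Time a simulated transaction a few times, as a manual tester might
# with a real stopwatch or a browser plugin.
timings = {}
for _ in range(3):
    with stopwatch("login", timings):
        time.sleep(0.01)  # stand-in for a real request

avg = sum(timings["login"]) / len(timings["login"])
print(f"login: avg {avg:.3f}s over {len(timings['login'])} runs")
```

Collecting even rough numbers like these gives the team a baseline to compare against after each change, which is often enough to focus testing or flag risks early.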
A common performance problem that occurs in the requirements phase of an application build is that no one knows what level of performance is required of the application. In this screencast, test consultant Mike Kelly advises software pros to focus on the performance aspect of application builds. He also discusses other common performance problems that arise during application builds and how to solve them.
"Starting with the problem you're trying to solve, and working backwards from there, is the best way to get these types of numbers," said Kelly. "Understanding the overall transaction, and then understanding where the specific timing you're trying to measure fits in, is the best way to figure out what type of SLA you're going to need."
One of the biggest problems former Sun performance test manager Yvette Francino had was "isolating the source of the problem, particularly with intermittent performance problems." "In a complex application, there are so many components and variables that can factor into performance, including all the hardware, the network, and all the software," said Francino, currently site editor for SearchSoftwareQuality.
Performance testing throughout the development lifecycle can help ensure that each component of an application is performing optimally before it is deployed into production. Francino examines how to meet performance testing goals before moving an application into production in this Q&A. She also provides an overview of the types of testing that should be performed during the early stages of design, development and system test to optimize performance regardless of hardware configurations.
In Mike Kelly's work doing application performance testing across company networks, he has found that performance testing applications for a distributed environment requires taking into account the unique challenges introduced by that environment. Concerned with how the WAN will affect performance of an application at specific locations, Kelly has used such strategies as shipping a small army of laptops to remote offices, each with a load generator set up and configured. With the laptops in place, he's explored mixed rollout models, where some users access the system directly (http/https) while others access the system using Citrix.
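Kelly's remote laptops ran full load generators; purely to illustrate the idea of concurrent virtual users executing transactions from a test site, here is a minimal Python sketch. The transaction is a stand-in, where a real generator would issue HTTP requests or drive a Citrix session.

```python
import threading
import time

def run_user(user_id, transaction, results, iterations):
    """Simulate one virtual user repeatedly executing a transaction."""
    for _ in range(iterations):
        start = time.perf_counter()
        transaction(user_id)  # e.g., an HTTP request or a Citrix-session action
        results.append(time.perf_counter() - start)

def load_test(transaction, users=10, iterations=5):
    """Run `users` concurrent virtual users and collect response times."""
    results = []
    threads = [threading.Thread(target=run_user,
                                args=(u, transaction, results, iterations))
               for u in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Stand-in transaction: a short sleep in place of real network traffic.
times = load_test(lambda uid: time.sleep(0.001), users=5, iterations=3)
print(f"{len(times)} transactions, max {max(times) * 1000:.1f} ms")
```

Run from a laptop at each remote office, even a simple harness like this would surface WAN latency differences between locations that a single central test rig would miss.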
Time and again, software development and testing experts point to rookie mistakes and amateur practices as performance killers. These mistakes aren't made only by newbies, however. Tight deadlines, understaffing and the complexity of applications and new environments -- including cloud, Web and virtual appliances and machines -- make application performance management (APM) challenging enough that software test veterans err, too.
With the proliferation of applications, software developers are faced with three primary challenges: budget/resource constraints, customer satisfaction and the organization. Analyzing the evolution of APM tools, trends and challenges in this screencast, Lanowitz sees opportunities in automated tools, Agile processes and elsewhere. IT, project and test managers have the opportunity to initiate a global lifecycle transformation that shatters silos, breaks down bottlenecks, and delivers valuable and predictable business outcomes. She pointed to a recent voke study on how performance skills are being used in organizations today, noting that software developers and managers have implemented automated tools, centralized APM and Agile processes to actually improve application performance despite skills and staffing shortages.
Sometimes performance problems show up in production despite baseline, load and stress testing. SearchSoftwareQuality expert Mike Kelly was on a production support project that suffered from various performance issues.
A lack of communication within the testing group complicated matters in the project on which Kelly worked. "After the third month of large production performance issues, we all started to get into the same room together after each test," he said. "It was amazing the difference it made. We started correlating logs and response times, we noticed how certain errors rippled across the system, and we got better about sharing information, access, and tools."
Kelly shared the lessons he learned about resolving issues in baseline, load and stress testing in this recent Q&A.
When testing Web services' performance with load-testing tool soapUI, you have the choice of a number of strategies.
If you're not sure which one to use, Kelly suggests running concurrent tests using different strategies. "Pick a couple of baseline scenarios -- using either the simple, variance, or thread strategies -- using whichever get you closest to what you believe your usage will look like, and then overlay other tests on top of them," said Kelly. "This is where you develop some what-if scenarios to see how your service responds." Kelly explains how to create and run simple and burst load tests with soapUI in the above-mentioned advice column.
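soapUI's simple and burst strategies are configured through its GUI rather than written as code; to show what a burst strategy does, here is a hypothetical Python sketch that fires bursts of concurrent calls separated by quiet periods. The transaction is a stand-in for a real Web service request.

```python
import threading
import time

def burst_load(transaction, bursts=3, burst_size=5, idle_seconds=0.05):
    """Fire `burst_size` concurrent transactions, pause, and repeat."""
    timings = []

    def one_call():
        start = time.perf_counter()
        transaction()
        timings.append(time.perf_counter() - start)

    for _ in range(bursts):
        threads = [threading.Thread(target=one_call) for _ in range(burst_size)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        time.sleep(idle_seconds)  # quiet period between bursts
    return timings

# Stand-in transaction: a short sleep in place of a real service call.
times = burst_load(lambda: time.sleep(0.002))
print(f"{len(times)} calls across bursts, worst {max(times) * 1000:.1f} ms")
```

A burst pattern like this probes how a service recovers between spikes, which a steady simple-strategy load won't reveal; overlaying the two, as Kelly suggests, builds the what-if scenarios.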
Contrary to popular belief, performance testing doesn't have to be expensive. Development and test consultant Chris McMahon has found that building performance measurements into ongoing testing and into the application itself can help reduce and even eliminate the cost of performance testing.
By creating and analyzing logs of the time it takes for transactions to occur at various interfaces within an application, testers can discover performance bottlenecks indicated by undesirable performance measurements, according to McMahon. Fixing these bottlenecks in the production environment as part of the ongoing work of the development team can result in a uniform performance profile.
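McMahon's idea of building timing measurements into the application itself can be illustrated with a logging decorator. This is a generic sketch, not McMahon's implementation, and `fetch_customer` is a hypothetical interface standing in for a database or service call:

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("perf")

def timed(fn):
    """Log how long each call to an interface takes; the resulting logs
    can later be mined for performance bottlenecks."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            log.info("%s took %.1f ms", fn.__name__,
                     (time.perf_counter() - start) * 1000)
    return wrapper

@timed
def fetch_customer(customer_id):
    time.sleep(0.005)  # stand-in for a database or service call
    return {"id": customer_id}

fetch_customer(42)
```

Because the timing lives in the application, every environment -- test and production alike -- produces the same performance log for free, which is exactly how the cost of a separate performance testing effort shrinks.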
Bugs that impact performance can be introduced to applications at any point during the development process. One such bug is inefficient code, according to software test and APM consultant Randy Rice.
"Complex code has a lot of decision points, and if you don't handle them with an eye to efficiency, you can come up with time consuming routines that are performed," explained Rice, principal consultant and vice president of Rice Consulting.
Naturally, code isn't the only culprit. Other common bug producers can be process design constraints, testing data contention and hardware capacity.
Let's look now at how companies are handling everyday application performance issues, as we hear from test managers at Thomson Reuters, SafeAuto Insurance, Raymond James Financial, JN Data and CareGroup.
Before she begins testing on a new project, Cristina Lalley, lead performance test engineer for Thomson Reuters, interviews stakeholders to learn about the project, its features and stakeholders' expectations.
In her daily application performance testing work, Lalley has found that answers to standard questions -- such as "What do you intend to accomplish with performance testing?" -- are helpful when it comes time to design tests and establish objectives and thresholds around them.
When performance problems in production directly affect the bottom line, as they do for SafeAuto Insurance, a pre-emptive approach to eliminating bugs may be the answer.
SafeAuto implemented an application performance management (APM) system that provided visibility across its entire application infrastructure, including .NET, DB2, and SQL. The benefits gained were around-the-clock monitoring of server performance, memory allocation, applications, transaction timing, database traffic and more. In this article, you can discover more about how SafeAuto used this visibility to improve performance and how the implementation of the APM tool was accompanied by a cultural change that also contributed to cost savings.
When considering an APM tool, Raymond James Financial's database administration manager Chad Miller recommends drilling into the details of how the product works, including what the load is and what kind of information it returns. Seven years ago, Miller evaluated APM tools that could find and diagnose performance problems at Raymond James. What sold Miller on one product was the methodology used: wait-stat analysis. This blow-by-blow description of Raymond James' APM plus load testing project describes how the company chose, used and benefited from an automated APM tool.
Data centers can fall behind on delivering contracted services when application performance issues occur. For Danish data center firm JN Data, the inability to find performance problems before they reached production was resulting in the blame game. And when the data center couldn't identify the root cause of a performance issue, it simply threw more hardware at the problem. That didn't work, so the team took a more strategic approach to performance management, choosing APM tools that helped expose the root cause of performance problems and measure Web application performance for each individual customer by username. This article details JN Data's experience evaluating and using an APM tool to improve performance.
CareGroup HealthCare System in Boston develops about 50% of its internal applications, so maximizing performance for those apps is an in-house job. That's why CareGroup was an early adopter of automated tools to solve application performance and quality issues. The result, as described in the article above, is higher-quality applications and fewer production performance glitches.
Using application monitoring and management tools, the company's application development and management teams have broken down barriers between internal groups and technologies. This is crucial in today's complex application environments, where insight into performance on the Web server, the application server and the database server is needed almost simultaneously, said Jasmine Noel in the article above. An analyst at Ptak, Noel & Associates LLC, IT operations and management consultants in New York, Noel has found that ensuring performance requires following the transaction through all layers of the application and IT framework. It's easier to do this if development, testers and the "ops folks" collaborate.
This was first published in September 2010