
Easing software performance testing and usability modeling pressures

Learn how to test software performance characteristics, build strong test strategies and use proven modeling techniques in this expert tip. Matt Heusser shows testers how to create strong tests that improve performance quality and report on Service Level Capability in a timely way.

Like quality, performance seems to be intangible and easier to ignore than to measure. Often, rather than being handed to capacity engineers, performance testing just might end up shoved onto your desk -- likely three or four days before the app goes live.

In this tip, I offer some performance testing practices that can help you score some quick wins when the boss comes in and says: "We want to go live with the BigBox project on Monday. I need you to test it." This advice comes from my own experience, but I also must thank the peer reviewers for this tip: performance testing experts Alan Jorgensen, Paul Carvalho, and Jon Bach.

This tip doesn't go into key practices for building capacity test strategies. In theory, at the beginning of a project, a team of usability experts and capacity engineers plans the expected capacity of the system. Then they develop a test strategy they can use to monitor performance and make adjustments. This is a good idea, and one I wholly support, but it's not covered in this article.

User modeling and qualitative assessment

Let's start with modeling what the users do and answering the tough question: "Is it fast enough?" Most of the time, "fast enough" is not really defined. The typical textbook approach is to sit down with the customers and fight to consensus on exact response times in seconds. To be frank, I have not seen a lot of success with this method. When I've seen it tried, the teams have first spent a lot of time not testing, and then claimed the software met the specification, only to have an executive say something along the lines of: "I don't care if we agreed to 5.7 seconds! When users actually experience a delay of five seconds, they think it's too slow. Make it faster."

So, instead of trying to negotiate a Service Level Agreement (SLA), I suggest a different tactic: just test it to determine the Service Level Capability (SLC). Then tell management and the customer what the capability is, preferably by having the customers experience it. If that is fast enough, great. If not, that's still fine: testing has done its job by finding the problem. Solving the problem is an issue for the developers, not for testing.

To do the testing, I suggest starting by modeling the application's use. A fancy way to do that is with User Community Modeling Language (UCML), introduced by the consultant Scott Barber. The short description of UCML is that you map out the key pages of the application and how they transition. Then you try to break down the walk users will take in terms of: "From the home page, 50 percent of users will search; 25 percent of users will go to the shopping cart; 25 percent will go to checkout" and so on. Using those percentages, you can write a load test that takes random walks through the app and measures how long each step takes.
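
Here is a minimal sketch of that kind of random walk in Python. The page paths and transition percentages are hypothetical placeholders built from the 50/25/25 example above, not measurements from any real application, and the requests library simply stands in for whatever HTTP client or load tool you prefer.

# A rough UCML-style random walk, not Scott Barber's actual tooling.
# The base URL, pages and percentages below are illustrative assumptions.
import random
import time

import requests  # third-party HTTP client: pip install requests

BASE = "http://bigbox.example.com"  # hypothetical application under test

# From the home page: 50% search, 25% shopping cart, 25% checkout.
TRANSITIONS = {
    "/home":     [("/search", 0.50), ("/cart", 0.25), ("/checkout", 0.25)],
    "/search":   [("/home", 0.60), ("/cart", 0.40)],
    "/cart":     [("/checkout", 0.70), ("/home", 0.30)],
    "/checkout": [("/home", 1.00)],
}

def next_page(current):
    """Pick the next page using the modeled percentages."""
    pages, weights = zip(*TRANSITIONS[current])
    return random.choices(pages, weights=weights, k=1)[0]

def random_walk(steps=20, think_time=5.0):
    """Walk the model, timing each step the way a stopwatch would."""
    page = "/home"
    for _ in range(steps):
        start = time.perf_counter()
        requests.get(BASE + page, timeout=30)
        elapsed = time.perf_counter() - start
        print(f"{page:12s} {elapsed:6.2f}s")
        page = next_page(page)
        time.sleep(think_time)  # simulated user think time between clicks

if __name__ == "__main__":
    random_walk()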

Unfortunately, load testing is a lot of work, and your boss will likely want answers fast. So one place to start is to walk through those steps yourself -- twice. Do it once with a stopwatch. Then do it again with a pain chart, marking how you feel at each step. After all, the difference between a 3-second delay and a 5-second delay might not mean much to a decision-maker, but the difference between a small smile and a large frown is a big deal. Another way to do this, which I learned from Barber, is to perform the traditional functional tests and, for every check, note the level of user annoyance on the same pain chart.
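
If it helps to keep the two passes consistent, here is a tiny harness for the stopwatch-and-pain-chart walkthrough. The step names and the 1-to-5 annoyance scale are my own assumptions; substitute whatever steps and scale your team actually uses.

# A minimal stopwatch-plus-pain-chart recorder for a manual walkthrough.
# STEPS and the 1-5 scale are placeholder assumptions.
import time

STEPS = ["load home page", "search for an item", "add to cart", "check out"]

def walkthrough():
    results = []
    for step in STEPS:
        input(f"Press Enter and then perform: {step} ")
        start = time.perf_counter()
        input("Press Enter the moment the step finishes: ")
        seconds = time.perf_counter() - start
        pain = input("Annoyance, 1 (small smile) to 5 (large frown): ")
        results.append((step, seconds, pain))
    print(f"{'step':25s} {'seconds':>8s}  pain")
    for step, seconds, pain in results:
        print(f"{step:25s} {seconds:8.1f}  {pain}")

if __name__ == "__main__":
    walkthrough()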

But what about load?

In some cases, the software will be unacceptably slow for even a single user. In that case, testing is done; it's time to fix code or scale the network or the server. If you want to continue, you can test for predicted use (load testing), or keep pushing until the software completely fails in order to understand behavior at the limits of performance, which I will call "stress testing."

To calculate the user base you can support, I suggest some back-of-the-envelope usage math, along these lines: "For so many possible users of our software, only so many will be using the software at a time. Of those online at a time, we expect a user action every so many (30?) seconds. Our simulation, however, will be clicking constantly, only waiting for UI delays. So if our simulation clicks every five seconds, then one simulated user is equal to six real-world users." To start, instrument the server logs to find out how long it is between clicks for one user; you might be surprised.
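
The arithmetic fits in a few lines. The 30-second and five-second figures below are the assumed numbers from the paragraph above, not measurements, so swap in whatever your server logs actually show.

# Back-of-the-envelope conversion from simulated users to real-world users.
real_seconds_between_clicks = 30.0      # measure this from your server logs
simulated_seconds_between_clicks = 5.0  # how often the load script clicks

# One constantly clicking simulated user stands in for this many real users.
real_users_per_simulated_user = (
    real_seconds_between_clicks / simulated_seconds_between_clicks
)
print(real_users_per_simulated_user)  # 6.0

# So a load test with 50 simulated users approximates 300 real users online.
simulated_users = 50
print(simulated_users * real_users_per_simulated_user)  # 300.0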

When I do load testing, my goal is not to give management a simple "yes/no" but to provide information about how the software degrades under load. What to do with that information is a management decision.

Now here are three quick ways to actually generate that load:

  • First, you may find that asking the entire team or department to "hit the software" for an hour will simulate a reasonably large number of users. It's quick, dirty, cheap and unlikely to bake in incorrect assumptions about user behavior.
  • A second option is to use a tool such as JMeter (open source) or LoadRunner (commercial) to simulate load on a web server. But be careful: a bad assumption in a load test invalidates the test, and those 'cheap' load tests are likely to contain assumptions. For example, I find that when I use such tools and simulate more than 20 users, the bottleneck is often my own pipe to the internet; things slow down for me that might not for distributed users. If you must use a tool like JMeter, I would advise putting the server and the 'hammer' computer in the same facility, connected by Gigabit Ethernet, and monitoring traffic. (A do-it-yourself sketch of this kind of hammer follows this list.)
  • Finally, you could work with a company like BrowserMob or SOASTA that can simulate traffic from all over the world. Outsourcing load testing sounds appealing, but it can be time-consuming and expensive to set up. All of a sudden, we aren't just testing; it's a performance project, or a fixing project.
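
To make the second option concrete, here is a rough do-it-yourself hammer in Python -- a sketch, not a replacement for JMeter or LoadRunner. The URL, click interval and user counts are hypothetical, and as noted above you'll want to run it from a machine on the same network segment as the server so your own pipe doesn't become the bottleneck.

# A crude load "hammer": step up the number of simulated users and report
# median response time at each level, so you can show how the software
# degrades rather than a simple pass/fail. All constants are assumptions.
import statistics
import threading
import time

import requests  # pip install requests

URL = "http://bigbox.example.com/search"  # hypothetical page under test
CLICK_INTERVAL = 5.0    # seconds between clicks per simulated user
CLICKS_PER_USER = 10

def simulated_user(timings):
    for _ in range(CLICKS_PER_USER):
        start = time.perf_counter()
        try:
            requests.get(URL, timeout=30)
            timings.append(time.perf_counter() - start)
        except requests.RequestException:
            timings.append(float("inf"))  # count failures as "fell over"
        time.sleep(CLICK_INTERVAL)

def run_load(users):
    timings = []
    threads = [threading.Thread(target=simulated_user, args=(timings,))
               for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return statistics.median(timings)

if __name__ == "__main__":
    for users in (1, 5, 10, 20):
        print(f"{users:3d} simulated users -> median {run_load(users):.2f}s")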

Adding performance value

Let's say I provide the numbers to the boss, reporting that the software becomes unacceptably slow with five simultaneous users and falls over entirely at 10. We could argue that "we're done," but the boss doesn't want a failure note; he wants the software to meet the requirements. The next step in adding value as performance testers is figuring out where the bottleneck is.

There are a couple of ways to do this. You could be a generalist with a deep knowledge of performance monitoring for the network, the CPU, the hard drive, JavaScript in the browser, AJAX and so on. Or you could be a sleuth who knows whom to ask and how to get the right answers. In my experience, a good team views performance as everyone's problem, and the whole-team approach is a lot more helpful.
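
For the generalist route, the sleuthing often starts with basic resource counters on the server while the load runs. Here is a minimal sketch using the psutil library; the one-second sample interval and one-minute window are assumptions, and on a real project you'd likely lean on whatever monitoring the operations team already has.

# Sample CPU, memory, disk and network counters while a load test runs,
# to see which resource saturates first. Requires: pip install psutil
import psutil

def sample(seconds=60, interval=1.0):
    disk0 = psutil.disk_io_counters()
    net0 = psutil.net_io_counters()
    for _ in range(int(seconds / interval)):
        cpu = psutil.cpu_percent(interval=interval)  # blocks for `interval`
        mem = psutil.virtual_memory().percent
        disk1 = psutil.disk_io_counters()
        net1 = psutil.net_io_counters()
        print(f"cpu {cpu:5.1f}%  mem {mem:5.1f}%  "
              f"disk {(disk1.write_bytes - disk0.write_bytes)/1e6:8.1f} MB written  "
              f"net {(net1.bytes_sent - net0.bytes_sent)/1e6:8.1f} MB sent")
        disk0, net0 = disk1, net1

if __name__ == "__main__":
    sample()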

Sometimes, you just won't be able to make the software fast enough. In that case, I try to provide options to management instead of statistics. For example, on one project, we went live with support for Firefox 3, but not Internet Explorer. IE was "use at your own risk," because the JavaScript was incredibly slow. That allowed us to hit the deadline and save face at the same time.

Schedule reality

No, it's not fair that so many projects put off performance testing until the end. Yes, it would be helpful to change the culture and architecture to make it possible to performance test early and often, and that might make a nice follow-up article. In the meantime, I hope I've provided some concrete ideas for you to consider when dealing with the world as it is -- not just how we'd like it to be.


About the author: Matt Heusser is a technical staff member of SocialText, which he joined in 2008 after 11 years of developing, testing and/or managing software projects. He teaches information systems courses at Calvin College and is the original lead organizer of the Great Lakes Software Excellence Conference, now in its fourth year. He writes about the dynamics of testing and development on his blog, Creative Chaos.

This was last published in November 2009
