Acceptable application response times vs. industry standard

Rather than asking what the industry standard response time for an application is, ask what response time your users will find acceptable. Relying on a faulty industry standard can lead to the development of frustrating applications, testing expert Scott Barber says.

Scott Barber, software tester

It feels like hardly a single day has passed in the past six years without someone asking me this question: "What is the industry standard response time for a Web page?"

And in the past six years, the answer hasn't changed, not even a little bit. So if the answer hasn't changed, why am I still getting asked the question on virtually a daily basis?

The answer is simple. It's because there are no industry standards. How could there be? Think about how you use the Web. How long were you willing to wait for this page to load? How long are you willing to wait to view your family's online photo album? How long are you willing to wait for your tax software to confirm that your return has been submitted successfully? Are those numbers the same when you are at home as when you are at work? How about when you are using the wireless connection in an airport?

Your actual numbers don't really matter. The point is that no one number could possibly be the answer -- at least until Web pages start regularly responding in less than 0.25 seconds. Until then, what you are measuring is a combination of your current expectations about Web page response time and your determination to accomplish tasks via the Web.

This is because by the early 1980s, cognitive psychologists had already determined that a delay of more than one quarter of a second between an action and a response, on a computer or otherwise, noticeably impairs human performance, increasing error rates and the probability of switching to a competing task. So, as far as I'm concerned, until our Web sites break that 0.25-second barrier, what matters more than agreeing on a standard is staying ahead of the expectations of our users.

For years, the most commonly quoted standard was the so-called "8-second rule." This was based on research Nielsen Media conducted in the late 1990s, which concluded that most Internet users wouldn't give up on the task they were trying to accomplish as long as the Web site responded within 8 seconds. While that was certainly an interesting piece of research, it had nothing to do with user satisfaction, nor was it ever intended as an industry standard. What it did measure was the degree to which people had come to accept that, if they wanted to accomplish a task on the Web, 8 seconds was how long it was bound to take over their 33.6 Kbps modems. I can assure you that if those users had been presented with the option of one site with an 8-second response time and a competing site with a 3-second response time, they would have flocked to the 3-second site without a second thought.

In November of 2006, a new study popped up that almost immediately replaced the "8-second rule" with a "4-second rule." The title of the press release is "Akamai and Jupiter Research Identify '4 Seconds' as the New Threshold of Acceptability for Retail Web Page Response Times" and its first line reads as follows:

"CAMBRIDGE, MA — November 6, 2006 -- Four seconds is the maximum length of time an average online shopper will wait for a Web page to load before potentially abandoning a retail site."

As this claim cut the existing "rule" in half, I found it intriguing, so I downloaded the whole report, only to discover that this "new rule" was based on 1,058 responses to the following survey question:

"Question: Typically, how long are you willing to wait for a single Web page to load before leaving the Web site? (Select one.)
A. More than 6 seconds.
B. 5-6 seconds.
C. 3-4 seconds.
D. 1-2 seconds.
E. Less than 1 second."

Clearly, this "new rule" is no more an industry standard than the Nielsen research from nearly a decade before. The Nielsen research was at least observationally accurate, if misused; this research simply demonstrates that we all learned the same rule for taking multiple-choice tests in junior high school: "When you have no idea what the correct answer is, pick C; you might get lucky."

Try it yourself. Ask the person in the office next to you this question and see what his or her answer is. Then ask your guinea pig to surf the Web and find a Web page that loads in the same time bracket as his or her answer. Use your watch to see how close his or her estimate comes to the actual load time. Do that with 10 people and see what kind of accuracy you get. If you'd rather not trust a watch, a small script like the sketch below can do the timing for you.
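
Here is a minimal sketch of that stopwatch in Python, assuming the requests library is available (any HTTP client would do). Note that it times the raw HTTP download, not the browser's full page render, so it will tend to understate what your guinea pig actually experiences; the URL is a hypothetical placeholder.

import time
import requests

def timed_fetch(url: str) -> float:
    """Fetch a page and return the elapsed wall-clock time in seconds."""
    start = time.perf_counter()
    response = requests.get(url, timeout=30)
    response.raise_for_status()  # surface HTTP errors rather than timing them
    return time.perf_counter() - start

if __name__ == "__main__":
    url = "https://example.com/"  # hypothetical page; substitute your own
    guess = float(input("How many seconds do you think this page takes? "))
    actual = timed_fetch(url)
    print(f"Guess: {guess:.1f}s  Actual: {actual:.2f}s  "
          f"Off by: {abs(guess - actual):.2f}s")

Run it against a handful of pages and compare the printed times with your colleagues' guesses; the gap between the two is usually the interesting part.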

I have been doing performance testing long enough to know that Web surfers have no idea how long 4 seconds is. In fact, I promise that if someone were to sit down with those respondents and ask them to identify how many seconds various pages took to load, *most* of them would not get it right, and we would find that *most* of the wrong ones *think* a page takes longer to load than it actually does.

The real question is not "What is the industry standard?" but rather "What response time will the users of my Web site or application find acceptable?" The challenge is that determining what your users will deem "acceptable" is both difficult and subject to significant change over short periods of time. Software development shops don't want to do regular usability studies with groups of representative users because such studies are time-consuming and expensive. For the most part, they don't have the resources or the training to conduct those studies even if they wanted to, which is why so many folks keep latching onto narrowly conducted, anecdotal research and proclaiming a standard.

The real problem is that defaulting to a faulty standard is actually more likely to lead people to develop and release Web sites that users find frustrating due to poor performance than if those same people simply sat down, used the site, and decided whether performance was good enough based on how it felt. Once you have done that with a few representative users and settled on a number they actually find acceptable, you can hold your tests to it, as in the sketch below.
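
Here is a minimal sketch of such a check in Python, assuming hypothetical sample timings and a hypothetical 3-second target supplied by your own users rather than by an industry standard. It compares the 95th-percentile response time against that target:

import statistics

def percentile(samples: list[float], pct: int) -> float:
    """Return the pct-th percentile of samples (1 <= pct <= 99)."""
    cut_points = statistics.quantiles(samples, n=100, method="inclusive")
    return cut_points[pct - 1]

# Hypothetical measured response times, in seconds, from one test run.
response_times = [1.2, 1.4, 0.9, 2.8, 1.1, 3.6, 1.3, 1.0, 2.2, 1.5]

# Hypothetical threshold your representative users deemed acceptable.
threshold = 3.0

p95 = percentile(response_times, 95)
print(f"95th percentile: {p95:.2f}s (target {threshold:.1f}s)")
print("PASS" if p95 <= threshold else "FAIL: revisit with real users")

A percentile, rather than the average, is the number worth watching here, because a site that feels fine on average can still frustrate the slowest one user in twenty.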

----------------------------------------
About the author: Scott Barber is the chief technologist of PerfTestPlus, executive director of the Association for Software Testing and co-founder of the Workshop on Performance and Reliability.


This was first published in March 2007
