What is performance testing?

Determining what exactly performance testing is proves to be more difficult than you'd expect. Testing expert Scott Barber attempts to pinpoint a definition while recognizing that it may be impossible for the industry to settle on one set explanation.

What is performance testing? That seems like a silly question, doesn't it? I mean, we've all seen definitions for performance testing. We've conducted performance tests -- or been on projects where performance testing is conducted. But what is it really? And why is it that even when there seems to be obvious confusion about what performance testing is and is not, people seem hesitant to step back and ask, "What do you mean when you refer to performance testing?"

While I was working on some new training material the other day, I typed exactly this question on the top of the first content slide for what is to become a course for the University of California Extension, Santa Cruz. I figured this was a nice easy place to start, to ensure the class started out with a common foundation. After about half an hour of typing and deleting information on that slide, it dawned on me that this really isn't such an easy place to start after all.

I looked back through my previous training material, articles, notes from workshops, and books and articles by my biggest influencers, and I didn't find a single description that I really liked. I quickly jotted down a dozen bullet points on the slide that roughly equated to my evolving answers to the question "What is performance testing?" A DOZEN! And I've only been a performance tester for seven years! If you were to ask an accountant "What is accounting?" every six months during his or her first seven years in the field, I'd wager you'd be worried if you got a dozen different answers. So what makes this question so much harder to answer in our field than it seems it ought to be?

The oldest description among my resources is "Performance testing is testing related to speed, scalability and stability." I like that answer because it's easy for me to remember, and it's easy to teach in a class where I have supporting materials for "speed," "scalability" and "stability." But as a one-sentence answer with no additional context, it can mean just about anything.

Blending this with Cem Kaner's definition of software testing yields this definition:

"Performance testing is an empirical, technical investigation conducted to provide stakeholders with information about the quality of the product or service under test with regard to speed, scalability and/or stability characteristics."

While, to date, this remains my preferred definition, it's only marginally more useful than answering the question "What is green?" by saying that green is "a color with many different shades, all within a wavelength of roughly 520–570 nm."

In several conference talks, I've stated that "performance testing is a superset of load, stress and endurance testing." I find this description to be useful because load and stress testing are so commonly misused as synonyms for performance testing. Outside of making that point, however, that sentence is nothing more than most of a Buzzword Bingo card.

At other times I've contrasted performance testing to functional testing by stating that:

"Functional testing is (most frequently) conducted to determine whether or not an application can do what it is intended to do without (too many) errors. Performance testing is (most frequently) conducted to determine whether or not an application will do what it is intended to do acceptably in reality."

Once again, that is an answer that is useful in some situations, but not all. Besides, it's still not particularly descriptive, and it requires that a person have an understanding of functional testing to even be useful.

Deciding to take a different approach, I started thinking about the value stakeholders hope to achieve via performance testing:

  • Predictions or estimates of various performance characteristics that end users are likely to encounter when using the application.

  • Information about how various performance characteristics of the application in production will relate to real or perceived performance requirements and/or competitive applications.

  • Identification of existing or potential bottlenecks and performance defects that are likely to detract from user satisfaction if not resolved prior to releasing the application to production.

  • Assessments of the accuracy of scalability and/or capacity planning models based on actual production usage.

  • Identification of existing or potential functional errors that aren't detectable with single-user scenarios but can or will manifest under multi-user scenarios.

Looking at that list, I noticed some commonalities that are (at least relatively) unique to performance testing:

  • Realistic multi-user simulations.
  • User satisfaction.
  • Identification of defects, or potential defects, that are unlikely to be detected via other categories of testing.
  • Subjectivity in determining the "goodness" of test results.

Putting all of that together, it seems to me that a reasonable answer to the question "What is performance testing?" may be the following:

"Performance testing is a method of investigating quality-related characteristics of an application that may impact actual users by subjecting it to reality-based simulations."

The challenge of defining and describing what we do isn't limited to performance testing; it's a common challenge for young and evolving fields. For that reason, it will probably be a very long time before we, as an industry, converge on definitions and descriptions we like. That's a reality we should treat as an advantage rather than a frustration. I look at it this way: I'd rather work with a gradually evolving set of definitions and descriptions than end up facing the challenges the astronomical community is currently dealing with as a result of the late-game realization that the long-standing definition of "planet" had ceased making sense after generations of universal acceptance.

So, after all of that, I'm still not sure what goes on that slide to kick off the training class. For that matter, I'm thinking I might want to spend the entire first hour of class on "What is performance testing?" -- not just the first slide.

----------------------------------------
About the author: Scott Barber is the chief technologist of PerfTestPlus, executive director of the Association for Software Testing and co-founder of the Workshop on Performance and Reliability.

This was first published in March 2007