
Take an in-depth look at the definition of performance testing

What's in a name? The definition of performance testing elicits more confusion than clarity. To find some direction, keep stakeholders in mind when you design performance tests.

What is performance testing? While it seems like a simple question, there's some confusion that surrounds the term.

Whether they've seen the definition of performance testing or conducted performance tests, IT professionals seem hesitant to step back and break down the specifics of the term -- in part because the task is complex. While working on some new training material for college students, I typed this question at the top of the first content slide. I figured a definition of performance testing was an easy place to start to ensure the class had a common foundation. But after a half-hour of typing and deleting information on that slide, I discovered this isn't such easy common ground to find.

A look through my previous training materials, notes from workshops, and other books and articles didn't reveal a single definition of performance testing that really fit. I jotted down a dozen bullet points on the slide that roughly addressed the definition -- and I've been a performance tester for seven years. If you asked an accountant, "What is accounting?" every six months during her first seven years in the field, you'd worry if you got a dozen different answers. So, what makes this term so complex?

Searching for a definition

One old description defines performance testing as "a category related to speed, scalability and stability." That answer is easy to remember, and one can support that explanation with more detailed discussions about speed, scalability and stability as they relate to software code. But this definition lacks context.

Yet, one can blend that explanation with Cem Kaner's definition of performance testing, as I described in this article, to yield this result:

Performance testing is an empirical, technical investigation conducted to provide stakeholders with information about the quality of the product or service under test with regard to speed, scalability and/or stability characteristics.

To date, this combination is my preferred definition, but it's not perfect. In several conference talks, I've stated: "Performance testing is a superset of load, stress and endurance testing." That framing has some truth to it, since load testing and stress testing are commonly misused as synonyms for performance testing, but on its own the sentence amounts to a buzzword bingo card.

I've also contrasted performance tests and functional tests:

Functional testing is (most frequently) conducted to determine whether or not an application can do what it is intended to do without (too many) errors. Performance testing is (most frequently) conducted to determine whether or not an application will do what it is intended to do acceptably in reality.

This answer is useful in some situations, but not all. It also isn't particularly descriptive on its own, and it requires that the reader already understand functional testing.

Goals of performance testing

Ultimately, I decided to take a different approach to the definition of performance testing and focus on its aim. I thought about the value stakeholders hope to achieve via performance testing, including:

  • predictions or estimates of various performance characteristics that end users are likely to encounter when using the application;
  • how these performance characteristics, such as throughput, response time and other metrics (illustrated in the short sketch after this list), compare to those of competitive applications;
  • identification of existing or potential bottlenecks and workload performance defects that are likely to detract from user satisfaction;
  • assessments of the accuracy of scalability and/or capacity planning models based on actual production usage; and
  • identification of existing or potential functional errors that might manifest in multiuser environments.
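To make those metrics a little more concrete, here is a minimal sketch, in Python, of how throughput and response-time figures might be summarized from the raw timings a test run records. The sample data, the layout of each record and the choice of the 90th percentile are assumptions made up for illustration, not output from any particular tool:

    # Illustrative only: summarize raw timings from one performance test run.
    # The sample data and the 90th-percentile choice are assumptions for this sketch.
    from statistics import mean, quantiles

    # (request start offset in seconds, response time in seconds)
    samples = [(0.0, 0.42), (0.1, 0.38), (0.2, 0.55), (0.3, 1.20), (0.4, 0.47),
               (0.5, 0.51), (0.6, 0.44), (0.7, 0.95), (0.8, 0.40), (0.9, 0.48)]

    response_times = [rt for _, rt in samples]
    test_duration = max(start for start, _ in samples) + max(response_times)

    print(f"requests:        {len(samples)}")
    print(f"throughput:      {len(samples) / test_duration:.2f} req/s")
    print(f"mean response:   {mean(response_times):.3f} s")
    # quantiles(n=10) returns the nine deciles; index 8 is the 90th percentile
    print(f"90th percentile: {quantiles(response_times, n=10)[8]:.3f} s")

Numbers like these only become meaningful when compared against stakeholder expectations or against a competing application, which is exactly the point of the list above.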

Looking back at that list of goals, I noticed some commonalities that set performance testing apart from other categories of testing:

  • realistic multiuser simulations;
  • user satisfaction;
  • identification of potential defects that are unlikely to be detected via other categories of testing; and
  • subjectivity in determining the quality of test results.

Put all of that together, and a reasonable definition of performance testing could be:

Performance testing is a method of investigating quality-related characteristics of an application that may impact actual users by subjecting it to reality-based simulations.
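To give one concrete picture of what a "reality-based simulation" can look like at a very small scale, here is a minimal sketch, in Python, of several concurrent virtual users exercising the same operation while their response times are recorded. The operation, the user count and the think-time values are assumptions invented for this example; a real performance test would drive the actual application with a workload model derived from expected production usage:

    # A minimal, illustrative multiuser simulation: several "virtual users" run
    # the same operation concurrently and record how long each attempt takes.
    # simulated_operation, NUM_USERS and the think-time range are assumptions
    # for this sketch, not part of any real tool or application.
    import random
    import time
    from concurrent.futures import ThreadPoolExecutor

    NUM_USERS = 5          # concurrent virtual users
    ACTIONS_PER_USER = 3   # operations each user performs

    def simulated_operation() -> float:
        """Stand-in for a real request; returns the observed response time."""
        started = time.perf_counter()
        time.sleep(random.uniform(0.05, 0.2))  # pretend the system is responding
        return time.perf_counter() - started

    def virtual_user(user_id: int) -> list:
        timings = []
        for _ in range(ACTIONS_PER_USER):
            timings.append(simulated_operation())
            time.sleep(random.uniform(0.1, 0.3))  # "think time" between actions
        return timings

    with ThreadPoolExecutor(max_workers=NUM_USERS) as pool:
        results = list(pool.map(virtual_user, range(NUM_USERS)))

    all_timings = [t for per_user in results for t in per_user]
    print(f"{len(all_timings)} operations, worst response {max(all_timings):.3f} s")

The shape of the activity is what matters here: concurrent users, think time between actions, and timings collected for later analysis against whatever the stakeholders consider acceptable.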

Embrace ambiguity

Performance testing isn't the only term or task that is challenging to define and describe; this is a common hurdle in young and evolving fields of work. For that reason, it might be a long time before we in the software testing industry converge on definitions and descriptions, a situation that can be just as advantageous as it is frustrating. I'd rather work with an evolving set of definitions and descriptions than risk being stuck with a definition that is settled but outmoded. The astronomical community, for example, went through exactly this kind of upheaval when it realized that the long-standing definition of a planet, universally accepted for generations, no longer made sense.

While I'm still not sure what language belongs on that opening slide, I might want to spend the entire first hour of class on performance testing, explaining it in greater depth.

Scott Barber is the chief technologist of PerfTestPlus, executive director of the Association for Software Testing and co-founder of the Workshop on Performance and Reliability.

This was last published in March 2007
