
performance testing

By Alexander S. Gillis

What is performance testing?

Performance testing is a testing practice that evaluates the speed, responsiveness and stability of a computer, network, software program or device under a given workload. Organizations run performance tests to identify performance-related bottlenecks.

The goal of performance testing is to identify and eliminate performance bottlenecks in software applications, helping to ensure software quality. Without some form of performance testing in place, a system may suffer from slow response times and an inconsistent experience between users and the operating system (OS).

In turn, this creates an overall poor user experience (UX). Performance testing helps determine whether a developed system meets speed, responsiveness and stability requirements under workload, helping ensure a more positive UX.

Performance tests should be conducted once functional testing is completed.

Performance tests may be written by developers and can also be part of code review processes. Performance test case scenarios can be moved between environments -- for example, between development teams testing in a live environment and environments that operations teams monitor. Performance testing can involve quantitative tests done in a lab or in production environments.

Performance tests should identify the relevant requirements and verify them. Typical parameters include processing speed, data transfer rates, network bandwidth and throughput, workload efficiency and reliability.

As an example, an organization can measure the response time of a program when a user requests an action; the same can be done at scale. If response times are slow, developers should investigate further to locate the bottleneck.
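For example, a rough way to sample a program's response time is to wrap a request in a timer and repeat it enough times to see a distribution. The sketch below does this in Python; the URL and sample count are placeholders, not values from this article.

```python
import statistics
import time
import urllib.request

URL = "https://example.com/api/orders"  # placeholder endpoint
SAMPLES = 50                            # assumed number of measurements

def timed_request(url: str) -> float:
    """Issue one GET request and return the elapsed time in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

durations = [timed_request(URL) for _ in range(SAMPLES)]
print(f"min {min(durations):.3f}s  "
      f"median {statistics.median(durations):.3f}s  "
      f"max {max(durations):.3f}s")
```

If the resulting times are consistently slow, or the maximum is far above the median, that is the cue to start looking for the bottleneck.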

Why use performance testing?

There are a number of reasons an organization may want to use performance testing, including the following:

Performance testing metrics

A number of performance metrics, or key performance indicators (KPIs), can help an organization evaluate current performance.

Performance metrics commonly include the following:

These metrics and others help an organization perform multiple types of performance tests.
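The metric list itself is not reproduced above, but response time, throughput and error rate are among the most widely tracked KPIs. As a hedged illustration, the snippet below derives those three figures from a handful of made-up per-request timings.

```python
import statistics

# Hypothetical per-request results: (latency in seconds, request succeeded?)
results = [(0.120, True), (0.350, True), (0.095, True), (1.800, False),
           (0.240, True), (0.310, True), (0.180, True), (0.400, True)]
test_duration_s = 4.0  # assumed wall-clock duration of the test run

latencies = sorted(lat for lat, ok in results if ok)
errors = sum(1 for _, ok in results if not ok)

# Rough 95th-percentile pick for a small sample.
p95 = latencies[max(0, int(len(latencies) * 0.95) - 1)]

print(f"throughput : {len(results) / test_duration_s:.1f} requests/sec")
print(f"avg latency: {statistics.mean(latencies) * 1000:.0f} ms")
print(f"p95 latency: {p95 * 1000:.0f} ms")
print(f"error rate : {errors / len(results):.0%}")
```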

How to conduct performance testing

Because testers can conduct performance testing with different types of metrics, the process can vary greatly. However, a generic process may look like this:

  1. Identify the testing environment. This includes test and production environments, as well as testing tools. Understanding the details of the hardware, software and network configurations helps uncover possible performance issues and aids in creating better tests.
  2. Identify and define acceptable performance criteria. This should include performance goals and constraints for metrics. For example, defined performance criteria could include response time, throughput and resource allocation thresholds.
  3. Plan the performance test. Account for the full range of expected use cases, and build test cases and test scripts around the chosen performance metrics.
  4. Configure and implement the test design. Arrange the resources needed to prepare the test environment, then implement the test design.
  5. Run the test. While the test runs, developers should also monitor it.
  6. Analyze and retest. Review the resulting test data and share it with the project team. After any fine-tuning, retest to see whether performance has improved or degraded. (A minimal sketch of running and analyzing a simple test follows this list.)
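As a rough illustration of the run-and-analyze steps above, the following Python sketch fires a fixed number of concurrent requests at a placeholder URL and compares the results against an assumed response-time goal. The endpoint, worker count, request count and pass/fail threshold are all illustrative values, not recommendations from this article.

```python
import concurrent.futures
import statistics
import time
import urllib.request

URL = "https://example.com/health"   # placeholder endpoint under test
WORKERS = 10                         # assumed number of concurrent virtual users
REQUESTS = 200                       # assumed total number of requests
MAX_AVG_SECONDS = 0.5                # assumed acceptance criterion from step 2

def one_request(_):
    """Time a single GET request and report whether it succeeded."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()
        ok = True
    except Exception:
        ok = False
    return time.perf_counter() - start, ok

# Run the test with a fixed pool of workers (step 5).
with concurrent.futures.ThreadPoolExecutor(max_workers=WORKERS) as pool:
    results = list(pool.map(one_request, range(REQUESTS)))

# Analyze the results against the defined criteria (step 6).
latencies = [lat for lat, ok in results if ok]
failures = sum(1 for _, ok in results if not ok)
avg = statistics.mean(latencies) if latencies else float("inf")

print(f"successful: {len(latencies)}, failed: {failures}, avg latency: {avg:.3f}s")
print("PASS" if avg <= MAX_AVG_SECONDS and failures == 0 else "FAIL: criteria not met")
```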

Organizations should choose testing tools that can best automate their performance testing process. In addition, avoid making changes to the testing environment between tests, so that results remain comparable.

Types of performance testing

There are two main performance testing methods: load testing and stress testing. However, developers can use numerous other testing methods to assess performance. Performance test types include the following:

Cloud performance testing

Developers can also carry out performance testing in the cloud. Cloud performance testing makes it possible to test applications at a larger scale while retaining the cost benefits of cloud infrastructure.

At first, organizations expected that moving performance testing to the cloud would ease the process while making it more scalable -- that they could simply offload testing to the cloud and their problems would be solved. In practice, however, they found that issues remained, because an organization does not have in-depth, white box visibility into the cloud provider's side.

One of the challenges with moving an application from an on-premises environment to the cloud is complacency. Developers and IT staff may assume that the application works the same once it reaches the cloud. They might minimize testing and quality assurance, deciding instead to proceed with a quick rollout. Because the application is being tested on another vendor's hardware, testing may not be as accurate as on-premises testing.

Development and operations teams should check for security gaps; conduct load testing; assess scalability; consider UX; and map servers, ports and paths.

Interapplication communication can be one of the biggest issues in moving an app to the cloud. Cloud environments typically have more security restrictions on internal communications than on-premises environments. An organization should construct a complete map of which servers, ports and communication paths the application uses before moving to the cloud. Conducting performance monitoring may help as well.
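As a minimal, hedged sketch of that mapping exercise, the snippet below checks whether a set of assumed host-and-port dependencies is reachable over TCP. The hostnames and ports are placeholders that an organization would replace with its own inventory.

```python
import socket

# Hypothetical inventory of servers and ports the application is expected to reach.
dependencies = [
    ("db.internal.example.com", 5432),
    ("cache.internal.example.com", 6379),
    ("api.partner.example.com", 443),
]

for host, port in dependencies:
    try:
        # Attempt a plain TCP connection; success means the path and port are open.
        with socket.create_connection((host, port), timeout=3):
            print(f"OK      {host}:{port}")
    except OSError as err:
        print(f"BLOCKED {host}:{port} ({err})")
```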

Performance testing challenges

Some challenges within performance testing are as follows:

Performance testing tools

An IT team can use a variety of performance test tools, depending on its needs and preferences. Some examples of performance testing tools are the following:
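As one hedged example -- not necessarily a tool named in the original list -- Locust is a widely used open source, Python-based load testing tool, and a minimal test script for it looks roughly like this (the target paths are placeholders):

```python
# locustfile.py -- run with: locust -f locustfile.py --host https://example.com
from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    # Each simulated user waits 1-3 seconds between tasks.
    wait_time = between(1, 3)

    @task(3)
    def view_homepage(self):
        self.client.get("/")            # placeholder path

    @task(1)
    def view_product(self):
        self.client.get("/products/1")  # placeholder path
```

When run, Locust spins up simulated users that execute these tasks and reports response times and failure rates as the user count grows.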


