Testing Web services' performance with soapUI

Learn how to create load tests from your TestCases and run them with soapUI in this expert tutorial. SoapUI is great for tracking test statistics and locating problem areas.

soapUI load testing terminology

Like any load-testing tool, soapUI uses multiple threads to generate load. The pool of threads used for load testing is set by what soapUI calls the thread-count. This count specifies how many TestCases are run in parallel during your test. The load test "strategy" you choose controls how the thread-count is managed and how tests are distributed.

Strategies are different methods of distributing load, and they can be used in isolation or together to create various load models. At the time of this writing, the free version of soapUI supports the following strategies:

  • Simple: TestCase execution with a configurable delay
  • Variance: TestCase execution varying the number of threads over time
  • Burst: TestCase execution in "bursts"
  • Thread: TestCase execution with a linearly changing thread count

soapUI Pro supports the following strategies in addition to those listed above:

  • Grid: Defines a custom variation of thread count
  • Script: Lets a Groovy script control the number of threads (see the sketch after this list)
  • Fixed-Rate: Execute a TestCase at a fixed rate
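
To give a flavor of the Pro-only Script strategy, here's a short Groovy sketch of the kind of logic such a script might encode: ramping from one thread to ten over the first minute of the run, then holding steady. The function name and the elapsed-time parameter are illustrative assumptions; the variables soapUI Pro actually exposes to the script are covered in its documentation.

    // Hypothetical sketch only; not soapUI Pro's scripting API. It shows the kind of
    // thread-count logic a Script strategy could express: ramp from 1 to 10 threads
    // over the first 60 seconds, then hold at 10.
    int targetThreadCount(long elapsedMillis) {
        long rampMillis = 60000
        if (elapsedMillis >= rampMillis) {
            return 10
        }
        return Math.max(1, (int) (10 * elapsedMillis / rampMillis))
    }

    assert targetThreadCount(0) == 1        // start of the run: a single thread
    assert targetThreadCount(30000) == 5    // halfway through the ramp: five threads
    assert targetThreadCount(120000) == 10  // past the ramp: full thread count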

In the examples below, we'll look at the Simple and Burst strategies. However, in addition to choosing a strategy, we'll also need to define load test limits. A limit consists of two variables: the limit (how long to run) and the limit type (elapsed time or number of TestCases executed). Setting the limit to zero with either limit type will run the test indefinitely.
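
To make those semantics concrete, here's a rough Groovy sketch of the stop condition described above. It's illustrative only, not soapUI's internal code, and it assumes the time limit is expressed in seconds and labels the two limit types "time" and "count".

    // Illustrative sketch of the limit check described above, not soapUI's internals.
    // limitType is assumed to be either "time" (elapsed seconds) or "count" (TestCases run);
    // a limit of zero means "run indefinitely".
    boolean shouldStop(long limit, String limitType, long elapsedSeconds, long runsCompleted) {
        if (limit == 0) {
            return false                        // zero limit: keep running until stopped manually
        }
        if (limitType == "time") {
            return elapsedSeconds >= limit      // stop once the configured time has elapsed
        }
        return runsCompleted >= limit           // otherwise stop after that many TestCase runs
    }

    assert !shouldStop(0, "time", 99999, 99999)   // zero limit never stops on its own
    assert shouldStop(60, "time", 61, 10)         // 60-second limit reached
    assert !shouldStop(100, "count", 300, 42)     // only 42 of 100 runs completed so far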

Author's note:

This article was written using soapUI 3.0.1 and uses the Atlassian JIRA SOAP web service as an example application. This web service is used for example purposes only. If you're following along with this article, try the steps on your own web service if one is available.

Creating and running a Simple load test

If we load the project from our previous article in this series, we'll see that we have a TestSuite set up with a single TestCase, which tests login and logout functionality for the JIRA web service. In that TestCase, we have three TestSteps: Login, a property transfer of the session id, and Logout. Figure 1 below shows the summary for the executed TestCase; because this is an article on performance testing, notice the response times for each TestStep under single-user load.

The round trip time is 369 milliseconds, which seems fairly fast to me since I know my test JIRA instance is hosted on a virtual machine on the other side of the country.

At this point, we're ready to create a simple load test. To do that, click the LoadTest button in the TestCase window.

 

This should open the New LoadTest dialog. Name your load test and click OK. 

You'll see that this creates a load test in the project navigation tree on the left-hand side, and soapUI should also open the LoadTest window for the load test you just created. The LoadTest window can be a little intimidating the first time you look at it, so let's dissect it a bit to see what we're looking at.

At the top of the window, you'll see the toolbar. You can hover over each element to get a tooltip of what that button does, but by default, moving from left to right you should have: run your load test, stop the load test, display statistics graph, display statistical history, reset statistics, export statistics, options, and help.

Next to the toolbar, you'll see where you set the load test limits. Under the toolbar, you'll see where you set the thread-count (the number of threads) and your load test strategy. Since the current load test strategy is set to Simple, you'll see Test Delay and Random to the right.

For the Simple strategy, Test Delay is the delay between test case runs in milliseconds, and Random is how much soapUI should randomize that delay. So in the example shown above, the delay could be anywhere from half a second (500 ms) to a full second (1000 ms). If you set Random to zero, you'll get the full one-second delay each time.
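
Based on that description, the effective delay works out to roughly the arithmetic below. This is a Groovy sketch of the behavior as described, not soapUI's source code.

    // Sketch of the Simple strategy delay as described above. With the default
    // Test Delay of 1000 ms and Random of 0.5, each delay lands between 500 ms and 1000 ms.
    long nextDelay(int testDelay, double random) {
        return (long) (testDelay - (random * testDelay * Math.random()))
    }

    100.times {
        long d = nextDelay(1000, 0.5)
        assert d >= 500 && d <= 1000      // default settings: between half a second and a second
    }

    assert nextDelay(1000, 0.0) == 1000   // Random set to zero: always the full Test Delay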

The rest of the window displays execution data. In the table, you get step-by-step results. And the bottom panel shows your execution log. If I run the Simple scenario that gets created by default, I get the results shown in Figure 5 below. 

For each transaction above, you can see response timings in milliseconds. If you look at the average response times, you'll see that they're only a little bit higher than the single user response times. For example, in the single user test above, login was 77 milliseconds and in our load test it averaged 92 milliseconds. Similarly, the single user logout time was 40 milliseconds and it averaged 41 milliseconds in our test. This feels correct to me since our load test is low load – I wouldn't expect the times to be much higher.

If you look at the statistics graph for the test run, shown in figure 6 below, you'll see that most of the load was early in the test and it wound down as the test progressed. 

If I increase the number of threads to 10 and allow the test to run for five minutes, you can see in the statistics graph shown in figure 7 that I'm able to get the average response times and bytes per second to level out for a period of time. For me, this is an indicator that the response times I see in the summary table are more representative of what response times will actually look like under similar load.

Response times for this five-minute, ten-thread run were right in line with single-user response times. More important, they were consistent over the length of the run. Without looking at the JIRA server to see how it performed, from an end-user perspective it looks like we're not really stressing the system at this load.

At this point, assuming our load test were representative of a load model we cared about, we could keep ramping up load and model performance degradation. See the Next Steps section at the end of this article for more on using performance degradation curves.

Creating and running a Burst load test

To create a load test using a burst strategy, you can either change the load test you're working with, or you can create a new one using the same steps we used earlier in this article. Once you have the new load test, when you select the Burst strategy you'll see that the two fields to the right of the Strategy field change to Burst Delay and Burst Duration. Burst Delay represents the delay between bursts and Burst Duration is, well, the burst duration.
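
The timing is easiest to picture as a simple on/off cycle: threads sit idle for the Burst Delay, run at full thread count for the Burst Duration, and then the cycle repeats. The Groovy sketch below illustrates that cycle as described; it's not soapUI's implementation, and it assumes both values are given in milliseconds.

    // Rough sketch of the Burst strategy's on/off cycle as described above; illustrative
    // only. Idle for burstDelay, then burst for burstDuration, then repeat.
    boolean inBurst(long elapsedMillis, long burstDelay, long burstDuration) {
        long positionInCycle = elapsedMillis % (burstDelay + burstDuration)
        return positionInCycle >= burstDelay
    }

    // A one-minute delay with a ten-second burst, as in the test that follows:
    assert !inBurst(30 * 1000, 60 * 1000, 10 * 1000)   // 30 s in: still idle
    assert inBurst(65 * 1000, 60 * 1000, 10 * 1000)    // 65 s in: mid-burst
    assert !inBurst(75 * 1000, 60 * 1000, 10 * 1000)   // 75 s in: next cycle's idle period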

The Burst Delay lets you simulate sporadic load so you can monitor behavior during the recovery period between bursts, so I'll set my initial test to fifty threads, running for five minutes and bursting every minute for ten seconds. Figure 8 shows the settings in soapUI.

If you look at the results for this test, shown in figures 9 and 10 below, you'll see that response times go up significantly – roughly ten times – and that the load pattern on the statistics chart shows a clear pattern of bursts.

You might be wondering why you don't see the pauses between bursts on the statistics chart. That's because our test doesn't have any transactions that run between the bursts. To simulate that, we'll run the two load tests together – both the Simple and the Burst load tests. Without doing anything fancy, you can do that just by starting them both up at the same time.

 

Figure 11 below shows a graphic result of the Simple load test run in conjunction with the Burst load test. You can see that every 60 seconds when the burst runs, response times for the steady-state users go up and then eventually come back down. 

Looking at the average response times, while they're higher – about twice as high – they're still in line with what you'd expect, with max response times going up to four seconds during bursts.

Next Steps

At this point we've successfully set up and run some first performance tests for our JIRA service. For more on using performance degradation curves, take a look at the following article on analyzing performance-testing results to correlate performance plateaus and stress areas.

For more on soapUI, its features and what the soapUI team is working on, check out the soapUI website. For other articles and blog posts on soapUI, the soapUI team also maintains an "in the news" listing on their website. And for more on JIRA, its interface and what it does, check out the Atlassian product website.


Michael Kelly is currently an independent software development consultant and trainer. Mike also writes and speaks about topics in software testing. He is a regular contributor to SearchSoftwareQuality.com and a past president of the Association for Software Testing. You can find most of his articles and his blog on his Web site, www.MichaelDKelly.com.

This was first published in January 2010
