In this tip we finish our series on using JMeter. The series includes three previous articles: Running your first load test with JMeter, Tips for debugging your JMeter tests and Recording load tests using JMeter. Here we look at how to make sense of all the results you see when you run your tests. There's a lot of data available to you across the different JMeter Listeners, and it can be difficult to figure out which sets of data answer which application performance questions.
JMeter is not a browser
At this point in the series, you likely know this already but I'll say it again: JMeter is not a browser. This is important because most performance testing tools your average tester is exposed to either emulate or drive a web browser. That has implications for how you read the test results. If a virtual user in a traditional tool experiences a five second page load time, you can be reasonably sure (within some defined percentile) that if you open the same page in a browser, under the same workload conditions, you'll experience something close to a five second load time.
All of this can make it difficult for testers more familiar with traditional testing tools to make an effective shift to JMeter. Instead of capturing a statistical measure of user experience, JMeter looks at transaction response times divorced from the human end user. In other words, it's more transaction focused than it is activity focused.
Listening for data
The impact this has on interpreting results is that the tester now needs to think a bit more about how they structure their tests and how that influences what questions they'll be able to answer with their results. If you need to measure elements like CSS, JS, or image files, you'll need to explicitly ask for them. Also, if you're interested in correlating results, you'll want to think about that upfront and make sure that the test you're thinking of building will actually provide the results you're looking for.
The "results" of a JMeter load test come in the form of Listeners. Each of the various Listeners JMeter provides can be viewed visually at runtime or can be configured to write results to an XML or CSV file. As discussed in the tip on debugging, because visually watching your Listeners can tax your load generators, load test results are often written to a file and evaluated after the test run. Visual Listeners are often used for low-load tests or for debugging.
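Once results are written to file, you can process them with any tool you like. As a minimal sketch, here's how you might parse a CSV-format results file in Python. The column names follow JMeter's default CSV header row, but the sample data itself is invented for illustration:

```python
import csv
import io

# Invented sample of a JMeter CSV results file (e.g. results.jtl);
# the column names match JMeter's default CSV fields.
SAMPLE_JTL = """\
timeStamp,elapsed,label,responseCode,success
1262304000000,250,Home Page,200,true
1262304001000,480,Login,200,true
1262304002000,1200,Search,500,false
"""

def load_results(fileobj):
    """Parse a CSV-format JTL file into a list of dicts with typed fields."""
    rows = []
    for row in csv.DictReader(fileobj):
        rows.append({
            "timeStamp": int(row["timeStamp"]),  # epoch milliseconds
            "elapsed": int(row["elapsed"]),      # response time in ms
            "label": row["label"],
            "success": row["success"] == "true",
        })
    return rows

results = load_results(io.StringIO(SAMPLE_JTL))
failures = [r["label"] for r in results if not r["success"]]
print(len(results), failures)  # sample count, plus labels of failed requests
```

In practice you would open the real `.jtl` file instead of the in-memory sample; the parsed rows then feed whatever analysis or charting you want to do.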
JMeter has some Listeners that I find particularly well suited for the types of testing I use it for. While we looked at several Listeners in the first three articles of the series, here we take a closer look at how we can pull some meaning from a smaller set of more focused Listeners.
Using scatter charts
One of my favorite uses of JMeter data is to use it to build scatter charts representing the test run. You can do this within the tool using the Graph Results Listener (as shown below in figure 1), or you can do this by using your saved data and creating your own scatter charts in Excel.
Figure 1: Graph Results Listener showing a scatter chart of transactions with trendlines.
For tips on how to use scatter charts to make sense of your performance test results, I recommend two sources. The first is Scott Barber's Beyond Performance Testing Part 6: Interpreting Scatter Charts. The second is an article I did a couple of years ago called Using Scatter Charts to Recognize Patterns in Performance Test Data.
Just the numbers
While using scatter charts satisfies my need for visualizing my performance test and allows me to recognize patterns, sometimes you just want to look at the numbers. For that, you'll likely want to start with the Aggregate Report Listener. The Aggregate Report totals the response information for each request in your test and provides statistical information on those requests.
Figure 2: Aggregate Report Listener example (from the jakarta.apache.org JMeter User Manual).
All times shown are in milliseconds. Many of the columns shown in figure 2 above are fairly self-explanatory, but let's look at just a couple of them for clarification:
- Median: This is the middle of the set of results. The JMeter User Manual does a good job of pointing out that this means that "[half] the samples took no more than this time; the remainder took at least as long."
- 90% Line: This is the 90th percentile. That means that 90% of all samples captured took the indicated time or better.
- Throughput: In JMeter, Throughput is measured in requests per second/minute/hour. While in the above example it shows seconds as the unit, the time unit is chosen so that the displayed rate is at least 1.0. (So sometimes you'll see minutes or hours displayed there.)
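To make these three measures concrete, here's a small sketch that computes them from a set of elapsed times. The sample values are invented, and the 90% line uses a simple nearest-rank percentile; JMeter's exact rounding rules may differ slightly:

```python
import statistics

# Hypothetical elapsed times (ms) for one request label.
elapsed = [120, 150, 180, 200, 230, 260, 300, 420, 800, 1500]

# Median: half the samples took no more than this time.
median = statistics.median(elapsed)

# 90% Line: 90% of samples took this long or less (nearest-rank method).
ordered = sorted(elapsed)
line_90 = ordered[int(len(ordered) * 0.9) - 1]

# Throughput: samples divided by wall-clock duration. Assume these ten
# samples spanned 5 seconds of test time, giving requests per second.
duration_s = 5.0
throughput = len(elapsed) / duration_s

print(median, line_90, throughput)
```

Note how the long tail (the 800 ms and 1500 ms samples) barely moves the median but shows up clearly in the 90% line, which is why both columns are worth reading.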
Building on the Aggregate Report Listener, you can also use an Aggregate Graph Listener. This Listener allows you to create simple bar graphs of the data shown in the Aggregate Report Listener. If you're saving your data to file while running your tests, you'll likely just end up doing this in Excel.
It's also worth noting that for a version of the Aggregate Report Listener that uses less memory you'll want to check out the Summary Report Listener. Aside from its smaller memory footprint, the biggest difference is that the Summary Report Listener adds a measure for standard deviation and removes the measure for the 90th percentile. For some great tips on how to use these aggregate numbers, take a look at Performance Testing Plus: Do the Math! by Scott Barber.
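Continuing the sketch above, the Summary Report's extra column is a standard deviation over the response times. Shown here as the population standard deviation, with the same invented sample values; JMeter's exact formula may vary by version:

```python
import statistics

# Same hypothetical elapsed times (ms) as before.
elapsed = [120, 150, 180, 200, 230, 260, 300, 420, 800, 1500]

mean = statistics.mean(elapsed)
# Population standard deviation of the response times.
std_dev = statistics.pstdev(elapsed)
print(round(mean, 1), round(std_dev, 1))
```

A standard deviation close to (or larger than) the mean, as in this sample, is a quick signal that response times are highly variable and the average alone is hiding something.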
Displaying server status
Finally, another great Listener for teasing out meaningful results is the Monitor Results Listener. While this Listener was designed for Tomcat, you can use it with any servlet container to which the status servlet can be ported. This Listener gives you visibility into the runtime of your server(s). In figures 3 and 4 below, you'll see some examples from the jakarta.apache.org JMeter User Manual.
Figure 3: Looking at server health using the Monitor Results Listener.
Figure 4: Looking at server performance using the Monitor Results Listener.
If you refer back to the two articles on using scatter charts (referenced above), you'll see that this is exactly the type of data that's useful when trying to identify patterns. Taking data like this and overlaying it with your scatter chart can help you correlate results. This information can also tell a story on its own, often leading you to identifying memory or processor bottlenecks.
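One simple way to do that overlay is to bucket both series into the same time windows so they line up on one chart. A sketch with invented response-time and server-memory samples (the bucket size and both data sets are assumptions for illustration):

```python
# Bucket response times and hypothetical server memory samples into the
# same 10-second windows so the two series can be overlaid on one chart.
responses = [(0, 250), (4, 300), (11, 900), (15, 1400), (22, 350)]  # (sec, ms)
memory = [(0, 40), (10, 85), (20, 45)]  # (sec, % heap used)

BUCKET = 10  # bucket width in seconds

def bucket_avg(series):
    """Average the values of (time, value) pairs per time bucket."""
    groups = {}
    for t, v in series:
        groups.setdefault(t // BUCKET, []).append(v)
    return {b: sum(vs) / len(vs) for b, vs in groups.items()}

resp_by_bucket = bucket_avg(responses)
mem_by_bucket = bucket_avg(memory)
for b in sorted(resp_by_bucket):
    print(b * BUCKET, resp_by_bucket[b], mem_by_bucket.get(b))
```

In this made-up data, the response-time spike in the second window coincides with the heap-usage peak, which is exactly the kind of correlation the overlay is meant to surface.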
For a detailed description of how to use the Monitor Results Listener, please refer to Building a Monitor Test Plan.
Editor's note: This article was written using JMeter 2.3.4 running on Java 1.5.0_20.