In this tip, we'll look at some techniques for debugging your JMeter tests. In a previous tip, "Running your first load test with JMeter," we covered getting JMeter set up and ran a simple test against Yahoo Search. If you're new to JMeter, you'll quickly discover that once you get past very simple HTTP requests, you need visibility into what's happening at runtime so you can debug and tune your tests.
For the examples we'll look at in this article, we'll be using the Yahoo Example Test Plan we created in the first tip in this series.
Use listeners to see what's happening during execution
As we saw in the previous tip when looking at the Graph Results and View Results listeners, listeners are how you access the information JMeter gathers while running. There are a number of listeners available, and while in the previous article we looked at two listeners that are handy for results reporting, in this article we'll focus on a couple of listeners more appropriate for debugging.
While different listeners are useful in different situations, one I commonly use when debugging a JMeter test is the Assertion Results listener. The Assertion Results listener logs each sampled request and reports the failure of any Assertions in your test plan. When I'm creating a Test Plan, I try to add an assertion to each step for just this reason. Assertions are critical to the debugging process: even if you take them out later, they confirm that your test is doing exactly what you expected it to do while you were creating it.
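For reference, a Response Assertion shows up in a saved .jmx test plan as an XML fragment roughly like the one below. The test string "Yahoo" is illustrative; the exact property names and values are easiest to get right by adding the assertion in the GUI and saving the plan (note that the misspelled "Asserion.test_strings" property name is how JMeter itself saves it):

```xml
<ResponseAssertion guiclass="AssertionGui" testclass="ResponseAssertion"
                   testname="Check for Yahoo" enabled="true">
  <!-- The strings the response must contain -->
  <collectionProp name="Asserion.test_strings">
    <stringProp name="0">Yahoo</stringProp>
  </collectionProp>
  <!-- Apply the check to the response body -->
  <stringProp name="Assertion.test_field">Assertion.response_data</stringProp>
  <boolProp name="Assertion.assume_success">false</boolProp>
  <!-- Test type 2 = "Contains" in the GUI -->
  <intProp name="Assertion.test_type">2</intProp>
</ResponseAssertion>
```

You'd normally never type this by hand, but recognizing it in the saved test plan is handy when you're diffing or version-controlling your .jmx files.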
For example, in the following figure, you can see successful assertions added to the Yahoo Example.
Each HTTP Request log message shown above represents a successful assertion from my Test Plan. In the figure below, you can see what an error looks like.
In this case, JMeter not only shows the HTTP Request, which indicates an assertion was attempted, but also shows that the assertion failed and why it failed. Using the checkboxes at the top of the screen, you can filter this to display only errors if you like.
Using the View Results Tree listener
There's one particular listener that's a bit more powerful than the others when it comes to debugging: the View Results Tree listener. The View Results Tree shows a tree of all sample responses, allowing you to drill down into the response for any sample. Example data provided for HTTP requests includes thread runtime information, response headers, the request and response data, and response assertion results if there was an assertion associated with the request.
The following two figures show examples of response data shown in text and rendered HTML format for the Yahoo Example Test Plan:
From this listener you can see not only the details of the request sent, but also all the details of the response that came back. This is important, because if your assertions are off by as much as a single space, they will fail. Many times I end up going to the text view of the response data and copying out the exact text I'm looking for. In addition, if you're testing web services and need to write XPath queries, the rendered XML view of the response data is quite helpful.
Check out the JMeter Component Reference for a bit more detail on some of the subtle nuances of using the View Results Tree.
Using Debug Sampler and Debug PostProcessor
JMeter has two built-in debugging elements: Debug Sampler and Debug PostProcessor. Think of these two elements as helpers for providing more information in the View Results Tree listener.
The Debug Sampler generates a sample containing the values of all the JMeter variables and properties at runtime. If you use the Debug Sampler in conjunction with a View Results Tree listener, you'll be able to navigate those values as shown below.
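If you don't have the figure handy, the Debug Sampler's response data is a plain-text dump of name=value pairs, grouped into sections for variables and properties. The values below are illustrative (the searchTerm variable is a made-up example); you'll see whatever variables and properties your own test plan defines:

```
JMeterVariables:
JMeterThread.last_sample_ok=true
START.HMS=143025
START.MS=1306268558000
START.YMD=20110524
TESTSTART.MS=1306268560000
searchTerm=jmeter debugging
```

This is the quickest way I know to confirm that a variable extracted from an earlier response actually contains the value you think it does.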
The Debug PostProcessor, on the other hand, creates a "subSample" containing the properties of the previous sampler. The Debug PostProcessor shows additional details about the request performed, as shown in the figure below:
Both of these can be handy for figuring out what's happening under load. When I'm looking at the "standard" results shown in the View Results Tree listener, I'm usually focused on debugging a single transaction. When I switch to the data in the Debug Sampler and Debug PostProcessor, I'm usually trying to figure out what's happening under load.
Save your response data to a file
By default, listeners display information graphically in the JMeter user interface. This is a low-information-density medium, and one you'll quickly need to move past to get at the details of your test results. One of the first things you'll need to figure out when using JMeter is how to write results to files.
As you can see in the two figures above, listeners in JMeter have a "Write results to file" field. If you provide a path to a file, you can capture detailed information in either XML or CSV format. To configure what JMeter logs for a listener, click on the Configure button next to the file name.
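If you'd rather set the defaults once instead of clicking Configure on each listener, the same save settings can be set globally in jmeter.properties (or overridden in user.properties). The property names below come from the jmeter.properties file shipped with JMeter; check the copy in your version's bin directory for the full list and defaults:

```
# Save results as CSV rather than XML
jmeter.save.saveservice.output_format=csv
# Don't save full response bodies (keeps result files small)
jmeter.save.saveservice.response_data=false
# Write the column names as the first line of CSV output
jmeter.save.saveservice.print_field_names=true
```

Setting these globally is especially useful once you move to command-line runs, where there's no GUI to configure listeners in.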
If you save your results in XML, the output will look similar to the following:
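As an illustration, the XML results file uses JMeter's JTL format, with one element per sample and the measurements stored as attributes: t is the elapsed time in milliseconds, lt is latency, ts is the timestamp, s is success, rc/rm are the response code and message, and tn is the thread name. The values below are made up:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<testResults version="1.2">
  <httpSample t="245" lt="180" ts="1306268558000" s="true"
              lb="HTTP Request" rc="200" rm="OK"
              tn="Thread Group 1-1" dt="text" by="12871"/>
</testResults>
```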
By contrast, if you're looking for something that you can easily manipulate in Excel, the CSV format shown below might be more appropriate:
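A CSV results file saved with field names enabled looks something like the following; the exact columns depend on which fields you've told JMeter to save in the listener configuration, and the values here are made up:

```
timeStamp,elapsed,label,responseCode,responseMessage,threadName,dataType,success,bytes,Latency
1306268558000,245,HTTP Request,200,OK,Thread Group 1-1,text,true,12871,180
1306268559000,198,HTTP Request,200,OK,Thread Group 1-2,text,true,12864,155
```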
Each format has its own advantages, but I find myself using CSV more than XML. Note that each listener saves its own configuration settings separately, so don't assume that changing the settings for one listener will cascade to all of them; it won't. This allows you to tune different listeners to collect different information.
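Another advantage of CSV is how easy it is to post-process outside of Excel. As a minimal sketch, the Python function below computes per-label sample counts and average response times from a CSV results file, assuming the file was saved with field names so that label and elapsed columns exist (the file name results.jtl is just an example):

```python
import csv
from collections import defaultdict

def summarize_jtl(path):
    """Per-label sample count and average elapsed time (ms) from a
    JMeter CSV results file saved with field names as the header row."""
    totals = defaultdict(lambda: [0, 0])  # label -> [count, summed elapsed]
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            stats = totals[row["label"]]
            stats[0] += 1
            stats[1] += int(row["elapsed"])
    # Return {label: (sample count, average elapsed ms)}
    return {label: (n, total / n) for label, (n, total) in totals.items()}

# Usage sketch:
#   summarize_jtl("results.jtl")
#   e.g. {"HTTP Request": (120, 243.5)} for 120 samples averaging 243.5 ms
```

A few lines like this go a long way when a results file has grown past what you want to open in a spreadsheet.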
Be sure your load generator has enough resources
Another common issue when load testing is figuring out if your load generators have enough resources for the testing required. If your load generation machine(s) can't adequately accommodate the demands your JMeter test is putting on them, you'll skew your execution results.
One place to start is to monitor your load generators while your tests are running. If you're using Windows, this likely means just running Perfmon. If you're on another platform, use the Perfmon equivalent for that operating system - they all have one. If you start to notice performance issues on the load generator, here are some things to look at:
- Threading: The more threads your load test uses, the harder your load generator will work. This is likely the biggest factor in performance. If you need large-scale load testing, consider running your test across multiple machines. You'll know your load generator is overburdened when you're pegging the CPU or pushing the upper limits of your memory capacity.
- The GUI: JMeter has a "headless" mode that allows you to run without the GUI. Rendering all those pretty charts taxes the system. To run without the GUI, use jmeter -n -t test.jmx -l test.jtl from the command line.
- The number of Listeners: Just like the GUI, it takes processing power to populate listeners. Even if you run without the GUI, you're still collecting listener data. This is the main reason I consider the View Results Tree and Assertion Results listeners debugging tools rather than something you'd always leave in place; they have overhead. You can either remove them from your test plan or, if you're running from the command line, skip them and write results to a file with the -l argument instead.
In future tips, we'll look at some other ways you can use JMeter in your load testing, including recording tests and making sense of all the results. In the meantime, for more on JMeter, you can always check out the project website at jakarta.apache.org.
Michael Kelly is currently an independent software development consultant and trainer. Mike also writes and speaks about topics in software testing. He is a regular contributor to SearchSoftwareQuality.com and a past president for the Association for Software Testing. You can find most of his articles and his blog on his Web site www.MichaelDKelly.com.