
Using SBTM for exploratory testing coverage problems

Learn how to make software testing progress more visible using session-based test management (SBTM), thus improving exploratory testing processes. This article is the second in a series on session-based test management.

Michael Kelly
Session-based test management (SBTM) can give test managers greater control in exploratory testing. In this article, we'll look at how test managers can better handle test execution, focusing on the metrics gleaned from the process and how those metrics can help us report testing status by providing increased visibility into the work.

This article is the second in a series on session-based test management. In the previous article on managing test execution using session-based test management, we got an overview of how session-based test management works. We took a look at how the techniques of overproduction, abandonment, and recovery can be used while generating and managing test charters.

When I'm managing a project using session-based test management, I regularly use the following metrics:

  • Charter velocity
  • Level of coverage achieved
  • Features/areas/risks covered
  • Average session execution time
  • Percent of test execution complete

Charter velocity
Velocity is the key metric I use to track day-to-day work and to predict when my testing will be done. On a daily basis, I look at how many charters the team is creating and how many charters a day the team is executing. These measures can be based on charters per day, per iteration, per tester, per charter priority, or per area (priority and area are both explained in more detail below).

For a basic look at charter velocity, let's look at the following data from my first nine days of testing on a hypothetical project:

        Charters created   Charters executed   Remaining charters (at start of day)
Day 1         10                  9                       20
Day 2          8                  8                       21
Day 3          9                  7                       21
Day 4          6                  8                       23
Day 5          5                  7                       21
Day 6          7                  9                       19
Day 7          4                  6                       17
Day 8          5                  7                       15
Day 9          3                  8                       13

On this project, we start with 20 charters from our initial look at the testing. On our first day, we discover 10 new charters we hadn't thought of initially and execute nine charters out of our pool. That gives us 21 charters at the start of the second day. That process of creating new charters and running charters out of the pool continues for the next few days.
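The bookkeeping behind that table is simple arithmetic; here's a minimal sketch in Python, using the day-by-day numbers from the table above:

```python
# Charter pool bookkeeping: each day's starting pool carries forward the
# previous day's creations and executions.
created  = [10, 8, 9, 6, 5, 7, 4, 5, 3]   # charters created per day
executed = [9, 8, 7, 8, 7, 9, 6, 7, 8]    # charters executed per day

remaining = [20]  # initial pool from our first look at the testing
for c, e in zip(created, executed):
    remaining.append(remaining[-1] + c - e)

print(remaining)  # [20, 21, 21, 23, 21, 19, 17, 15, 13, 8]
```

The final entry (8 charters left after day 9) is what the velocity trend gets applied against.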

Two patterns typically emerge from this type of data. First, you'll likely find that over time the number of new charters you create each day starts to go down. Second, you'll notice that you tend to average around the same number of charters a day as a team (give or take a few depending on what else is going on within the project).

If you were to chart the data, you'd see the charters-created line trending downward while the remaining-charter count steadily shrinks. At this point, I might add a trendline to help me predict when I might be finished with my testing.

Based on my testing to date, the trend suggests I might be finished with my testing as early as three days out. On large projects, with a lot of measures of charters by area, priority, or some other criteria, I've found simple charts like these to be predictive of what the team will actually do. They're normally not correct down to the exact day, but they're normally within the week (for small to medium-sized projects).
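One simple way to produce that kind of projection is to extrapolate the net burn-down rate (charters executed minus charters created). This sketch uses the table's numbers; the three-day trend window is my own assumption, not something the technique prescribes:

```python
created  = [10, 8, 9, 6, 5, 7, 4, 5, 3]
executed = [9, 8, 7, 8, 7, 9, 6, 7, 8]

remaining = 20
for c, e in zip(created, executed):
    remaining += c - e          # pool left after day 9: 8 charters

# Average net burn-down over the last three days, used as the trend.
recent_net = [e - c for c, e in zip(created[-3:], executed[-3:])]
rate = sum(recent_net) / len(recent_net)          # (2 + 2 + 5) / 3 = 3.0
days_left = remaining / rate
print(f"~{days_left:.1f} days of testing left")   # ~2.7 days
```

With 8 charters left and a net burn-down of about 3 per day, the forecast comes out right around the "three days out" figure above.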

Level of coverage achieved
As I outlined in the first article in the series, when I create my charters I prioritize them into three levels:

  • A - We need to run this charter.
  • B - We should run this charter if we have time.
  • C - We could run this charter, but there are likely better uses of our time.

I did this to allow me to easily map my charter coverage to the coverage metric James Bach outlines in his low-tech testing dashboard. In that dashboard, James provides four levels of coverage:

  • 0 - We have no good information about this area.
  • 1 - We've done a sanity check of major functions using simple data.
  • 2 - We've touched all functions with common and critical data.
  • 3 - We've looked at corner cases using strong data, state, error, or stress testing.

This gives me the ability to map charters directly to coverage. When my level A charters are done, we've completed our basic sanity tests. When our level B charters are finished, we've hit all the common cases we could think of, and the same goes for level C. What's interesting is that if we're at level 2 or 3 coverage, as soon as one of the testers identifies another level A charter we go back to level 1. It means we missed something - likely something big.
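One possible encoding of that mapping is below. This is my own sketch, not a prescribed algorithm: it assumes an area's coverage level is the highest priority tier with no unexecuted charters left, so a newly discovered charter at a lower tier drops the reported level:

```python
# Sketch: derive a 0-3 coverage level (per Bach's dashboard) for an area
# from its charter backlog. Each charter is (priority, executed).
def coverage_level(charters):
    level = 0
    for priority, next_level in (("A", 1), ("B", 2), ("C", 3)):
        pending = [p for p, done in charters if p == priority and not done]
        if pending:          # an open charter at this priority caps us
            return level     # at the level below it
        level = next_level
    return level

area = [("A", True), ("A", True), ("B", True), ("C", False)]
print(coverage_level(area))  # 2: all A and B charters done, a C remains
```

Run per feature or area, this gives you the coverage column of the low-tech dashboard directly from your charter list.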

Features/areas/risks covered
As with any testing effort, with session-based test management I'm always watching what we're testing. I look at the number of charters by feature or by area of the application. I'm trying to answer questions like:

  • Do we have at least one charter per story (or requirement, depending on your methodology)?
  • Do the areas of the application that are historically more complicated or more error prone have more charters than those areas that are easier or more stable?
  • Are there certain areas or risks where we need a high level of coverage (level 3 coverage or priority C charters)? Do we have that coverage planned or executed?

Keeping an eye on coverage from multiple perspectives (story vs. area vs. risk) can help make sure you're getting a good balance. For example, if I'm only looking at coverage for my current stories or requirements, then while I might have great requirements coverage, I might miss areas that need regression testing. If I'm only looking at areas by feature set, then I might miss testing for performance, security, or some other quality criteria. In general, I try to get at least two different views on coverage per project.

When thinking about how to break up an application by area or risk, I'll often start with subsystems and work out from there. For example, if you look at an e-commerce site, you'd have something like: search, item browsing, shopping cart, checkout, email and messaging, order lookup and tracking, account administration, and help documentation. You might also include performance, security, internationalization, and usability. For a given iteration, you might track coverage across those areas, but then using a separate view of the data, also look at coverage by feature or story for that iteration.
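Getting those two views doesn't require anything elaborate; if each charter is tagged with both an area and a story, the same records support both counts. The area and story names below are hypothetical:

```python
from collections import Counter

# Hypothetical charter records, each tagged with an area and a story so
# the same data yields two independent coverage views.
charters = [
    {"area": "checkout", "story": "US-101"},
    {"area": "checkout", "story": "US-102"},
    {"area": "search",   "story": "US-103"},
    {"area": "security", "story": "US-101"},
]

by_area  = Counter(c["area"] for c in charters)   # checkout: 2, others: 1
by_story = Counter(c["story"] for c in charters)  # US-101: 2, others: 1
print(by_area)
print(by_story)
```

A story with zero charters, or an error-prone area with fewer charters than a stable one, shows up immediately in these counts.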

Average session execution time
One of the things I try to measure when running a project using session-based test management is how long it takes us to run our charters. If you remember from the first article in this series, each session is time-boxed (typically 45 to 60 minutes). Once you have this information, you can then sort and filter the data to better understand how much time your team is spending executing tests by functional area, by feature or story, by type of testing (functional, performance, security, etc.), or by tester.

Capturing this metric gives me feedback:

  • It tells me how good we are at estimating when we do our initial chartering. With this information, I know when I need to work with the team, or with individuals on the team, to help them improve their time estimates or better manage the scope of their charter missions.
  • It tells me how much time we're spending on specific areas or features of the application. With this information, I can better manage where we are spending our time to ensure the most important areas of the application are getting the most coverage. It can also be an indicator of which areas of the application are more difficult to test than others. That can be useful in future planning and training.
  • It tells me how much time we're spending on specific types of testing. With this information I can better understand how much time we spend testing various quality criteria and work with the team to make sure when we charter our work we're giving proper attention to areas like usability, security, performance, or supportability - areas we might be ignoring without being aware of it.
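Sorting and filtering session times by those dimensions is a simple grouping exercise. A minimal sketch, with hypothetical testers, areas, and durations:

```python
from collections import defaultdict

# Hypothetical session log: (tester, area, minutes actually spent).
sessions = [
    ("alice", "checkout", 55),
    ("alice", "search",   40),
    ("bob",   "checkout", 70),
    ("bob",   "security", 90),
]

def average_by(key_index):
    """Average session minutes grouped by one field of the record."""
    groups = defaultdict(list)
    for session in sessions:
        groups[session[key_index]].append(session[2])
    return {k: sum(v) / len(v) for k, v in groups.items()}

print(average_by(1))  # per area:   checkout 62.5, search 40.0, security 90.0
print(average_by(0))  # per tester: alice 47.5, bob 80.0
```

The same grouping works for feature, story, or test type once those fields are captured on the session record.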

Percent of test execution complete
A big aspect of session-based test management is that testers have the freedom to add and remove charters as needed to be successful. That means one day you might have 20 more charters to execute until you're finished; depending on how your testing goes, the next day you might have 25 or 15. My experience tells me that many project managers are uncomfortable with that idea. Most project managers want a predictable, always-going-up, never-going-down measure of percent complete.

The measure of percent of test execution complete is the number of charters you've executed, divided by the total number of charters you have for the interval you're measuring. While you likely won't get a nice predictable increase day after day like you might get on projects where all the test design is done upfront, there is value in measuring your percent complete by iteration or release. I don't use percent complete to predict when I'll be done (I use velocity for that), but I will use it to help me remain focused on the end goal. It's one macro-level measure of when our testing might be complete.
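The calculation itself is trivial; what matters is being explicit about the interval. A sketch using the totals from the hypothetical nine-day table above (69 charters executed, out of a pool that grew to 77: 20 initial plus 57 created along the way):

```python
# Percent of test execution complete: charters executed divided by the
# total charter pool for the interval being measured.
def percent_complete(executed, total_charters):
    return 100.0 * executed / total_charters

print(round(percent_complete(69, 77), 1))  # 89.6
```

Because the denominator grows whenever new charters are discovered, this number can legitimately go down from one day to the next - which is exactly why it's a macro-level check rather than a forecast.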

Detailed session metrics
In his article How to Manage and Measure Exploratory, Jon Bach outlines some other session metrics he commonly uses:

  • Percentage of session time spent setting up for testing
  • Percentage of session time spent testing
  • Percentage of session time spent investigating problems

Capturing detailed session metrics like those Jon outlines in that article is quite common in the session-based test management community. In the article, Jon outlines the details of setup, testing, and problem investigation.

Test setup measures the time it takes to get ready to run the first test. Test execution and design measures how much time is spent thinking of test conditions and running those tests. And bug investigation and reporting looks at the time spent researching identified issues and logging them in your defect-tracking tool.
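Those three buckets are usually reported as percentages of the session. A minimal sketch, with hypothetical minute counts for a single 60-minute session:

```python
# Percentage breakdown of one session into setup, test design/execution,
# and bug investigation (minute values are hypothetical).
session = {"setup": 10, "testing": 35, "bug investigation": 15}

total = sum(session.values())
breakdown = {k: round(100 * v / total) for k, v in session.items()}
print(breakdown)  # {'setup': 17, 'testing': 58, 'bug investigation': 25}
```

A session like this one, with only 58 percent of its time on actual testing, is the kind that prompts the follow-up questions below.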

These measures help tell a different story about your testing. For example, knowing how long your testers are spending on setup can be helpful in letting you know when you might need to pay more attention to automating setup tasks, focusing on making data more available, or providing training. And knowing how much testing is done per charter is useful in helping you understand how much coverage you actually got out of a session. If someone ran a 60-minute charter but only did ten minutes of design and execution, you might take some extra time to ask if they really fulfilled their mission. Did they lose too much time during setup? Or did they get sidetracked investigating a specific issue?

Getting visibility into the testing project
Once you feel like you have the visibility you need to effectively manage the project, the next step is to figure out what you need to do to successfully integrate session-based test management into your development methodology. That often means figuring out how to use your metrics to convince others you're providing them with the data they need to make good decisions. In the next article, we look at some techniques for integrating session-based test management into some different methodologies.

In the meantime, if you haven't already, take the time to read How to Manage and Measure Exploratory by Jon Bach. This will help you develop a deeper appreciation for the detailed session metrics and how they help you understand what might be happening when your team tests. In addition, if you're not already familiar with it, I recommend taking a look at James Bach's low-tech testing dashboard. It tends to go hand-in-hand with session-based test management, and I've yet to meet a team that hasn't liked its simplicity and clarity.

This was last published in May 2009
