Session-based test management (SBTM) is a technique for managing exploratory testing. Two of the major criticisms of exploratory testing are that it's hard to make progress visible, and it's hard to know what kind of coverage you might have after exploratory testing has been completed. Session-based test management is one answer to those problems, since it provides a metric for measuring progress (sessions) and takes coverage into account.
One of the most difficult aspects of software testing is coming up with good test ideas. It doesn't matter how you're doing your testing: scripted vs. exploratory, manual vs. automated, or performance vs. functional. Solid test idea generation is still one of the biggest problems. It's not enough to be able to follow the technology and understand the business context for the problem you're trying to solve; you need to be able to turn that understanding into questions that you can ask the software in the form of tests.
The quality of those questions, or tests, is often one of the key factors determining how good your testing is ultimately going to be. If you aren't asking the right questions, or running the right tests, then it likely won't matter how much testing you do. Session-based test management introduces the idea of a test charter to help testers better manage those test ideas, so the next test you execute is always the best test to run next.
The basic work unit in session-based test management is the session, and the sessions are organized around test charters. Each session is time-boxed (typically 45 to 60 minutes) and the charter for each session often outlines the basic testing mission. So when you look at a list of 10 charters, you should see 10 distinct testing missions equating to around eight to 10 hours of heads-down testing.
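The arithmetic in that last sentence is simple enough to sketch. Below is a minimal Python helper (my own illustration, not part of any SBTM tooling) that converts a charter count into the low and high estimates of heads-down testing time, assuming the 45-to-60-minute time box described above:

```python
# Hypothetical helper: estimate heads-down testing time from a charter count.
# Assumes each charter maps to one time-boxed session of 45 to 60 minutes.
SESSION_MINUTES = (45, 60)

def estimated_hours(num_charters: int) -> tuple[float, float]:
    """Return (low, high) estimates in hours for a list of charters."""
    low, high = SESSION_MINUTES
    return (num_charters * low / 60, num_charters * high / 60)

print(estimated_hours(10))  # (7.5, 10.0) -- roughly eight to 10 hours
```

Ten charters works out to 7.5 to 10 hours, which is where the "around eight to 10 hours" figure comes from.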
Generating test ideas using session-based test management
When I generate a test charter, I start with a basic mission, like "Test adding items to the shopping cart." Once I have a mission, I start to list out all the coverage areas and risks I can think of related to that mission. For example, if I were to test adding items to a shopping cart, I might specify the following:
- Adding items
- Removing items and adding them again
- Updating existing items
- Number of different items in the cart
- Quantities of items in the cart
- Saving items for later (or adding to a wish-list)
- Adding gift wrap to an item
- Very expensive and/or very inexpensive items
- Items with very large (or small) names or descriptions
- Accuracy of information displayed about items (name, description, price, etc.)
- Accuracy of quantities displayed
- Accuracy of calculations performed (totals, savings calculations, estimates around shipping)
- Accuracy of displayed recommendations (or other ads)
- Slow performance as you add more items
- Ability to change price (hacking local cookies, URL hacking)
As you can see, as I list out the various features I can cover and the different risks I can think of related to my shopping cart, I quickly outgrow my 45-minute time box. That's by design -- that's why I list all those test ideas out in my charter.
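A charter like this is easy to capture as a simple data structure if you want to track it in a tool or script rather than on paper. The sketch below is my own illustration (the `Charter` class is hypothetical, not part of any standard SBTM tooling), showing a mission paired with its list of coverage areas and risks:

```python
from dataclasses import dataclass, field

@dataclass
class Charter:
    """A test charter: one mission plus the test ideas it should cover."""
    mission: str
    test_ideas: list[str] = field(default_factory=list)

cart = Charter(
    mission="Test adding items to the shopping cart",
    test_ideas=[
        "Adding items",
        "Removing items and adding them again",
        "Accuracy of calculations (totals, savings, shipping estimates)",
        "Ability to change price (hacking local cookies, URL hacking)",
    ],
)
print(len(cart.test_ideas))  # 4
```

Keeping the ideas as a flat list makes the next step, splitting an overgrown charter into several smaller ones, a matter of regrouping list items.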
It's at this point that I would pass my list around for a peer review. So for this example, I passed these lists to Jason Pitcher (a fellow exploratory tester) for a quick review and he pointed out that I was missing charters for session management (having multiple windows open while I add to my cart), out-of-stock items, and testing with a shopping cart linked to an account. Those are blatant oversights on my part, and should clearly be tested.
After I get the feedback from peer review, it's time to refactor. Obviously, even with the coverage and risks I came up with while writing this article, I already have more than 45 minutes of work testing my shopping cart. That means I need new charters.
After refactoring I might have the following charters:
- Test adding items to the shopping cart
- Test shopping cart calculations
- Test shopping cart recommendations
- Test shopping cart session management
- Test shopping cart behavior with an integrated account
- Performance-test adding items to the shopping cart
- Security-test adding items to the shopping cart
Then I get the fun of repeating that process for each and every one of those new charters. This process continues until each charter is no more than 45 to 60 minutes of work, and all of my test ideas (and my peer reviewer's test ideas) have been captured at least once.
Managing test ideas using session-based test management
Once I have a large listing of test charters, the next step is to prioritize them. That doesn't necessarily mean force-rank them (although if I had a small enough list I might do that). I typically use three levels for ranking:
- A: This charter must be executed before I know enough about the feature to make a recommendation related to production. Or more simply, "We need to run this charter."
- B: This charter represents a good set of test ideas, which could uncover issues that might prevent us from going to production. Or more simply, "If we have time, we should run this charter."
- C: This charter represents a set of test ideas that might be valuable at some point. Or more simply, "It's a test, and we could run it, but likely there are better uses of our time."
When I start my testing, I pull first from the pool of A-level charters. After each testing session, I add any new charters I can think of as a result of my testing and then reprioritize the remaining charters.
For example, after I execute any given charter I might:
- Add some number of new A-, B- or C-level charters.
- Reprioritize existing A-, B- or C-level charters (up or down).
- Make no changes.
Run a test, capture the new test ideas generated by your testing, and reprioritize your existing tests based on your new knowledge of the product. The process outlined above illustrates the exploratory testing tactics of overproduction, abandonment and recovery:
Overproduce ideas for better selection: Produce many different speculative ideas and make speculative experiments -- more than you can elaborate upon in the time you have. By capturing all the test ideas in a charter (even the C-level ideas you know you might not execute), you increase your chances that you've got all the options on the table. That increases the likelihood that the next test you run is one of the best tests you could be running.
Abandon ideas for faster progress: Let go of some ideas in order to focus on and make progress with others. By downgrading charters as you learn more, you're adapting your testing to focus on the most important risks and the areas that require the most coverage. You know you likely can't run all the tests you'll think of, so you put a process in place that helps you abandon ideas gracefully, so you can recover them later if needed.
Recover or reuse ideas: Revisit your old ideas, models, questions or conjectures. Discover ideas by someone else. By keeping your B- and C-level charters handy, you're ensuring that you've got them ready to be used later as needed. In addition, as you get better at chartering, you'll find that old charters become great sparks for developing test ideas in the future as well. And peer reviews are another way of saying, "discover ideas by someone else."
Getting started with session-based test management
The first time you try session-based test management, it's going to feel awkward. That's likely true if you normally do scripted testing, or even if you do exploratory testing. It's just a different way to think about the work. And most places I've seen it implemented all do it slightly differently. They all have their own unique touch.
For a more detailed look at the basics of session-based test management, take a look at Jonathan Bach's seminal article on SBTM. Another great read for the beginner, which has a bit more detail, is James Lyndsay's "Adventures in Session-Based Testing." Finally, for a fantastic case study on using session-based test management, check out Bill Wood and David James' article "Applying Session-Based Testing to Medical Software," published by MDDI.