Solving problems with session-based test management

A veteran software tester gives real-life examples of using session-based test management in Scrum, in RUP, and in a completely ad hoc environment.

Michael Kelly

Different development methodologies place different demands on a test team. Some provide more -- or fewer -- requirements; some require close collaboration with programmers (while others encourage separation); and all of them require some level of visibility into progress and coverage. For teams looking to do exploratory testing, session-based test management (SBTM) can provide a framework for making your testing visible, and it can give you a ready vocabulary for what integration with the rest of the development process might look like.

In this article, we'll look at some examples of how you can integrate session-based test management into your software development methodology. We'll take a look at some lessons learned in using session-based test management in Scrum, RUP, and in a completely ad hoc environment.

This article is the third in a series on session-based test management. The first article looked at using session-based test management for exploratory testing. The previous article covered solving test visibility issues using SBTM and introduced some of the metrics one can use when managing testers who are doing session-based exploratory testing.

Taming the Wild West... or at least not getting shot

One of my early experiences with session-based test management took place at a company that was working completely ad hoc. By ad hoc, I mean a team that claimed to be agile but was really using the concept as a shield to avoid discipline. The project was an interesting mix of R&D on a new product, a migration of a legacy product to a new platform, and ad hoc marketing requests. That made nailing down the testing scope very difficult, which in turn resulted in a lot of busy work for the testing team.

When I arrived on the project, the testing team was organized around heavily scripted manual testing using IEEE 829. The development team, however, was largely working without requirements. For the testing team's methodology to work, they needed to write the requirements themselves, hope those requirements matched what the development team was coding, and then execute their tests. Shortly after I joined the team, we decided to move from IEEE 829 to exploratory testing.

Integrating session-based test management into an ad hoc process turns out to be quite easy. You are, in effect, only really changing your own process. All you have to do is identify what information people are asking you for -- defects, basic progress metrics, etc. -- and make sure you can still provide those after you've made the change. As you'll see, as the number of outputs grows, the complexity of implementing a process change goes up. This is what you would expect.

In this case, the testing team changed how they documented their testing -- cutting most of the documentation -- increased their collaboration with those outside the testing team, and continued to deliver high-quality defect reports and metrics. The testers focused on making sure the quality of the defect reports went up, while the test manager made sure the measurement needs of the management team were met. He developed a great dashboard for this that I still use today.
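To make that concrete, here is a minimal sketch of the kind of roll-up such a dashboard might feed from. The session fields and percentages below are my own illustrative assumptions (loosely based on common SBTM session metrics), not the actual dashboard from that project:

```python
# A minimal sketch: rolling session sheets up into the basic progress
# metrics managers tend to ask for. Field names are assumptions, not
# the dashboard described in the article.
from dataclasses import dataclass

@dataclass
class Session:
    charter: str      # the mission for the session
    minutes: int      # total session length
    test_pct: int     # % of time spent on on-charter testing
    bug_pct: int      # % of time investigating and reporting bugs
    setup_pct: int    # % of time on setup and interruptions
    bugs_filed: int

def dashboard(sessions: list[Session]) -> dict:
    total = sum(s.minutes for s in sessions)
    return {
        "sessions": len(sessions),
        "hours": round(total / 60, 1),
        "bugs_filed": sum(s.bugs_filed for s in sessions),
        # time-weighted averages of how session time was actually spent
        "test_pct": round(sum(s.test_pct * s.minutes for s in sessions) / total),
        "bug_pct": round(sum(s.bug_pct * s.minutes for s in sessions) / total),
        "setup_pct": round(sum(s.setup_pct * s.minutes for s in sessions) / total),
    }

sessions = [
    Session("Explore login error handling", 90, 60, 25, 15, 3),
    Session("Explore legacy data migration edge cases", 120, 70, 10, 20, 1),
]
print(dashboard(sessions))
```

Whatever shape your own roll-up takes, the point is the same: the outputs management was already consuming keep flowing, even though the testing underneath them has changed.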

Sprinting with sessions

On a more recent set of projects, I've had the pleasure of integrating session-based test management into Scrum. The team I've been working with initially structured the work so that development would be completed in sprint N and test execution would take place in sprint N+1. Since sprints were two weeks long, that meant that over any given two-week period you'd be chartering tests for the stories currently being developed while testing the features developed in the previous sprint.

This process worked okay, but it was not as effective as we initially thought it would be. Code leaving a sprint was buggy: it covered the basic happy paths, but not any of the edge cases. And defects we found weren't fixed in a timely manner. Because the programmers had already moved on to the next sprint and the next set of stories, they didn't have time to go back and fix the code they had written in the previous sprint.

After a few months, we made the decision to consolidate programming and testing in the same sprint. With this in place, no story was "done" until all defects related to the story were fixed. This consolidation pressured the team to move faster. For this to work, programmers would need to finish stories in the first week. Testers would need to have initial chartering and setup work done before the story was completed. Defects would need to be logged early in the second week of the sprint so that programmers had time to turn them around for re-testing.

When you look at how to integrate exploratory testing into your Scrum process, you'll want to think about when it needs to be done. We decided it needed to be done with each story. Some teams do this type of testing during their transition sprints. Either way, be sure to make it clear what "done" means with regard to charters executed and defects found.
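For what it's worth, a definition of done along those lines is simple enough to state as a check. The structures here are hypothetical, not from any team I've worked with:

```python
# A hypothetical "done" check for a story: every planned charter has
# been executed and no defect logged against the story is still open.
# The field names are illustrative assumptions.
def story_is_done(charters: list[dict], defects: list[dict]) -> bool:
    all_charters_executed = all(c["status"] == "executed" for c in charters)
    no_open_defects = all(d["status"] != "open" for d in defects)
    return all_charters_executed and no_open_defects

print(story_is_done(
    charters=[{"name": "Explore story 42 happy paths", "status": "executed"}],
    defects=[{"id": "DEF-7", "status": "closed"}],
))  # True
```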

Providing traceability within RUP

The IBM Rational Unified Process (RUP) is the most common project methodology I've worked in. I've seen it implemented a number of different ways, from small waterfalls to something that represents iterative development. And I've worked on everything from the small 10-person RUP project team to the large 300-person project. One thing that's been true of every RUP project I've worked on, whether it's been in a regulated industry or not, is that people who manage RUP projects want requirements traceability.

While I don't see traceability as the holy grail of testing that some see it as, it has its place. On regulated projects it's a requirement of the process. And even on unregulated projects, it can be one of many helpful tools in tracking test coverage. For those who might not be familiar with traceability in RUP, I reference a nice primer on the topic at the end of the article.

So how, you ask, do you take an inherently chaotic practice like exploratory testing and make it traceable? Well, it should come as no surprise that session-based test management can help. In fact, I've even implemented requirements traceability on Agile teams using session-based test management.

The simplest way to do this is to outline which requirements you plan to cover in a coverage section of your charter. Then, wherever you track your charters, you add a mechanism to tie those charters to the requirements they cover. On past projects I've done this using Excel, IBM Rational TestManager, IBM Rational Quality Manager, and HP Mercury TestCenter.

Regardless of the tool, you simply need a list of requirements tied to the list of charters that cover them -- just as with traditional test scripts. Then, after you execute your testing and debrief the session with your test manager, you correct any changes in coverage. If you didn't get to a specific requirement, you unlink that requirement from the charter. If you decided to add coverage for a requirement, you link in the new charters you created for that additional coverage.
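As a rough, tool-agnostic sketch of that mechanism (the data shapes here are my own assumptions; in practice this lived in one of the tools named above), linking, unlinking at debrief, and checking coverage might look like this:

```python
# A minimal sketch of charter-to-requirement traceability. The
# structures are illustrative assumptions, not any particular tool.
from collections import defaultdict

class TraceabilityMatrix:
    def __init__(self):
        # requirement id -> set of charters that claim to cover it
        self.coverage = defaultdict(set)

    def link(self, requirement: str, charter: str) -> None:
        """Plan coverage up front, or add it after a debrief."""
        self.coverage[requirement].add(charter)

    def unlink(self, requirement: str, charter: str) -> None:
        """At debrief, drop coverage the session didn't actually reach."""
        self.coverage[requirement].discard(charter)

    def uncovered(self, requirements: list[str]) -> list[str]:
        """Requirements no charter currently covers."""
        return [r for r in requirements if not self.coverage[r]]

matrix = TraceabilityMatrix()
matrix.link("REQ-101", "Explore account-creation validation")
matrix.link("REQ-102", "Explore account-creation validation")

# Debrief: the session never got to REQ-102, so remove that link
matrix.unlink("REQ-102", "Explore account-creation validation")
print(matrix.uncovered(["REQ-101", "REQ-102"]))  # ['REQ-102']
```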

My experience has been that on the projects where I do exploratory testing and track requirements traceability in this way, I get better coverage than I do with traditional testing. In traditional testing, because debriefs seldom happen, there isn't much revisiting of the requirements covered and/or the test cases that cover them. It's more about building the traceability artifact than it is about the actual testing coverage.

Next steps

In this series, we've looked at how session-based test management can help with test idea generation, how it can offer a different perspective on managing test execution, and, in this final article, how you might integrate the practice into your project methodology. If you don't feel comfortable implementing it all at once, try starting with debriefs or with the practice of adding and removing test cases based on priority. You don't need to do it all at once. I don't always use the detailed session metrics outlined in the second article in the series, and I don't always track traceability. Figure out what works for you.

For more tips on SBTM, check out these resources:

This was first published in May 2009
