
CAST 2009: Taking a closer look at scenario testing, an interview with Fiona Charles

Fiona Charles will share the details of her scenario testing method at this year’s Conference of the Association for Software Testing (CAST), which takes place July 13-16 in Colorado Springs. Charles has used this approach on several projects, designing test scenarios from models derived from system data. I recently had the opportunity to talk with Charles about her upcoming CAST presentation, titled “Modeling Scenarios with a Framework Based On Data.”

Charles teaches organizations to match their software testing to their business risks and opportunities. With 30 years’ experience in software development and integration projects, she has managed and consulted on testing for many projects, serving clients in retail, banking, financial services, health care, and telecommunications.

When asked where the talk came from, Charles said that she has not seen much written about scenario testing, and that she believes it’s an important way to test in certain circumstances.

For the talk I searched online and in the books in my own testing library, but didn’t find much. I’ve cited the two really useful articles that anyone contemplating scenario testing should read. Cem Kaner’s article on scenario testing is an excellent general introduction to the topic, and Hans Buwalda’s Better Software article on soap opera testing describes one way to model a scenario test.

She went on to describe the specific projects that formed the basis for her presentation, saying:

I originally developed this method of designing scenarios for large-scale systems integration tests. The first one I did was for a retail project where we were building a new customer rewards system and integrating it with both the in-store systems and the enterprise corporate systems. My role was to conduct a black-box functional test of the integrated systems, after each team had tested its own system and immediately before we went live with a pilot. It seemed obvious to me that I needed to build the test around the integrated data: the flows and the frequency with which different types of data changed. There were no context diagrams or dataflow diagrams for the integration, so I began by developing one, which I then used to model the integrated test and spec out the scenarios to run it.
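The article doesn’t include Charles’s diagrams, but the core idea lends itself to a small illustration. Below is a minimal sketch in Python (not Charles’s actual tooling) that represents an integration context/dataflow diagram as a set of directed flows and turns each flow into a coverage item for the scenario test. All system names, data types, and frequencies are invented for a rewards-system integration like the one she describes.

```python
# Sketch only: a context diagram as a directed graph of data flows,
# with each flow becoming something a scenario must exercise.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    source: str       # system that produces the data
    target: str       # system that consumes it
    data: str         # kind of data carried
    frequency: str    # how often it changes, e.g. "per-sale", "nightly"

# Hypothetical flows; real ones would come from the integration itself.
flows = [
    Flow("POS", "Rewards", "sale transactions", "per-sale"),
    Flow("Rewards", "POS", "points balance", "per-sale"),
    Flow("Rewards", "Enterprise CRM", "member updates", "nightly"),
    Flow("Enterprise CRM", "Rewards", "new enrollments", "nightly"),
]

def coverage_checklist(flows):
    """Each flow becomes an item the scenario test must cover at least once."""
    return [f"{f.source} -> {f.target}: {f.data} ({f.frequency})" for f in flows]

for item in coverage_checklist(flows):
    print(item)
```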

The next project was another retail integration, this time integrating a new custom-built e-store and warehouse management system into a large and complicated suite of enterprise systems. I wanted to extend the method I’d used before by introducing more structure so we could construct scenarios from reusable building blocks. Also, I couldn’t be sure my team would be able to complete the transactions planned for a given day, and it was essential to be able to say exactly what should be expected in the downstream systems, where the data outcomes might take several days to appear. So I was especially keen to have an easy way of generating “dynamic” expected results for my core team to communicate to the extended team, which consisted of people evaluating each store-day’s test results in their own systems downstream. I was lucky to have on my team a programmer/tester who understood what I wanted to do and was committed to implementing it.
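Charles doesn’t publish the framework’s internals, so the following is only a sketch of the idea as she describes it: scenarios assembled from reusable transaction building blocks, with expected downstream results recomputed from whichever blocks were actually executed on a given store-day. The transaction types and their downstream effects here are hypothetical.

```python
# Sketch only: reusable building blocks plus "dynamic" expected results.
from collections import Counter

# Each building block maps a transaction type to the downstream records
# it should eventually produce (values invented for illustration).
BLOCKS = {
    "online_order": {"warehouse_pick": 1, "gl_sale_entry": 1},
    "return":       {"gl_refund_entry": 1, "inventory_adjust": 1},
    "cancel_order": {"warehouse_pick": -1, "gl_reversal_entry": 1},
}

def expected_downstream(executed_blocks):
    """Aggregate expected downstream outcomes from the blocks actually run,
    so the extended team knows what to look for in their systems even when
    the day's planned transactions weren't fully completed."""
    expected = Counter()
    for block in executed_blocks:
        expected.update(BLOCKS[block])
    return dict(expected)

# Example: only part of the planned store-day was executed.
print(expected_downstream(["online_order", "online_order", "return"]))
# -> {'warehouse_pick': 2, 'gl_sale_entry': 2,
#     'gl_refund_entry': 1, 'inventory_adjust': 1}
```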

He and I worked together over the course of several projects, sharing ideas and spurring each other to build on and extend how we tested integrated systems. I’m a modeler by temperament and preference, and over time I began to abstract reusable models to have a way to talk about what we were doing. One of them I think of as a conceptual framework for building scenarios based on system data: the topic for this talk. The first time I applied it to testing a standalone system was for the project I’m using as the example for the talk, testing a customized Point of Sale (POS) system.

When asked why she focused on data as a basis for scenario testing, instead of building her tests around more traditional business requirements or the same source documents used by the programmers, Charles elaborated on the challenges of relying on those sources alone.

I’ve never seen documented requirements that came near adequately describing what a system was intended to do. And even when use cases cover most of the intended interactions, they’re usually much too high-level to build tests from. Paradoxically, I find that on the one hand the sources used by the programmers are too high-level, as in use cases, and on the other hand, they mostly don’t take a large enough view of how the whole system operates, either in itself or in the context of the systems it’s integrated with.

I once managed the testing on a bank teller system where a significant downstream output was to the bank’s General Ledger system. Yet the architect’s context drawing didn’t show the GL—and neither he nor any of the programmers could tell us what we needed to know about it to model the test.

That’s one reason I think we need to build our own models for testing, typically based in some way on how we expect a system to be used and the outcomes we hypothesize. That could be a state transition model, one of many other kinds of models, or a combination. The essential thing is to model a test consciously taking a fresh approach, and not blindly accept what we’re given.
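As a rough illustration of the state-transition option she mentions (an assumption of mine, not anything from the talk itself), a short Python sketch can enumerate bounded paths through a hypothetical POS transaction model; each path is a candidate scenario skeleton. The states and transitions are invented.

```python
# Sketch only: a state-transition model of expected system use,
# from which candidate scenario paths can be enumerated.
TRANSITIONS = {
    "idle":      ["scanning"],
    "scanning":  ["scanning", "tendering", "voided"],
    "tendering": ["completed", "scanning"],
    "voided":    ["idle"],
    "completed": ["idle"],
}

def paths(state, goal, path=None, max_len=6):
    """Yield transition paths from state to goal, bounded by max_len;
    each yielded path is a skeleton a scenario could flesh out."""
    path = (path or []) + [state]
    if state == goal and len(path) > 1:
        yield path
        return
    if len(path) >= max_len:
        return
    for nxt in TRANSITIONS.get(state, []):
        yield from paths(nxt, goal, path, max_len)

for p in paths("idle", "completed"):
    print(" -> ".join(p))
```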

I asked Charles to share a bit about her impression of the AST conference. This was her second year being involved with the conference, and I was curious why she chose CAST as the venue for her message. “My hope,” said Charles, “is that CAST primarily attracts testers who are open-minded and curious, not wedded to ‘traditional’ (and to me boring and suboptimal) ways of testing systems. Those are the testers I most want to talk to and learn from.”

One of the key aspects of CAST is that after a topic is presented, debate on related issues is encouraged. The AST community believes that dialog and wrestling with difficult problems lead to better solutions. So I asked Fiona Charles what she thought a likely criticism of her presentation might be.

One criticism I might expect is that this is a structured method with built-in expected results, analogous to scripted testing. I can see an audience of mainly exploratory testers possibly having issues with this. My answer would be that this is one kind of testing, appropriate in contexts where we have to have strictly controlled dynamic data in order to evaluate the aggregated outcomes. That’s not a good context for exploratory testing. It doesn’t preclude it entirely, but exploratory tests do have to be back-fed daily into the expected outcomes.

And finally, when I asked what topics currently had her excited, Charles offered the following:

I am very interested in agile and the practical integration of real testing (rather than mere confirmation) into agile projects. But I have a concern that in the rush to small teams and small projects (which I think is mainly a good thing), larger integration issues are being ignored. Those could become important in the next while. I’m also interested at the moment in how we can get our hands around testing for risks that really matter to businesses. I’m exploring that in the tutorial I’m doing at CAST 2009 with Michael Bolton and in tutorials I’m doing elsewhere, and I’m also writing about it.

For more on the upcoming show, check out the CAST conference website. You can learn more about Fiona Charles on her website, where you can also read her articles on testing and test management.
