I am a test engineer working on Windows applications. I do manual testing. In the agile world, when things keep changing every now and then, it's difficult to get well-defined requirement specifications from developers, which disrupts test case writing, traceability, and test execution. What would be the right strategy to make sure we are delivering a quality product?
Similarly, Lessons Learned in Software Testing has many tips and tricks for dealing with just that problem. If you're not familiar with those two books, I highly recommend them. Beyond those deeper treatments of the topic, I can offer the following.
I've never worked anywhere where the requirements were well defined or locked down, and I don't really know what that would look like. Some of the project teams I've worked with thought they had well-defined requirements, but once development and testing started, that often turned out not to be the case. I've never operated under the assumption that I would get requirements I could trust as the sole oracle for my testing.
A couple of years ago, I wrote an article on heuristic test oracles. The HICCUPP heuristic I refer to in the article comes from James Bach and Michael Bolton. You'll notice that requirements in the classic sense are only one of the oracles listed (they're one aspect of claims made about the product). Having multiple oracles to support your testing is one effective way to deal with dynamic requirements: as your dependency on requirements goes down, your ability to change your testing dynamically to support the project's needs goes up.
In addition to finding and using different oracles, you can also take steps to reduce the overhead of maintaining artifacts that trace back to requirements. One way I do that is by using charters instead of detailed scripted test cases. When I write a charter, I define a brief mission for the testing session, outline the desired coverage for the session (what needs to be tested), and list the risks I'll be looking for while testing.
If I have requirements, I'll often list the use cases, specifications, or stories I'm trying to cover as part of my charter. Because a charter is only a list of goals, not actual steps, changing requirements have far less impact on it. If traceability is an issue, I've used several tools in the past to trace my charters to requirements.
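To make the charter idea concrete, here is a minimal sketch of one as a lightweight data structure. Everything in it is hypothetical — the field names, the requirement IDs, and the `charters_for` helper are invented for illustration — but it shows the key point: the charter holds a mission, coverage, and risks, and references requirements by ID only, never by their text.

```python
# Hypothetical sketch of a session charter as a small data structure.
# All names and requirement IDs below are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Charter:
    mission: str                                          # brief goal for the session
    coverage: list = field(default_factory=list)          # what needs to be tested
    risks: list = field(default_factory=list)             # what to look for while testing
    requirement_ids: list = field(default_factory=list)   # references only, never requirement text

charter = Charter(
    mission="Explore the invoice-export dialog for data-loss risks",
    coverage=["CSV export", "PDF export", "cancel mid-export"],
    risks=["truncated fields", "locale-dependent number formats"],
    requirement_ids=["REQ-112", "REQ-118"],  # always resolved against the source of record
)

# Simple traceability: which charter missions touch a given requirement?
def charters_for(req_id, charters):
    return [c.mission for c in charters if req_id in c.requirement_ids]

print(charters_for("REQ-112", [charter]))
```

Because the charter stores only references, a changed requirement means re-reading the source, not rewriting test steps.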
When it comes to test execution, I always go to the source for requirements so I'm sure I'm getting the "latest and greatest." That means I only ever reference requirements in my charters -- I never write down what the requirement is. If I feel I need to write something down to keep everything straight in my head, I might summarize a group of requirements in a single sentence.
When executing my tests, if a requirement changed between the time I chartered the testing and the time I executed it, I change my testing on the fly. If it has changed so much that it no longer fits the mission for the session, I simply note that I didn't test it and create a new charter for that requirement after I'm done testing.