
Determining software testing deliverables for a small project

Learn how to develop a methodology for determining test deliverables depending on project size. For example, what kind of strategy is appropriate to a change request?

Our company does not have a clear definition of small, medium-sized or large projects, or a good process and methodology to determine test deliverables -- for example, change requests, small enhancements, etc. I am looking for examples of changes smaller than a full project for which I do not need to comply with all the testing deliverables dictated in the SDLC. For example, for three days of testing, do I need a test strategy, test plan, etc.? What if we are testing off of a change request (CR)?
If I were walking into your company with a fresh perspective and asked to resolve this question, here is the approach I would try. I would begin by gathering information:

  1. Identify all the types of projects that the test team handles, such as change requests, new releases, conversion or upgrade efforts, etc.
  2. Identify all the forms of testing that are in practice, such as exploratory testing, scripted testing, automated testing, etc.
  3. Identify all the test documentation "typically expected" or "required" for each type of project.

I would then work to understand less tactical but still important influences, such as:

  1. What are the expectations of the stakeholders?
  2. What has been done historically?
  3. What has worked and not worked?

In terms of test documentation, I would build a document matrix indicating what (if any) test documents make sense to create and maintain for each type of project. At the start of each project, I would share with the team and the project owners what test documentation I intend to deliver. I would then resolve any conflicting points of view about the test documentation deliverables.

In some cases, such as regulated software, a final validation report may be required and there is no discussion. In a nonregulated environment, one change request could be minor, requiring no test documentation, while another effort could be more sizeable, have more nuances to address and include product folklore that might be best documented, even if only briefly, on a team wiki. There are no hard and fast rules, which is why I would build a test document matrix as a guideline, review it at the start of each project and adjust accordingly.
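As a sketch of how lightweight such a matrix can be, here is one expressed as a small lookup table. The project types and document names are illustrative assumptions, not a standard -- adapt them to your own SDLC:

```python
# Illustrative test-document matrix: maps each project type to the test
# documents worth creating for it. Every entry here is an assumption
# for the sake of example, not a prescribed set of deliverables.
TEST_DOC_MATRIX = {
    "change request": ["retest notes in defect tracker"],
    "small enhancement": ["one-page test strategy"],
    "new release": ["test strategy", "test plan", "test summary report"],
    "regulated release": ["test strategy", "test plan",
                          "traceability matrix", "final validation report"],
}

def planned_deliverables(project_type: str) -> list:
    """Return the deliverables to propose at project start.

    An unknown project type raises an error on purpose: it should
    trigger a conversation and an update to the matrix, not a silent
    default.
    """
    try:
        return TEST_DOC_MATRIX[project_type]
    except KeyError:
        raise ValueError(
            "No guideline for %r; review with the team and extend "
            "the matrix." % project_type)
```

The point of keeping it this small is that the matrix is a starting guideline to review at kickoff, not a contract; any project can argue its way into more or fewer documents.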

I would step through the same process with the types of testing for each project. Communicate with the team what you expect to provide. Resolve conflicts if needed, adjust throughout the project, and keep communicating so that any changes made are understood. I expect change to take place on projects, as few things in life remain static.

For example, you might start out on a project believing that you can build test automation and discover that, for some reason such as staff changes, the automation cannot be built. Sometimes the information you have at the start of a project is not enough to plan fully, so it becomes important to adapt as needed.

In terms of test strategy, every effort requires a strategy if you view one from the following perspective: a strategy answers how I am going to achieve something, tactically and specifically. To me a test strategy is about answering: How am I going to get this effort accomplished? What resources do I need? What constraints do I have, and what are possible solutions?

The question I think you're really asking is how detailed a test strategy should be, and whether it needs to be formally written and signed off. If your experience with test strategies is that they are long, weighty documents, I would suspect you would not want to write and deliver one unless the project effort was major in scope. I write strategies frequently and have written strategies as short as a page in length. I use strategies to communicate with project team members as much as for any other purpose. So whether "a strategy is required" depends on what a strategy is to you. To me, strategic planning for everything I work on makes sense, and I only build a document as large as it needs to be and as formal as the environment requires.

In terms of test plans, again the scope influences the decision. If a change request is simple, I may write my retest notes within the defect tracking system itself. If a CR is used for an effort that is larger than a single CR might imply -- for example, a CR written to upgrade a database server -- I might find that writing a full test plan makes more sense for that CR.

I think you're looking for some general guidelines to apply, but the type of product, the type of environment -- regulated or nonregulated -- and the expectations that exist within the current company culture are considerable variables. You might begin with a review of the past to see what has been effective as you map out what you believe will work for the future. Over time you can build history and learn from experience -- and chances are that when you change jobs, or even as the company culture shifts over time, you'll find you have to change again. I would emphasize building test documentation to the degree that it's helpful and practical. And I would build out testing to the extent that it makes sense for each project, reviewing each project, small, medium or large, as the work becomes available to you and your team.
