- Identify all the types of projects that the test team handles, such as change requests, new releases, conversion or upgrade efforts, etc.
- Identify all the forms of testing that are in practice, such as exploratory testing, scripted testing, automated testing, etc.
- Identify all the test documentation "typically expected" or "required" for each type of project.
I would then work to learn about factors that are less tactical yet still important influences, such as:
- What are the expectations of the stakeholders?
- What has been done historically?
- What has worked and not worked?
In terms of test documentation, I would build a document matrix indicating what (if any) test documents make sense to create and maintain for each type of project. At the start of each project, I would share with the team and the project owners what test documentation I intend to deliver. I would then resolve any conflicting points of view about the test documentation deliverables.
In some cases, such as regulated software, a final validation report may be required and there is no discussion. In a nonregulated environment, one change request could be minor, requiring no test documentation, while another effort could be more sizeable, have more nuances to address and include product folklore that might be best documented -- even if only briefly, on a team wiki. There are no hard and fast rules, which is why I would build a test document matrix as a guideline, review it at the start of each project and adjust accordingly.
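As a rough illustration, such a matrix might look something like the sketch below. The project types and document columns shown here are hypothetical examples; each team would substitute its own categories and expectations.

| Project type | Test strategy | Test plan | Final test report |
|---|---|---|---|
| Minor change request | No | Notes in defect tracker | No |
| New release | Yes (brief) | Yes | Summary |
| Conversion or upgrade | Yes | Yes | Yes |
| Regulated release | Yes | Yes | Required validation report |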
I would step through the same process with the types of testing for each project. Communicate with the team what you expect to provide. Resolve conflicts if needed and also adjust throughout the project as needed -- keep communicating so that any changes made are understood. I expect change to take place on projects, as few things in life remain static.
For example, you might start out on a project believing that you can build test automation and discover that, for some reason such as staff changes, the automation cannot be built. Sometimes the information you have at the start of a project is not enough to plan, so it becomes important to adapt as needed.
In terms of test strategy, all efforts require a strategy -- if you view a strategy from the following perspective: A strategy resolves the questions around how I'm going to achieve something tactically and specifically. To me a test strategy is about answering: How am I going to get this effort accomplished? What resources do I need? What constraints do I have and what are possible solutions?
The question I think you're asking is how detailed a test strategy should be and whether a test strategy needs to be formally written and signed off. If your experience with test strategies is that they are long, weighty documents, I would suspect you would not want to write and deliver one unless the project effort was major in scope. I write strategies frequently and have written strategies as short as a page in length. I use strategies to communicate with project team members as much as for any other purpose. So what you mean specifically by "is a strategy required" depends on what a strategy is to you. To me, strategic planning for everything I work on makes sense, and I only build a document as large as it needs to be and as formal as the environment requires.
In terms of test plans, again the scope influences the decision. If a change request is simple, I may write my retest notes within the same defect tracking system. If a CR is used for an effort that is larger than a single CR might imply -- for example, if a CR was written for upgrading a database server -- I might find that writing a full test plan makes more sense for that CR.
I think you're looking for some general guidelines to apply, but the type of product, the type of environment -- regulated or nonregulated -- and the expectations that exist within the current company culture are considerable variables. You might begin with a review of the past to see what's been effective as you map out what you believe will work for the future. Over time you can build history and learn from experience -- and chances are that when you change jobs, or even as the company culture shifts over time, you'll find you have to change again. I would emphasize building test documentation to the degree that it's helpful and practical. And I would build out testing to the extent that it makes sense for each project, reviewing each project, whether small, medium or large, as the work becomes available to you and your team.