In my previous article, "Software testing affected by pressure to release software," I discussed several situations that a quality assurance (QA) testing manager may encounter when release pressure mounts. This article continues that discussion, focusing on three recurring challenges: communication, management fundamentals and testing capacity.
Poor communication is a systemic problem in IT, and especially in QA testing. Communication between the QA testing team and other teams (development, deployment, support, etc.) is often ineffective at the organizational level. Clear and precise communication requires a consistent approach that addresses the needs of the target audience. This can be accomplished by crafting a set of templates for status reports, defects, test cases, test plans and any other tool used to communicate. For example, testing status reports should begin with a set of three to four key performance indicators (KPIs) that give a high-level overview of the testing engagement, followed by a standard set of supporting data. Typical KPIs include:
- Overall health
- Milestone schedule
- Resources (staffing)
- Test environments
Example KPI status:
- Green: Issues are low-impact and/or work is progressing as scheduled (less than 10% behind).
- Yellow: Issues need to be monitored closely and/or the schedule is slipping (11% to 20% delayed).
- Red: Issues require immediate resolution and/or the schedule is at risk (more than 20% delayed).
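The thresholds above map directly to code. A minimal sketch in Python (the function name and status labels are illustrative, not from the original):

```python
def kpi_status(percent_delayed: float) -> str:
    """Map schedule slippage to a green/yellow/red KPI status.

    Thresholds follow the legend above: under 10% green,
    up to 20% yellow, beyond that red.
    """
    if percent_delayed < 10:
        return "green"   # low-impact issues, progressing as scheduled
    if percent_delayed <= 20:
        return "yellow"  # monitor closely; schedule is slipping
    return "red"         # immediate resolution required; schedule at risk
```

For example, `kpi_status(15)` reports yellow, signaling that the schedule needs closer attention.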
Example supporting data for an automation implementation program:
Automation implementation program (AIP):
xxxx of xxxx manual test cases replaced by automated test cases: xx.xx % complete
xxxx of xxxx automated test cases delivered: xx.xx % complete
Automated test case:
xxx work-in-progress against a scheduled xxx: xx.xx % on schedule
xxx completed against a scheduled xxx: xx.xx % on schedule
xxx blocked and moved out of scope
Defect count as of HH:MM is XXX:
XXX closed, XXX assigned, XXX in retest, XXX deferred, and XXX new.
Note: Deferred defects are defects that will not be addressed until they are made part of the AIP.
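The "xx.xx % complete" figures in the template above are simple ratios. A small helper (hypothetical, with made-up counts) shows the calculation:

```python
def percent_complete(done: int, total: int) -> float:
    """Return completion as a percentage rounded to two decimals,
    matching the 'xx.xx % complete' fields in the report template."""
    if total == 0:
        return 0.0
    return round(100.0 * done / total, 2)

# Hypothetical figures: 1,250 of 4,000 manual test cases replaced by automation
print(f"1250 of 4000 manual test cases replaced: "
      f"{percent_complete(1250, 4000)} % complete")  # 31.25 % complete
```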
The key is to make the communication clear, concise and nonconfrontational -- just the facts. Whenever possible, these communication templates should be crafted with the participation of the target audience. One added note: there is always some reluctance to place any aspect of an engagement in the red. However, it is critical to report the situation as it presents itself so that the management team can begin to manage the risk.
QA testing organizations have undergone tremendous growth over the last few years -- or have been used as proving grounds for new managerial talent. There is often little, if any, effort to assist or augment these managers as they take on increasingly challenging roles, while their previous roles are filled by ever more junior resources. The result is managers attempting to monitor (or micromanage) new resources while taking on management roles that usually take years to grow into. So, if you are new to the management role or have had little formal training, how do you manage? There are twelve fundamental steps involved in managing any testing engagement:
- High-level analysis (initial planning)
- Assemble test team
- Participate in the review of prior activities (design and development)
- Conduct assignment briefing (optional)
- Conduct detailed analysis
- Walk through system design
- Prepare test plan and schedule
- Prepare test content
- Prepare test environment
- Execute tests -- test cycles
- Prepare/submit test evaluation report
- Post-test engagement review
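The twelve steps above can be treated as an ordered checklist. A minimal sketch (step names taken from the list; the tracking helper itself is illustrative):

```python
# The twelve steps of managing a testing engagement, in order.
ENGAGEMENT_STEPS = [
    "High-level analysis (initial planning)",
    "Assemble test team",
    "Participate in the review of prior activities (design and development)",
    "Conduct assignment briefing (optional)",
    "Conduct detailed analysis",
    "Walk through system design",
    "Prepare test plan and schedule",
    "Prepare test content",
    "Prepare test environment",
    "Execute tests (test cycles)",
    "Prepare/submit test evaluation report",
    "Post-test engagement review",
]

def next_step(completed):
    """Return the first step not yet in `completed`, or None when done."""
    for step in ENGAGEMENT_STEPS:
        if step not in completed:
            return step
    return None
```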
If you can master these twelve steps or stages of managing a test engagement, you will be well on your way to being a successful test manager. If you are a manager who is new to the testing space, draw on the expertise of the seasoned test leads within your existing team -- always take advantage of existing practices that help address the management challenge while striving for continuous quality improvement. If your organization has a project management office (PMO) or senior managers who act as mentors, take advantage of these assets.
Testing capacity is the overall ability to plan and execute testing engagements, and increasing it is essential to managing them well. With increased capacity comes the flexibility to move resources and capabilities from test engagements that are not at risk to ones that are. How does one increase testing capacity? There are several paths available: test automation, test methodologies/standards, training, outsourcing/near-sourcing, Testing Centers of Excellence and resource acquisition (new hires or consultants). Which capacity initiative harvests the greatest return depends on your situation.
Capacity: Test automation
Several industries use standard GUI or Web application interfaces that lend themselves to functional test automation, yet minimal investment has been made to leverage these technologies to alleviate the manual effort associated with testing engagements. Unfortunately, many test automation initiatives fail or do not harvest sufficient returns because of insufficient expertise -- always invest at least as much in training and on-site consulting services as in the automation tool itself. This is especially important during the first six months to a year of test automation ramp-up. A well-structured test automation program should harvest a 4-to-1 return in terms of current hours versus future hours during the first year.
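The 4-to-1 figure can be sanity-checked with a back-of-the-envelope model; all inputs below are illustrative assumptions, not numbers from the article:

```python
def automation_payback(manual_hours_per_cycle, cycles_per_year,
                       build_hours, maintain_hours_per_cycle):
    """Rough first-year return ratio for a test automation program:
    manual hours displaced divided by hours invested in automation."""
    hours_saved = manual_hours_per_cycle * cycles_per_year
    hours_invested = build_hours + maintain_hours_per_cycle * cycles_per_year
    return hours_saved / hours_invested

# Hypothetical program: 400 manual hours per regression cycle, 6 cycles a
# year, 500 hours to build the suite, 15 hours of script upkeep per cycle.
ratio = automation_payback(400, 6, 500, 15)  # about 4.07 -- near the 4-to-1 target
```

A model like this makes the training warning concrete: doubling the build hours (the usual cost of missing expertise) drags the ratio well below the target.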
Capacity: Test methodologies and standards
Most IT organizations do not have a formal testing methodology; this creates an unnecessary burden on test resources and prevents (or short-circuits) any opportunity for initiating testing efficiencies. When the topic of methodologies and standards is brought forward, many organizations fear an influx of unnecessary paper and processes -- that is not the objective. The objective is a repeatable, consistent and measurable process that expedites testing -- today's test management tools provide a paperless mechanism for process improvement. A well-structured test methodology should harvest a 3-to-1 return in terms of current hours versus future hours once the methodology has been internalized by the test organization. Before moving forward with this type of initiative, ensure that you have the correct mix of on-site expertise -- for example, an experienced test architect.
Capacity: Outsourcing and near-sourcing
By its very nature, an offshore program is prone to difficulties with cultural differences, communication and relationship building. Offshore components are often perceived as a location for the mechanical, simple aspects of any testing effort. This division of labor raises the managerial cost of performing testing and addresses only the most basic testing needs. The overhead to the QA/testing organization, in terms of both managerial cost and the collateral lost to the challenges of engaging offshore resources, may prevent harvesting a significant return from the offshore component. A successful offshore component requires an experienced offshore or near-shore partner and the process maturity to truly leverage these resources.
Capacity: Training
A strong testing team has both business knowledge and test process knowledge. Testing resources assigned to projects should be given the opportunity to receive training in how to use the applications under test. They may be placed in standard training sessions conducted for customer service or other users. Additionally, they may observe end users' daily work or get a walkthrough of how those users work with the applications day to day.
This type of training gives testers the perspective needed to understand how the application is used, along with an understanding of the business process workflow. There are both hard (current vs. future hours) and soft (networking) returns to be gained from training. At a minimum, the testing organization should expect a 2-to-1 return in terms of hours and a testing team that feels less isolated from the business community.
Capacity: Testing Centers of Excellence (TCoE)
The TCoE model requires the creation of a small group of managers, testing specialists and technical testing components to leverage management practices, testing knowledge, methodology and resources across testing engagements. The testing specialists educate and supplement each engagement's testing resources to ensure consistency -- this allows maximum project penetration with a minimum number of human resources. The TCoE crafts a base of shared management and testing best practices across engagements to increase overall testing efficiency. Reusable test plans, test automation, test cases, fault reports and other deliverables are created; the group maintains a repository of these deliverables as they are produced for each project and provides them as a framework and samples for future efforts. Efficiency increases as the pool of reusable components grows. A TCoE is a testing "force multiplier," and the ROI depends on current testing efficiencies and the current defect/failure rate in production -- a 10-to-1 return on investment would not be an unreasonable assertion.
Capacity: Resource acquisition
Acquiring resources to expand testing capacity should always be considered in the context of long-term versus short-term capacity. To address short-term capacity challenges, obtain the services of experienced testing consultants early enough in the engagement to harvest the maximum return. To address long-term capacity challenges, hire the appropriate mix of talent or form a long-term relationship with one or more testing vendors.
In closing, QA managers must guard against being co-opted into the "politics of science" -- becoming production managers more than QA managers. Focus on measuring the quality of the product, completing testing, and clearly communicating the current status of the product -- not deadlines. Of course this comes face to face with "But we have to get it out the door so we can make money." It is a constant battle -- just ensure you fulfill your responsibility to communicate the true status of the product under test and the current risks associated with releasing the product.
About the author: David W. "DJ" Johnson is a senior computer systems analyst with over 20 years of experience in information technology across several industries, having played key roles in business needs analysis, software design, software development, testing, training, implementation, organizational assessments and support of business solutions. He has developed specific expertise over the past 15 years on implementing "testware" including test strategies, test planning, test automation and test management solutions.
This was first published in January 2009