The QA/testing space has always been stalked by the "Politics of Science" -- a desire to move the results in a direction that meets the expectations or desires of senior project leaders. It's what I refer to as the "rush to failure": release decisions driven by factors surrounding the application that have more to do with perceived rewards (bonuses, market share, and political expediency) than with the actual value proposition of the software.
QA/testing, as one of the last gates before delivery, has always felt the brunt of this pressure to release software -- software that may actually represent a negative value proposition in production. With the introduction of adaptive development methodologies such as agile software development, Extreme Programming, and Scrum, this pressure has become more strident and the timelines more compressed. To be fair, this is more often a result of "agile-like" development (read: undisciplined cowboy development) than of the actual application of adaptive methodologies.
QA/test manager yields to pressure to expedite release
Under these new pressures, QA/test managers are pushed to take risks in terms of testing scope and schedule that result in inadequate test coverage and, ultimately, unexpected failures in production. The QA/test manager is then often called on the carpet to answer this typical series of questions:
- Why didn't you test for this?
- Why didn't you explain the risks?
- Why didn't you ensure the quality of the product before signing off?
The simplest response would be that you were pressured into a go decision that you knew was wrong. Though this may in fact be true, that response will not gain much traction in a politically heated environment -- and for the most part, QA/test managers do not have the political experience to win there.
QA/test managers need to focus primarily on the quality of the product and the completeness of testing -- not deadlines. Of course, this conflicts with "but we have to get it out the door so we can make money." It is a constant battle. The QA/test manager should focus on quality, while the project management team focuses on the risk/benefit analysis. Let's look at each of the questions above and determine what could be going on.
1. Why didn't you test for this?

If something wasn't tested, it's usually because the feature was deemed out of scope for testing or because adequate resources were not made available to test it. The question arises for one of two reasons: the questioner is trying to disavow ownership of the issue, or the questioner did not truly understand the scope of testing and the inherent risks.
There isn't much you can do about questioners disavowing ownership except to clearly communicate scope and risks on an ongoing basis. The second reason, however, can be addressed by working together to truly understand and communicate the testing scope and its inherent risks so that informed decisions can be made. This usually takes the form of a weekly or daily status report coupled with a mechanism for expediting issues.
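To make the idea concrete, the status-report-plus-escalation mechanism might look something like the sketch below. This is purely illustrative -- the `TestStatusItem` structure, its field names, and the `escalate` flag are my own assumptions, not a standard reporting format:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TestStatusItem:
    feature: str        # what is (or isn't) being tested
    in_scope: bool      # agreed testing scope, communicated up front
    risk: str           # plain-language risk if this goes untested
    escalate: bool = False  # routes the item through the expedited channel

def render_report(items: List[TestStatusItem]) -> str:
    """Render a plain-text status report with escalated items listed first,
    so decision-makers see the expedited issues before anything else."""
    lines = []
    for item in sorted(items, key=lambda i: not i.escalate):
        scope = "in scope" if item.in_scope else "OUT OF SCOPE"
        flag = " ESCALATE" if item.escalate else ""
        lines.append(f"[{scope}{flag}] {item.feature}: {item.risk}")
    return "\n".join(lines)
```

The point of the sketch is not the code itself but the discipline it encodes: scope and risk are restated in every report, so no one can later claim they were never told.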
2. Why didn't you explain the risks?

In most cases the QA/test manager did attempt to explain the risks but was ignored, or simply did not explain them in a way the decision-makers understood. This question carries many of the same challenges as "Why didn't you test for this?" but with the additional weight of serious production failures.
This is a communication issue between the QA/test manager and the decision-makers. The QA/test manager has to communicate risk in terms of its impact on the value proposition of the software: if this risk becomes a production issue, what is the impact on the business? The technical reasons behind the risks still need to be communicated, but the business impact is the paramount message. Include the technical dissertations as attachments or appendices; do not cloud the risk with technical details that may not be understood.
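The "business impact first, technical detail in the appendix" framing can be sketched as a simple structure. Again, this is a hypothetical illustration -- the `Risk` fields and the summary format are my own assumptions:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    title: str
    technical_detail: str  # belongs in the attached appendix
    business_impact: str   # the headline message for decision-makers
    likelihood: str        # e.g. "high", "medium", "low"

def executive_summary(risk: Risk) -> str:
    """Lead with the business impact; deliberately omit the technical
    detail, which travels separately as an attachment."""
    return (f"{risk.title} (likelihood: {risk.likelihood})\n"
            f"Impact if released: {risk.business_impact}\n"
            f"(Technical analysis attached.)")
```

The design choice worth noting is that the summary cannot accidentally include the technical detail: the structure itself enforces the separation the article recommends.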
3. Why didn't you ensure the quality of the product before signing off?

This is one of the most interesting questions that can be posed to a QA/test manager. Testing cannot be responsible for the quality of a product, since testing did not design, build, fix, or deploy it. Testing simply measures the product in terms of expected vs. observed behavior and provides this information to the decision-makers, who determine whether an observation should be addressed before the product is released.
The expectation being conveyed with this question is that testing behaves as a safety net to trap quality issues before they reach production. The problem with that assumption is that testing alone does not have the capacity to ensure a quality product is released. The QA/test manager must communicate what testing can and cannot bring to the quality question. And if the expectation of testing as a safety net remains in place, move testing further up the development stream into the design and development stages.
QA/test manager does not yield to pressure to expedite release
Experienced or empowered QA/test managers often do not yield to pressure to expedite the release of an untested product. This certainly prevents inadequate test coverage and should prevent unexpected failures in production. In that case, the QA/test manager will instead be challenged to respond to these questions:
- Why isn't testing completed on time or on budget?
- Why is testing taking so long?
And these would be the very predictable responses:

- Why is testing not completed on time or on budget?
  - The rest of the project was delayed or poorly implemented.
  - Timelines were not agreed to and proved to be unrealistic.
- Why is testing taking so long?
  - The test environment was not ready.
  - The code was not ready.
  - The test data was not available.
  - Bug fixes are taking too long.
Taking into account our earlier discussion of the QA/test manager who yielded to the pressure to release, or was not empowered to delay the release, how can you better manage the expectations of your peers and decision-makers? This breaks down into challenges that are common in IT and especially pertinent to anyone managing a QA/testing effort: failure to communicate clearly, poor project/test planning, and lack of testing capacity.
About the author: David W. Johnson (DJ) is a senior computer systems analyst with over 20 years of experience in IT across several industries. He has played key roles in business needs analysis, software design, software development, testing, training, implementation, organizational assessments and support of business solutions. David has developed specific expertise over the past 12 years on implementing "Testware," including test strategies, test planning, test automation and test management solutions. You may contact David at [email protected].