
The case for software tester, analyst partnerships

Richard Bender of Bender RBT Inc., a firm specializing in Requirements Based Testing (RBT), has more than 40 years of experience in software, with a primary focus on quality assurance and testing. He has consulted internationally for corporations large and small, government agencies and the military. He is giving a presentation at the upcoming StarWest conference on the role of the analyst in testing, which he discusses here.

How has the role of analyst evolved in software development projects?
Initially we were all jacks of all trades. Over the years people started recognizing that you needed domain expertise, so we moved toward specialization. And people recognized that testing was a specialty unto itself, which the hardware people had recognized for many decades. In more recent years, to cut budgets, organizations severely curtail staff on the test side and have the analysts take the primary role in testing. I understand the rationale: the analysts wrote the requirements, therefore they should be able to test the code to make sure it meets the requirements. But that has a series of fatal flaws.

What are the challenges/issues of analysts doing testing?
The majority of defects have their root cause in bad requirements, so if the people writing the requirements simply assume the requirements are perfect, more than half of the defects never get flushed out of the system. It also forces the whole process to be sequential, because the analysts won't get to testing until they finish the requirements. Usually it's not until coding is well under way that they get serious about testing, and by then developers are stumbling across problems with the requirements. So they never really get traction on the testing until you're deep into the coding side, and then everyone blames testing for why the project is late.

When you have a separate test group working in partnership with the analyst, you can have about 90% of the functional test cases on the shelf before the project starts, so you're getting a lot more concurrency. Plus, by having separate testers you get checks and balances. You have dedicated resources, and they can work more concurrently. Also, one of the more underappreciated things is human nature. If you're an analyst and you find a defect in the requirements, it means you did something wrong. If you're a tester and you find something wrong with the requirements, you did something right. People like to do something right, so people whose job it is to find defects will do that really well.

Does partnering add people or costs to a project team?
A good partnership between the analysts, testers and developers actually reduces the total headcount and the time and cost to market by about 25-30%, and when you deliver, you're at or near zero defects. That's because you're minimizing scrap and rework, where we spend a huge portion of our effort. If you have people testing the requirements early, you get those defects out, and the process we use leads pretty quickly to defect avoidance, not just defect detection.

What are some examples of how analysts and testers partner?
One of our key processes is ambiguity review. By looking for problems early in the requirements-writing process, we find that after we've reviewed four or five use cases from a given analyst, the initial ambiguity rate drops by about 95%. The analyst has gotten feedback about what was not clear in the prior use cases and tends to clean that up. What's more critical is that we have studies showing that if something is ambiguous in the requirements, there's nearly a 100% probability of one or more defects in the resulting code. So by getting rid of 95% of these issues, you're eliminating that portion of defects before the coding process even starts.

How else do analysts/testers partner up front?
One of our test steps is validating the requirements against the objectives, to make sure the functions and features we deliver are focused on solving the fundamental goals of the application. Another is scenario-driven testing of the robustness of the requirements, which is the what-if game. Those scenarios are test cases. The testers have a fairly global view of the application, while analysts and developers tend to have a fairly stovepiped view because they're each working on a piece of the system. Between the two groups, they come up with a robust set of scenarios. Testers are really good at coming up with exception cases.

Then, as the requirements get written, the testers and analysts can do the ambiguity reviews in partnership. Once that's done, the testers design the test scripts. If you're taking a rigorous approach to testing, it's a critical skill to be able to design a sufficient set of tests that mathematically map to the features, functions and requirements, and that can verify that all of the design and code has been implemented. So we like to focus the testers on that skill. Then the testers take those test cases back to the analyst for review, to make sure they understood the requirements properly, and the analysts [may] actually find bugs in their own requirements by looking at the test cases. Then together they take those test cases back to the product manager and the users, and ask them to review the tests, and this is happening in real time with writing the requirements.
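Bender's RBT methodology derives tests from the requirements themselves (using techniques such as cause-effect graphing, whose details are beyond this interview). As a rough, hypothetical illustration only, the sketch below derives a test case for every combination of a requirement's conditions; the requirement, the `discount` function and all values are invented for the example, not taken from the interview:

```python
from itertools import product

# Hypothetical requirement (invented for illustration): "Apply a 10%
# discount when the customer is a member AND the order total exceeds $100."
def discount(is_member: bool, total: float) -> float:
    return round(total * 0.9, 2) if is_member and total > 100 else total

# Derive one test case per combination of the requirement's two conditions,
# so each cause is exercised both true and false, and record the expected
# outcome straight from the requirement text.
cases = []
for is_member, over_100 in product([True, False], repeat=2):
    total = 150.0 if over_100 else 50.0
    expected = round(total * 0.9, 2) if (is_member and over_100) else total
    cases.append((is_member, total, expected))

# Running the derived cases verifies the code against the requirement.
for is_member, total, expected in cases:
    assert discount(is_member, total) == expected
```

Because each case traces back to a specific condition in the requirement, reviewing the case table with the analyst can surface ambiguities (e.g., is exactly $100 "over"?) before any code is written.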

Does this speed the process?
Our goal is to give the author concrete feedback within 24-48 hours of writing a use case, and then, once those ambiguities are resolved, to have the test cases for that use case done within another day or so. That allows you, either concurrently or with a very short lag, to give the users the requirements and the test cases. With complicated functions it's actually easier for users to review tests than requirements, so you're moving user acceptance testing up before coding starts.

How do analysts/testers partner later in the test cycle?
Building the executable tests, running the tests and verifying the results is another area where the analysts and the testers can partner. The analysts obviously understand what they intended, and by actually running the tests you're getting a feel for usability. There's a strong partnering between the analyst and the tester, as opposed to an adversarial relationship, or, even worse, the analyst taking over responsibility for the testing; I have never seen any organization without a strong test group that produced high-quality software. Through good partnering, you minimize scrap and rework, [and achieve] early defect detection, shortened time and cost to deliver, and really strong quality at delivery.
