Managing software testing processes can be difficult. Quality metrics don't always provide the data needed to make informed decisions. As a senior manager, you're expected to make sense of a report full of numbers and then make funding decisions based on the value testing contributes to the organization. And without adequate funding for testing, software quality suffers. You need more than numbers.
“The challenge in how upper management should manage testing processes is that testing is a black art… it’s hard to get a view into what test teams are doing. Upper management wants a view into what these guys are doing so they can see if they’re getting value,” says Alan Page, principal SDET (software development engineer in test) at Microsoft Corp. and lead author of How We Test Software at Microsoft.
“The average test manager or director or VP of quality does not do a good job of connecting to the business value that testing delivers to stakeholders. They don’t have a service-oriented focus,” says Rex Black, president and principal consultant, Rex Black Consulting Services Inc. When test organizations have a service-oriented focus, says Black, “It changes the entire conversation about funding” and shifts the perception of testing from “a process cop that keeps us out of trouble” to “an enabler of delivering software with the proper level of quality on time and on budget.”
The key to better managing software testing processes is working together. Upper management should work “collaboratively with the test team to arrive at the proper way of delivering test results,” says Black. “A best practice situation would be that the test team is a source of information for management that helps them make smarter project decisions, especially with respect to quality.”
This best practice is often the exception. “One of the common mistakes that test groups make when reporting test status to managers is that they do it in terms of bugs found, test cases written and executed, how many pass and fail. These are tactical pieces of information that are meaningful to the test manager, but to a senior manager who’s never been in testing, making sense out of those data points is impossible,” says Black.
“This is what I refer to as the fire hose of data problem. Test managers deliver to management this incredible dump of very detailed data and they see the patterns in it, but they’re not obvious to non-testers. Managers get confused and draw the wrong conclusions,” he says.
As an example, Page offers the testing of a Facebook game. Suppose the game is tested by two individuals: one plays as a typical user would, while the other hunts specifically for functional bugs. One of the testers will find a lot of bugs and the other will not. When upper management looks at the reports, which include the number of bugs found by each tester, they won’t understand where the discrepancy came from and may conclude that one tester isn’t doing the job.
“In the absence of information, people will make things up,” says Page. “So if they don’t know what’s going on, they’ll assume, ‘This person isn’t finding a lot of bugs; they’re not doing anything.’”
These assumptions could be avoided in the Facebook game scenario if management understood the purpose of each test and what the results meant. While requiring test managers to interpret the data in their reports is a step in the right direction, it still may not demonstrate testing’s value. “Successful test teams work with their stakeholders, including executive management, to understand what their information needs are,” says Black. “The typical senior manager’s question is, ‘What are the risks associated with failures when we put it in production or give it to customers?’ Test managers can educate senior managers, explaining that risks can be reduced to an acceptable level,” he says.
“Unless test managers discuss in an intelligent way what they can and cannot contribute to the organization, the typical executive will tend to expect that the test organization, however well or poorly it’s funded, so long as it exists, will be a form of magic pixie dust that will drive out problems before software is delivered,” says Black.
Black advises senior managers who feel they are not getting the information they need to “have a conversation with the test team managers. Determine the test team’s mission, the objectives it serves, and how you measure the effectiveness and efficiency with which those objectives are achieved,” he says.
Senior management must continually communicate its expectations to the test team. “The majority of time I see something break down, it’s a people problem. Either management not communicating what they want or expect, or test not communicating what they’re doing,” says Page.
“I’m a big fan of trust in the workplace and a results-based work environment. I like to give the test team the freedom to do what they think is right -- they’re the experts -- and make sure they can communicate that progress and value to management,” says Page.
How does senior management view the test group in your organization? Which metrics best measure quality? Let us know by sending email to firstname.lastname@example.org.