
Techniques for successful software test teams

Challenges abound when managing a test team. Judy McKay, author of Managing the Test People, talks about those challenges, how to maintain morale, and how to break down the wall between developers and testers.


How has the role of quality assurance and testing evolved?
It's becoming more recognized as a career specialty. In beginning companies, testing is done initially by the developers. They realize they don't like it or they're not very good at it, so they bring in a few test people or they use the lesser developers to do testing. That doesn't work out so well, so they bring in a test group. It evolves that way. But now you see the advent of professional testers and professional test certifications. Testing is becoming a more recognized specialty.
Read an excerpt from Managing the Test People
In Chapter 6, "Keeping Your Beast Effective," Judy McKay examines the root of an effective test team. She also explores effective communication techniques and how to determine the optimal composition of your team.

What are some of the unique management issues in QA and testing?
The pressure and the feeling, unfairly but truly, that they're held responsible for quality. You can test the bejeebers out of software, but you can't make it work better. All you can do is document it. And certainly we feed that back to improve the general process. But to be held responsible for the quality, particularly if we're not involved until it's completed, it's not fair. And to try to keep people motivated is difficult. When you hire testers you're looking for the perfectionists. Those are the people who are so frustrated when they're not allowed to do the job they feel needs to be done. This is why you see the long hours people work. It's hard to keep them from burning out.

What are some other people issues a QA or test manager faces?
Turnover can be one, and that generally comes from frustration. Sometimes compensation isn't adequate; in some cases the developers are paid proportionately way more than the testers. I've been pretty successful in taking it back to HR and comparing skill levels and training and experience and job knowledge, but it's still a big issue in a lot of organizations. Also, project bonuses can be a real de-motivator if based on an end date. The quality people are supposed to be making sure the quality is acceptable, not that the date is there. That can cause a lot of problems within the group. The people who make really good testers will stand up and say, "No, it's not going." But you don't win popularity contests that way, so it takes a good manager to back them up.

Your book suggests that developers are predisposed to judge QA and testers as evil. How can testers change that perception?
The less experienced developers feel like they're being attacked. The more experienced developers say, "Thank you for finding this before it shipped." But you can get in this adversarial role of "You people are just evil. Why would anyone do that to the software?" And to a point, yes, we are sitting back there rubbing our hands saying, "I bet I can break this." But that's our job, to break it before the customer does. So it's a necessary evil. We do bring a different mindset than the developers because the developers are creating, and they sometimes look at us as destructive rather than trying to help them build the best product they can, as a team.

McKay's tips for hiring good software testers
Attitude. More than anything, that's what I hire for. I can teach someone the technical stuff if they're smart, but I can't teach them to have good negotiating skills, to be diplomatic, to have an attitude of interest and curiosity toward their jobs, and to really care.

Perfectionists. But it's a fine line. You want it to be perfect, but you have to be able to accept that it's not. You want someone who's a realist but is always trying.

Honesty. If things aren't going well, you need to tell me. Or if you mess something up, you need to tell me. I don't want to hear everything's great; that doesn't help anybody.

Confidence. Not cocky, but being confident in what you know and what you're finding, and being able to present that professionally. But not being so overconfident that you annoy everyone.

Organized. There are just so many things going on all at the same time. It's an interrupt-driven job, and people don't survive very long if they can't deal with the interruptions.

Maturity. Being able to deal with things as an adult.

Empathy. The ability to think about what developers are thinking, what the customer's thinking, what management's point of view is. Being able to put yourself in someone else's shoes. Being able to say, "I know the developer's tired and under pressure and he's made 400 fixes today, and I'm coming in with one more problem."

Sense of humor. People last a lot longer if they have a sense of humor. They're happier in their jobs, they're easier for everybody to work with, and they're less likely to be offended.
[To change that perception], make it very clear that we are working together to create a product -- I am not criticizing what you did. I found that makes a huge difference, and I hammer that into people -- to watch their tone, and to remember you're not criticizing the developer; we're working together to make a better product. As soon as you start bringing that perspective, it changes around really quickly.

With an increased effort to build quality into the software development lifecycle, are these walls starting to come down?
It's becoming more accepted that the test people are more than just test people, that they can be involved early and contribute valuable information. You only have to go to one requirements review and point out one major flaw. And that's it -- the wall's gone. It's not that hard to fix, but [QA] people say, "Well, we're not invited to the meetings." So, invite yourself. I wormed my way into a database design meeting one time by bringing in a box of donuts. I only had to point out one flaw in their design and they said, "You can come to our meetings, and you don't even have to bring the donuts." A lot of these walls are built because people haven't really tried to walk across the hall.

You say the goal of QA is not to find bugs, but to thoroughly execute a test plan that mitigates risk. Can you talk about that?
You can't just focus your goals on finding bugs. You also have to make sure that one of your goals is to cover enough of the software to mitigate the risk you planned to mitigate.

So I sit down with the project stakeholders and ask what matters, because some things are more important than others. I try to rank everything. I usually take the requirements or whatever I've got and look at the quality risk aspects of it, like what about performance, what about security -- things that might not be stated in the requirements, as well as all the things in the requirements. Then I organize it so I can give these guys a number so that I know how important these are. If I run out of time, which I usually will, I know I've tested the most important things first. It [also] lets the project team get in on the enormity of the task of testing, of how much stuff we have to consider. There's still that tendency to underestimate the test effort industry-wide. It creates the awareness that you're probably not going to have time to test everything, so it pulls them into our world. The other part is it helps keep the testers on track with what really matters.
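As a rough sketch of that kind of ranking (the risk areas, scores and weighting below are hypothetical, not figures from McKay), a test lead might score each quality risk aspect by likelihood and impact and then work down the list from the top:

    # Hypothetical risk-based prioritization: rank quality risk areas by
    # likelihood x impact so the most important things are tested first.
    risk_areas = [
        {"area": "checkout performance", "likelihood": 4, "impact": 5},
        {"area": "login security",       "likelihood": 3, "impact": 5},
        {"area": "report formatting",    "likelihood": 2, "impact": 2},
    ]

    ranked = sorted(risk_areas, key=lambda r: r["likelihood"] * r["impact"], reverse=True)
    for r in ranked:
        print(f'{r["area"]}: risk score {r["likelihood"] * r["impact"]}')

If time runs out, everything below a chosen cutoff simply goes untested, which makes the trade-off visible to the whole project team.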

[A test plan] also tells me how much emphasis I need to put on something. So if this is something that's a critical risk area, then I'll need to test it more thoroughly. It also may determine who I give the testing assignment to.

You write that a common mistake in QA management is to take ownership of problems that aren't yours.
Say, for example, the developers don't do unit testing. So we work the longer hours and basically take responsibility for that testing that wasn't done upstream, which is the wrong thing to do because it means we're doing their work and we're not getting the time we need to do our work. So we need to be sure that we document where that bug should have been found. If that should have been found in unit testing and it was found in system testing, then we've got a hole in the process and we've got to fix that because that means there's a bunch of system testing work that's not able to get done because we're falling over unit test bugs.
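One way to turn that documentation into a number (a minimal sketch; the bug records and phase names below are made up for illustration) is to tag each bug with the phase where it was found and the phase where it should have been caught, then count the escapes:

    # Hypothetical phase-containment check: bugs found in system test that
    # belonged in unit test indicate a hole in the upstream process.
    bugs = [
        {"id": 101, "found_in": "system test", "belonged_in": "unit test"},
        {"id": 102, "found_in": "system test", "belonged_in": "system test"},
        {"id": 103, "found_in": "system test", "belonged_in": "unit test"},
    ]

    escapes = [b for b in bugs if b["found_in"] != b["belonged_in"]]
    print(f"{len(escapes)} of {len(bugs)} bugs escaped an earlier test phase")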
How do you use this data to build your case for the return on investment (ROI) of QA?

There are some pretty easy cost of quality numbers you can run that show how much cheaper it is to find bugs earlier in the cycle. It's a pretty easy sell once management sees those kinds of numbers, and it really helps with [the argument of] "We don't have time to do unit testing" or "We don't have time to have requirements reviews."
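As a back-of-the-envelope illustration of those cost of quality numbers (the per-phase fix costs and bug counts below are placeholders, not data from the interview), the case is simply the cost difference multiplied by the bugs that escape each phase:

    # Hypothetical cost-of-quality comparison: the later a bug is found,
    # the more it costs to fix. All figures are illustrative placeholders.
    cost_to_fix = {"unit test": 500, "system test": 2500, "production": 10000}
    bugs_found  = {"unit test": 40,  "system test": 25,   "production": 5}

    actual_cost = sum(cost_to_fix[p] * n for p, n in bugs_found.items())
    if_caught_early = cost_to_fix["unit test"] * sum(bugs_found.values())
    print(f"As found: ${actual_cost:,} vs. everything caught in unit test: ${if_caught_early:,}")

With placeholder numbers like these, the same 70 bugs cost $132,500 to fix where they were actually found versus $35,000 if they had all been caught in unit testing, which is the kind of comparison that makes the earlier-testing argument an easy sell.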


Judy McKay is the author of Managing the Test People. She has spent the last 25 years working in the high tech industry with particular focus on software quality assurance. Her career has spanned commercial software companies, aerospace, foreign-owned R&D and various Internet companies. She has been conducting training seminars nationwide for eight years.
