
Building an Agile test practice: Q&A with advocates for quality

What does it take to add a test practice on top of a high-functioning Agile team? The task at Menlo Innovations was to incorporate QA into their practices. How did they do it? Matt Heusser interviews two quality advocates from Menlo Innovations to find out.

Many development organizations today exist without a testing group. When Extreme Programming (XP) was first popularized, it did not even have a role for test or QA on the team.

Times have changed, and some XP teams are adding testers to the ranks. For the people at Menlo Innovations, it's been an ongoing process. SSQ contributor Matt Heusser sat down with Tracy Beeson and Lisamarie Babik, two advocates for quality at Menlo Innovations, and discussed what they've learned -- and where the journey is taking them next.

SSQ: Can you set the stage for me -- what was the state of the team before setting up a QA discipline? What made you think you wanted one?

Beeson: In 2002, I'd just lost my job and we'd recently moved to Michigan. One Saturday morning, my husband and I were walking down the street when we passed Menlo's door. We got to talking with James, who was in the office at the time. He invited me to take their Agile Explained class. I was at the point in my career where I'd been teaching automated testing software but had never actually "done QA," so I was looking for a position that would let me do QA for real. After taking the class, Agile -- and the methodology they were describing -- made a lot of sense to me.

Unfortunately, at the time, Menlo was still a small startup and didn't have much QA work available, so I went on to work at a couple different companies in Ann Arbor. I did keep in touch with Rich and James. In 2007, James finally "officially" approached me about coming to Menlo to help them incorporate QA into their practices.

Babik: Since 2007, it's been an evolving process; Tracy didn't just come in and boom! we had QA. We're still experimenting, and we haven't found a perfect mix yet. We've experimented with new tools; for example, we tried incorporating FitNesse tests for functional testing. Unfortunately, it's an issue of cost. While we still have some FitNesse tests for some of our projects, overall, our clients haven't been able to strike a balance between cost and value for automated functional testing.
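For readers who haven't seen FitNesse in practice, here is a minimal sketch of a SLiM-style decision-table fixture in Java. The LoginValidation class, its rule, and the wiki table in the comments are hypothetical illustrations, not Menlo's actual tests; a real fixture would delegate to production code rather than contain the logic itself.

// A FitNesse/SLiM decision table names a fixture class and lists input
// columns (plain headers) and expected-output columns (headers ending in "?"):
//
//   |Login Validation            |
//   |username|password|valid?    |
//   |alice   |secret  |true      |
//   |bob     |        |false     |
//
// For each row, SLiM calls the setter for every input column, then the
// method matching the output column, and compares the return value.
public class LoginValidation {
    private String username;
    private String password;

    public void setUsername(String username) { this.username = username; }
    public void setPassword(String password) { this.password = password; }

    // Compared against the "valid?" column for each row.
    public boolean valid() {
        return username != null && !username.isEmpty()
            && password != null && !password.isEmpty();
    }
}

Tables like this are cheap to read but, as Babik notes, the fixtures behind them still have to be written and maintained, which is where the cost-versus-value question comes in.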

Establishing the test/QA practice

SSQ: So now you've hired someone -- did you start with just one? What did she do on day one, day two, day three?

Beeson: My first few weeks were spent learning what Menlo does, what works and what doesn't work, and why. I remember being pushed to be involved in the planning game and defining stories, which was so different from other companies, where testers were expected to be seen and not heard. In those other places, QA was literally banished to the basement, expected to work overtime hours, pushed to get products out the door, and blamed when those products were released with defects. At Menlo, the expectation was that I would collaborate with the team, work in the same space (no basement!), and advocate for quality. The key is that before you can change a process you need to understand it. Menlo's process was working well; we just wanted to step it up a notch.

Babik: The most visible thing we did was add a QA step between code completion and show & tell for the client. At Menlo we call this process "green dotting," and the decision to green-dot a story is a collaborative one involving developers, High-Tech Anthropologists®, and at times the project manager. We start by asking questions about how the software was built, as described by our developers. Then we consider how the desired functionality is described by our High-Tech Anthropologists®. Then we examine the difference between the two for weaknesses. The final step is exploring the software by hand. We make a list of items that aren't done, and then the developers go complete the work. In some cases, we'll move the story forward but write other cards (requirements) for follow-up work.

SSQ: So how did Tracy land at Menlo; what was the interview process?

Babik: Rich and James had been getting to know Tracy for a while, but she did not go through our traditional hiring process -- our ritual of extreme interviewing. This is our company's equivalent of "joining the tribe" -- a common experience that all team members go through before joining the team. It was a bit of a hiccup that Tracy didn't get hired that way.

Beeson: What they did do, however, is pull together a group of about eight to ten current team members to pepper me with questions so they could get to know me. It was a good compromise.

SSQ: How did you decide on your testers-to-everything-else ratio?

Beeson: It's a work in progress. Right now we have two testers; we've fluctuated between two and four, and sometimes other people fill in a QA role part-time. I would say we don't have enough (laughs). As a compromise, we use cross-functional pairing (with developers and High-Tech Anthropologists®) to accomplish QA tasks, which allows us more flexibility and builds a greater understanding across roles.

SSQ: How do you choose to staff QA?

Babik: We play the "planning game" once a week for all work billed to a client, and QA is planned just like any other resource -- clients make choices about which stories will be played. The factors that go into those choices include the current stage of the project, proximity to a deadline (such as a trade show), the sheer amount of development work going on, and so on. For example, if there are six pairs of developers, we'd need a couple of pairs of full-time QA folks.

Tips and Techniques

SSQ: I understand you've got some interesting collaborative techniques to bring the customer into the conversation of what bugs to fix. Can you tell us about the "Bug Board"?

Babik: Rather than managing our "bugs" in a database, where they can get lost, we've created the "Bug Board" to keep them front and center.

Beeson: Every bug discovered by us, our clients, or the users is written on a story card just as a new feature would be. The title and the story number are then written on a colored sticky note. The color of the sticky note represents a cluster of defects, with the brightest color representing the worst thing we could do to the user. For example, on the Accuri Cytometer project, we use fuchsia-colored sticky notes for bugs related to loss of data. Each week we ask our client to take the sticky notes and place them on the "Bug Board" according to their likelihood of occurrence and their impact on the user. The board's two axes create four quadrants that can then be used to prioritize bugs during the planning game. It also provides a high-level picture of how buggy the app is.
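To make the quadrant idea concrete, here is a purely illustrative Java sketch of that prioritization logic. The real Bug Board is a physical wall of sticky notes; the BugCard class, the 0-to-1 scores, the 0.5 thresholds, and the quadrant names below are assumptions made for this example, not Menlo's actual scheme.

public class BugBoard {

    enum Quadrant { FIX_NOW, FIX_SOON, MONITOR, LIVE_WITH_IT }

    // A sticky note on the board: a story number, a title, and where the
    // client placed it on the likelihood and impact axes (0.0 to 1.0).
    static class BugCard {
        final String storyNumber;
        final String title;
        final double likelihood;
        final double impact;

        BugCard(String storyNumber, String title, double likelihood, double impact) {
            this.storyNumber = storyNumber;
            this.title = title;
            this.likelihood = likelihood;
            this.impact = impact;
        }
    }

    // High likelihood plus high impact lands in the "fix now" quadrant --
    // the equivalent of a fuchsia note in the worst corner of the board.
    static Quadrant quadrantOf(BugCard bug) {
        boolean likely = bug.likelihood >= 0.5;
        boolean severe = bug.impact >= 0.5;
        if (likely && severe) return Quadrant.FIX_NOW;
        if (severe) return Quadrant.FIX_SOON;
        if (likely) return Quadrant.MONITOR;
        return Quadrant.LIVE_WITH_IT;
    }
}

Sorting cards by quadrant during the planning game reproduces, in code form, the high-level picture the physical board gives the client at a glance.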
