
Software testers must understand the business side of software quality

Software testers are part of a team with a variety of quality responsibilities, explains testing expert Scott Barber. Testers who feel that their concerns are ignored ought to view their software project in its business context and consider the issues the business side must deal with.


I've been to quite a few conferences this summer, and at each I've had at least one conversation similar to the following:

Tester: Scott, how can I get the team to take me seriously?

Me: You don't feel that you are being taken seriously? Can you give me some examples?

Tester: Well, I don't get the amount of time I tell them I need to test, the UI gets changed even after I tell them that changing it breaks my automated scripts, my defects are regularly reprioritized, and the system goes into production even when I'm reporting that it's not ready.

Me: Why do you think that is?

Tester: <confused look> Um, what do you mean? They obviously don't care about quality.

Me: Have you considered that there might be completely valid reasons that those decisions are being made?

Tester: Valid reasons for accepting bad quality?!? Such as…?

I am continually amazed by responses like this, but I guess I shouldn't be. Most testers I meet simply have not been exposed to the virtually impossible business challenges that development projects regularly face -- challenges that often lead to decisions which, taken out of context, appear completely counter to a commitment to quality. The fact is that a huge number of factors influence a software development project, and at any particular point in the project some of them may rightly take precedence over an individual tester's assessment of quality. Given that lack of exposure, it's no wonder testers seem to habitually take a "my team doesn't listen to me" point of view.

I blame managers for this situation more than I blame testers, but I hold testers at least partly responsible for not making more of an effort to understand the logic behind these seemingly bizarre decisions. That line of thinking drives how I address questions such as, "What valid reason could someone have for accepting bad quality?" For example:

Me: How about meeting contractual deadlines, changing the UI because someone figured out that it was confusing to target users, prioritizing defects in such a way as to encourage stakeholders to start budgeting for "release two," or going into production to ensure that something is available when the TV commercial runs?

Tester: Why not change the contract, train the users, tell the stakeholders to plan for another release, and cancel the commercial? Wouldn't those be better decisions?

Me: Sometimes, but it's not always that easy. Sometimes those options cost many millions of dollars. Can you honestly say that the bugs that get out as a result of you not having the schedule you'd like, UI changes, reprioritized defects, and going into production before you are ready will have a loaded cost greater than canceling a TV commercial three days before it's due to run?

Tester: Probably not, but if they are going to ship anyway, why bother testing?

Me: Sometimes there isn't a good reason. In fact, sometimes I do recommend that some teams stop, or at least drastically scale back, testing activities because it is a waste of time and money if the information that is collected isn't going to get used. On the other hand, maybe the stakeholders are funding the testing effort to figure out what support staff will be needed once the system goes into production or how many extra people will need to be on hand in the call center until the first patch release.

This is about the point in the conversation when the tester either decides that I don't know what I'm talking about and walks away or decides that he should probably ask some different questions when he gets back from the conference.

I recognize that it is completely reasonable for people to draw conclusions based on the information they have. I further recognize that when those people are testers who are good at what they do because they don't blindly trust things that don't make sense, it's easy to draw the "my team doesn't care about quality" conclusion.

I even recognize that the last thing a business executive wants in the board room while trying to decide whether to lose money as a result of buggy software or to lose money as a result of the software not being ready on time is a tester saying, "It's not about the money; it's about making quality software!" But most important, I recognize that testers need to stop ignoring the fact that there are valid perspectives other than their own from which sound software development decisions can be made.

There was a time when it was commonly believed that the earth was the center of the universe. It was a reasonable belief that made perfect sense from our perspective. It's not like at the time we could look through a giant telescope, or launch a deep space probe to get a different point of view. I actually think that this "Center of the Universe Syndrome" is natural. Given no reason to do otherwise, why wouldn't a person look at something from his own perspective?

Unfortunately, this Syndrome is not particularly helpful to a tester who is genuinely concerned with software quality. To really have a positive impact on quality, I think testers would do well to consider the following before deciding that the team or company doesn't care about quality, or doesn't take testing seriously:

    • Someone, somewhere, is paying for this team to develop this software. In most cases that someone considers a profitable project to be of acceptable quality.

    • Few developers would have jobs if not for that someone paying for them to develop software for the purpose of making a profit.

    • Even fewer testers would have jobs if it weren't for those developers trying to build software for the someone looking to earn a profit.

When all is said and done, in many organizations the test team is no more at the center of the universe than the Earth's moon. Think about it. In your team, does the test team (the moon) orbit the development team (the Earth), which is guided by the gravity of the business (the sun), which in turn is weaving a path through the universe of business, finance, and competitive pressures? If so, maybe it makes sense to think about the things you can influence -- such as testing methods, improved communication and test prioritization -- as opposed to things you probably can't -- like budget, business priorities, and contractual obligations.

In much the same way, the Earth's moon influences the tides, causes solar eclipses, and inspires awe and a spirit of exploration in the inhabitants of Earth, yet it doesn't seem to feel slighted because it can't change the path the Earth takes around the sun.

----------------------------------------
About the author: Scott Barber is the chief technologist of PerfTestPlus, vice president of operations and executive director of the Association for Software Testing, and co-founder of the Workshop on Performance and Reliability.

This was last published in August 2008
