
The controversy surrounding the schools of software testing

Discussions about the schools of software testing often turn confrontational when testers should embrace the fact that there are different points of view.

Scott Barber, software tester

Periodically, discussions break out in various software testing communities around the Web regarding the schools of software testing.

As I write this, there are discussions going on in SQAForums, on the Software-Testing Yahoo! group, and on various blogs that (at least up to the time I started writing this piece) reside on or are fed to Testing Reflections. In principle, I'm always pleased when these discussions break out. The point of identifying the schools in the first place was to increase the overall awareness of the diversity in ideologies, practices, and values (i.e., schools of thought) in our field and to stimulate discussion about the situational pros and cons of each. That said, the discussions that actually take place tend to drift off in one or more directions that end up being disappointing, unnecessarily confrontational, and generally not useful.

After witnessing this pattern, participating in these recent discussions, and listening to comments from those who followed the discussions for several years, I've identified several areas in which these discussions go awry. Below, I call those out and share my thoughts about each. But before I do, I would be remiss if I didn't remind folks of the following:

  • I am a self-identified member and champion of the Context-Driven School of software testing. Some people identify me as a thought-leader thereof.

  • I am, and have been for many years, a consultant, trainer and leader of various communities of software testers. As such, I encounter a lot of individuals and organizations who have their own unique collection of views, opinions, favorite practices, default processes, and personal preferences related to testing. For the most part, I've found these individuals and organizations to be thoughtful and effective as testers and to have valuable contributions to make to the field of software testing.

  • I believe that the fact that different people with different experiences in different organizations have different ideas about testing is one of the best parts of our field.

  • I believe that standardizing on any one set of ideas about software testing would be among the worst possible things I could imagine for our field (even if they were my favorite ideas). Our diversity is our strength.

  • While I certainly have my own preferences and biases, I do not believe that they are best, or even appropriate, for every situation.

  • I like it when someone challenges my ideas. Especially when that someone is intelligent, thoughtful, experienced, educated, passionate, and in absolute disagreement with me about something we've both put a lot of thought into. Many of my best ideas are the result of debates I've had where someone shot holes in my theories, refuted my premise, or otherwise led me to enhance my own ideas.

Now that we've got all of that out of the way, my observations:

Some people seem to be offended by the notion of schools of software testing because they didn't like the tone or bias of a particular slide deck, article, presentation, or discussion about the schools.

I have never understood why a person dismisses an idea or stereotypes an entire group simply because one (or even several) presentations of that idea came across as offensive. If you are offended by the way someone expresses an idea, object to the way he expresses the idea. It's not fair (or reasonable) to decide that everyone who supports the idea, or that the idea itself, is offensive because the first person you heard present it offended you. Like it or not, sometimes offensive people have good ideas. And yes, I fully acknowledge that there is no small amount of published material promoting ideas I support that I find offensive.

Some people think that because they don't fit neatly into a single school, it is their sworn duty to publicly oppose the notion of schools of testing.

I don't get this either. I first stumbled upon a presentation about the schools of testing long before I'd heard the names Cem Kaner, James Bach, or Bret Pettichord. I'd never been to a conference, and I don't think I'd started frequenting forums. My first thought was, "Wow, cool! Someone has tried to classify all these divergent views! And here I thought that I was an idiot for not being able to figure out how all these views were the same!"

The notion that there were different points of view about software testing had never occurred to me, but my experiences certainly suggested that different people and different organizations had sometimes wildly different views about what they expected from testing and testers. I didn't declare a school until several years later -- after spending a lot of time considering what kind of tester I wanted to be (well, in addition to "performance tester," of course).

Honestly, just because you don't fit squarely into a particular school doesn't mean no other individual, group, or organization does. And even if the ideas, practices, and values of various schools have cross-pollinated to the point that no individual, group, or organization could reasonably be classified as exclusively belonging to a particular school, that does not negate the fact that some test managers want the authority to say, "This software doesn't ship until I say so," and others would quit before making a decision they feel belongs to an executive, for example.

Furthermore, when I first came across the schools, four were identified. Now, most folks talk about five schools. No one (at least no one credible who I am aware of) claims that these are the only possible schools of thought or that schools won't evolve over time. So, before you attack the notion that there are differences and diversity in our field, consider the possibility that you may actually share a set of ideas, values, and preferred practices with a bunch of other people that might just be the next school to become widely recognized.

Some people think that naming and identifying schools of thought encourages proponents of one school to act superior while vilifying the proponents of the others.

I have trouble believing that giving something a name causes some people to act superior or to attack others. History has shown us that some people simply think they are better than people who think, look, or act differently than themselves and that some people go so far as to attack, oppress, or (in the most vile cases, such as the ones leading up to World War II) eliminate people whom they see as different from themselves.

The fact is differences of opinion exist whether we name them or not, and I'm pretty sure that history has demonstrated that if you want to get past the "us" vs. "them" attitude, it is far more effective for "us" to learn more about differences between "us" and "them" than for "us" to try to eliminate or convert "them." And I'm also pretty sure that the first step in learning about one another is acknowledging one another's existence.

At the end of the day, if you want to debate whether the biases inherent in having testers fully integrated into the development team and reporting to the lead developer are more or less risky than the inevitable blind spots that result from completely isolating the test team from the development team when developing software for a regulated medical device, I'm all in. Interestingly, the fact that one could find people to engage in such a debate demonstrates that there are differing schools of thought in software testing, and it suggests to me that naming and characterizing those schools of thought can only serve to help all testers and the organizations they serve make better decisions and recommendations about what ideas and practices are best for them -- at least for now.

----------------------------------------
About the author: Scott Barber is the chief technologist of PerfTestPlus, vice president of operations and executive director of the Association for Software Testing, and co-founder of the Workshop on Performance and Reliability.


This was last published in December 2008
