Headline: “Google admits Buzz social network testing flaws.” The same headline ran on MSNBC, The Business Insider, and USA Today. Regarding the mess, software testing authority James Bach said publicly: “These problems with Google Buzz could have been averted using the most basic kinds of critical thinking!”
Here’s the issue in a nutshell: Google Buzz is a new social tool that lets you share your opinions and status with other users. Because it integrates with Gmail, Buzz knows to whom you send email most often. Buzz also shipped with a feature that automatically picked your “friends” — those with whom you share — out of the box. And it made your communication stream public by default.
The automatic friending is no problem for a lot of people. But what if you are in a legal dispute or a custody battle, don’t want the boss to know you’re interviewing with another company, or just have two in-laws who don’t get along very well?
Google Buzz — and, arguably, Microsoft Vista before it — actually represents a unique kind of testing challenge, one we don’t deal with much in the industry. You see, Buzz actually works. No, that is not a typo. Buzz works according to its specifications.
The problem is that the specifications themselves were wrong: objectively wrong, in that they were not in the public’s interest.
I’m certain the folks at Google tested the heck out of Buzz. They probably had test scripts that exercised every possible feature, and it all worked per spec. I bet they had metrics and greenbars all over the process.
But they had the wrong tests.
You could claim this was a product management failure. After all, the software did what it was supposed to do, but “what it was supposed to do” was the wrong thing. It is possible, even likely, that in some meeting a tester or developer asked the question, “Do we really want all this information to be public?” and got a resounding answer: “Yes!”
So it might be fairer to say that this was a systems-thinking flaw. It expands the idea of testing beyond “conformance to specification” and into another land, one of “fitness for use.”
A test for fitness for use needs to do more than “requirements traceability”; it needs to ask very tough questions about what the software does and whether it does the right things. Those are questions of quality. While I’m reluctant to blame “poor testing” for what was a product management decision, certainly more testing might have reduced the risk of a bad rollout. Even the folks at Google agree that, in hindsight, they should have run a larger, more public beta.
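To make the distinction concrete, here is a toy sketch in Python. The `BuzzProfile` class and its defaults are invented for illustration; they are not Google’s actual code or API. The point is that a conformance check and a fitness-for-use check can look at the exact same defaults and reach opposite verdicts.

```python
# Hypothetical illustration: spec-conformance vs. fitness-for-use checks.
# BuzzProfile and its fields are invented; nothing here is Google's real API.

class BuzzProfile:
    """Toy model of a brand-new user's sharing settings."""
    def __init__(self):
        # What the (flawed) spec called for:
        self.followers_public = True        # follower list visible to everyone
        self.auto_friend_from_email = True  # "friends" picked from email contacts

def conforms_to_spec(profile):
    """Passes when the product does exactly what the spec said."""
    return profile.followers_public and profile.auto_friend_from_email

def fit_for_use(profile):
    """Asks the tougher question: are these the right defaults for a
    user who never opted in to sharing anything?"""
    return not profile.followers_public and not profile.auto_friend_from_email

profile = BuzzProfile()
print(conforms_to_spec(profile))  # True  -- "it works per spec"
print(fit_for_use(profile))       # False -- and yet it fails the user
```

Every scripted test against the specification goes green, while the fitness-for-use question flags the same defaults as a problem. That gap is exactly where the Buzz rollout went wrong.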
As social media continues to become more integrated into our daily lives, and mashup platforms that combine Facebook and Twitter, etc., become more common, I expect we’ll have more testing challenges of this nature.
I wonder what will happen to the profession of software testing in the years to come. Will we bury our heads, claiming, “It worked per spec. It’s a security flaw. Talk to project management”? Or will we expand our view of testing to include critical thinking: an investigative approach that cannot be trivially scripted into clicks that “prove” the specification “works,” or easily automated away? Only time will tell.
For right now, we should all buckle up; it’s going to be quite a ride.