I’m in Cambridge, Mass., for the Software Test and Performance Conference this week. After Wednesday’s performance test workshop, Thursday was the first regular conference day.
U.S. Air Force Colonel Mike Mullane — now retired after also serving as a space-shuttle astronaut — gave the opening keynote. Mike’s talk was on the fundamentals of teamwork. He began by taking us through what happens on the day of a shuttle launch: suiting up, strapping in, countdown and liftoff.
After explaining the tremendous pressure and risk of a shuttle mission, Mike asked us one question: If you were on a project like that, what kind of team would you want to be working with?
The best, right?
Mike talked about these three fundamentals of teamwork:
- Normalization of deviance;
- Responsibility; and
- Courageous self-leadership
I’ll tell you about one of them, “normalization of deviance.” Yes, Mike has a way with words.
To start with, Colonel Mullane reviewed the tragedy of the Challenger accident, in which O-ring failure caused the destruction of the shuttle. O-rings are basically rubber rings; they create a seal or barrier. You’ve probably seen them on your faucet. In the case of your faucet, they keep the water from leaking, but in the case of the space shuttle, they prevent the heat of the engines from leaking out where the engines connect to the rest of the shuttle, instead forcing the heat down through the hole in the bottom.
In the Challenger accident, the O-ring failed and the heat leaked out, melting a section of the booster. This changed the pressure situation, and the rockets essentially vaporized. The failure directly resulted in the death of the entire crew, including four astronauts Mullane had trained with directly.
How did this happen? Mullane used a fancy term: normalization of deviance. It basically means taking shortcuts, often under intense schedule or budget pressure. The first time you take the shortcut, there are probably no negative consequences. This leads to what Mike called “false feedback”: the belief that because nothing bad happened, the shortcut must have been safe.
Eventually the shortcut becomes the norm.
Now, the original safeguard was put in place for a reason. Ignore it long enough, and eventually you’ll have a “predictable surprise.” In almost all cases, the predictable surprise is not good for the team. Colonel Mullane refers to the Challenger accident not as an accident, but as a predictable surprise.
Then he pulled out documents. It turns out that in 14 of the 24 missions before Challenger, the contractor who recovered the solid fuel boosters from the water inspected the boosters and found critical or urgent problems in the O-rings. Colonel Mullane read multiple letters from the contractor to NASA urging immediate action. One of those letters predicted the failure of the shuttle down to the mission number.
The problem? Mission One was a success. So was Mission Two. For that matter, NASA had promised Congress 26 missions a year, massively decreasing the cost per mission.
It’s really hard to ground a shuttle fleet when you promised the American people to put up a shuttle every two weeks.
Instead of grounding the fleet, NASA tested a damaged O-ring under pressure, and it held, so NASA granted a waiver.
This subtle change meant that something intolerable had become expected. To avoid normalization of deviance, Mike recommended that we:
- Recognize we are vulnerable;
- Yes, plan the work and work the plan, but not blindly; and
- Maintain situational awareness.
By the time Mike finished, I found myself tearing up a little. And the talk only got better.
We software testers may not be flying into space, but I submit there are still a few lessons we could stand to learn from the astronauts.