At the Conference of the Association for Software Testing (CAST 2016), opening keynote speaker Nicholas Carr took the stage to talk about the effects of automation on people and processes.
Carr, an internationally known and controversial technology writer and speaker, is probably best known for his skepticism concerning technology. His early book, Does IT Matter?, claimed that IT couldn't be a competitive advantage for any organization, because everyone had access to the same systems and software. Later, in "Is Google Making Us Stupid?" he argued that hypertext is responsible for the fragmentation of information: we don't learn anything in depth, because we are constantly jumping from one idea to another.
But as the CAST 2016 keynote speaker, he presented a balanced and compelling take on the strengths and limits of automation. There is nothing inherently wrong with automation, he argued, but what we automate, and how users respond to automation, can lead to serious problems with learning and engagement.
Carr drew upon several real-life examples of ways that automation changes how people approach a problem. In particular, he cited the Air France passenger flight from Rio de Janeiro, Brazil to Paris, France, which crashed into the middle of the Atlantic Ocean in June 2009, and whose main debris and black boxes weren't recovered for two years.
The aircraft had entered an area of severe turbulence and icing, which caused errors in the airspeed readings and disengaged the autopilot. The pilots didn't understand the aircraft's situation, and they forced it into a deadly stall by attempting to climb when they needed to hold level or lose altitude.
It turned out that the pilots lacked the experience and skill to effectively fly the aircraft manually. Once automation was no longer available to them, they became little more than novices, and made the wrong tactical decisions. Carr produced a number of similar examples where the operators lacked enough practical knowledge to know whether or not the automation was functioning correctly.
People are most engaged when they participate
He pointed out that automation's influence on human behavior depends on how tasks are automated. When people remain actively involved and engaged in a task, they learn that task and use automation as an assistant rather than a crutch.
Automation is effective only when people have mastered a skill to begin with. Schoolchildren today use calculators, but only after they have learned the fundamentals of arithmetic. Without that mastery, there is no foundation on which to build and extend their skills. We automate routine tasks after we have mastered them, so we can move on to more challenging work; but we remain masters of the original tasks.
Carr calls this "human-centered automation." Rather than blindly looking around for tasks to automate, we should ensure that the human user or operator remains firmly in the loop. And we should automate only those tasks that the user understands and can perform manually, if necessary.
In closing at CAST 2016, Carr strongly advocated against automating friction out of our lives. Our automation efforts, he said, tend to identify difficult tasks, those that cause "friction," and automate them away. But friction, a moderate level of physical or intellectual effort, is necessary to become engaged in a task and eventually become an expert at it.
Whatever you may think of Carr's controversial ideas, in this talk he was both reasonable and convincing. He used the phrase "situational awareness" on a number of occasions, claiming that without a certain level of engagement in a task, users or operators lack the knowledge to say whether the task is being done correctly. They simply assume that it is, and occasionally that assumption is wrong.
Automation, of course, is both the savior and curse of testing. On the one hand, it promises to save time and accelerate testing. On the other, it threatens to eliminate jobs and make the rest of testing dull and repetitive. At CAST 2016, Carr argued convincingly for automation of tasks where the human had first become the expert. Without that, it is impossible to tell if the automated systems are doing the task correctly.