
CAST 2015 ends with great debate

Matt Heusser offers a wrap-up of the CAST 2015 software testing conference, with a summary of several compelling keynote presentations.

After an intense week at the Association for Software Testing's CAST 2015 conference in August, it's hard to know where to begin. Let's start with the software testing conference keynotes.

With a theme like "moving testing forward," I expected Karen Johnson, director of mobile quality at Chicago-based Orbitz LLC, to present strong, compelling visions for what the world might be in the future. Boy, did I call that one wrong.

She began her presentation with her personal 30-year journey into testing (Figure 1), and then gave advice to the audience about next steps and what the future of software testing might look like for them. Here are a few of my favorite quotes:

On quitting: "I tried performance testing. I didn't have a strong math background. I wasn't good at it. Sometimes I transpose numbers … I gave it up. Not good at something? Give it up. Figure out what you are good at. Do that."

Karen Johnson, director of mobile quality at Orbitz

On bad bosses: "Try really hard not to work for people you don't respect. If you don't respect the person you are working for, the relationship will likely go sideways. They likely don't respect you either; they won't advocate for you when you aren't in the room and don't know that you need support."

On pressure: "If you come in the field and you can't handle stress, and you aren't an adrenaline junkie, and you don't learn how to handle stress -- don't stay."

On work-life balance: "Balancing and juggling never ends. Remember that when dealing with someone else who isn't [pulling their weight] -- the other person may be balancing and juggling, too."

Ajay Balamurugadas, staff software engineer at MaaS360 by Fiberlink, an IBM company

Ajay Balamurugadas, staff software engineer at MaaS360 by Fiberlink, an IBM company, gave a keynote at the software testing conference the day after Johnson. His main point was that we create our own future. Frankly, he spent most of his time talking about learning opportunities, everything from test competitions and books on thinking and reasoning to Software Testing Club -- the software test chat room that I help administer -- and even Facebook games that teach critical thinking and investigation. My personal takeaway was that Ajay's only promise about the future was that you could achieve your dreams if you put in hard work and developed skills.

I first met Balamurugadas in 2009, when he won a needs-based scholarship of a couple of hundred dollars to purchase a membership in an online organization; I was pleased to fund it. Six years later, at the age of 30, he was delivering an international keynote on software testing. Hard work and skills development seem to be working for him.

The great debate

Probably the most on-theme talk I experienced at the software testing conference was the debate between Jeff Morgan, co-founder and CTO of LeanDog, a Cleveland-based Agile consulting and studio company, and Henrik Andersson, CEO of House of Test, Europe's premier context-driven test provider, based in Sweden. Each speaker presented a different vision of the future. Andersson argued that testing is a skill in its own right, and that time spent developing other complex skills would limit the amount of testing done and weaken the tester. Morgan argued the opposite: that testers need to code to stay relevant. Working from a list of 16 types of testing, Morgan suggested that only three required no coding skill, and argued that specialties on a delivery team create bottlenecks and inhibit flow. Andersson agreed that slow feedback was bad, but suggested that writing code to automate software testing actually slows the feedback cycle from developer to tester.

Left to right: Erik Davis, moderator (standing); Henrik Andersson and Jeff Morgan (seated)

Instead of voting on who "won" the debate, participants had a different challenge: to assess where they needed to move -- toward being more technical or less -- and how to move their own organizations forward. If you'd like to dig deeper, consider Perze Ababa's mind map of the debate.

Hallway meetings

Some of the most valuable conversations I had at the software testing conference were in the hallway. Steve Savik, Justin Rohrman and I talked about the problems with counting test cases, which are well known. I was struggling to explain why, and working through a broken metaphor -- that counting test cases is a bit like counting the bills on a table. If the dollar amounts vary, counting the bills isn't what you need; you need to count the dollar amounts to determine the value.

That means either making each test case take the same amount of time -- good luck with that -- or applying a sizing factor, making each planned half-hour of testing a point and using past performance to predict future behavior. The consensus was that this approach was "less wrong" than simply counting test cases, but that the whole line of thinking had problems, and that there were other methods to measure and predict coverage, such as Session-Based Test Management. To put it into a tweet, we suggested, "Stop trying to do the wrong thing right-er," and it seemed to stick.
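The arithmetic behind the bills-on-a-table metaphor can be sketched in a few lines. This is purely illustrative -- the test names, durations and the 30-minutes-per-point sizing factor are all hypothetical, not anything proposed at the conference:

```python
# Illustrative sketch: raw test-case counts vs. duration-weighted points.
# All test names and durations below are made up for the example.

def raw_count(test_cases):
    """Counting the bills: every test case counts as 1, regardless of size."""
    return len(test_cases)

def weighted_points(test_cases, minutes_per_point=30):
    """Counting the dollars: size each case by its planned duration,
    where one point equals one planned half-hour of testing."""
    return sum(t["planned_minutes"] / minutes_per_point for t in test_cases)

tests = [
    {"name": "smoke-test login", "planned_minutes": 15},
    {"name": "full checkout flow", "planned_minutes": 90},
    {"name": "search filters", "planned_minutes": 30},
]

print(raw_count(tests))        # 3 cases -- hides the size differences
print(weighted_points(tests))  # 4.5 points -- 15/30 + 90/30 + 30/30
```

The raw count treats a 15-minute smoke test and a 90-minute checkout scenario as equal, which is exactly the distortion the metaphor warns about; the weighted sum is "less wrong," though it still says nothing about what the testing actually covered.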


On the morning of the last day, we ran lean coffee, an agenda-free meeting format, with a focus on testing. Most of our discussion centered on test education: What are the gaps in the existing test education products, and what should we do next? Albert Gareev, a tester from Canada, described a game he called "BeDazzled," in which one group of testers tries to write requirements so specific they cannot be misunderstood -- then a second team tries to design a product that fulfills every requirement yet is not fit for purpose. The example Albert gave was a pair of scissors, rendered in 24-carat gold and encrusted with jewels; apparently, the requirements team had failed to mention price. The simulation teaches several things at once, including the limits of precision.
