Conferences are an opportune time to learn, network and just have fun. As inspired as I was by the many talks at Agile Testing Days about how we learn and what makes our teams jell, I also took away good ideas from the practical, technical sessions. In this second part of a two-part series, you’ll find out more of the many insights that were shared at the conference.
Mike Scott explained how his team evolved the Selenium Page Objects concept to provide automated tests that were easier to maintain and understand. In addition, the tests are flexible; they are able to continue even when an unexpected result occurs.
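The talk didn't include code, but the core idea of the Page Object pattern can be sketched briefly. Everything here is illustrative, not Mike's actual implementation: `SurveyPage`, `ResultsPage`, and the element IDs are hypothetical, and any Selenium-style driver exposing `find_element` could be plugged in.

```python
class SurveyPage:
    """Wraps one page of a (hypothetical) patient survey behind an
    intent-revealing API, so tests read as user actions, not locators."""

    def __init__(self, driver):
        self.driver = driver

    def answer_smoking_question(self, answer):
        # Locator details live here, not in the tests: if the markup
        # changes, only this class needs editing, not every test.
        self.driver.find_element("id", "smoking-" + answer).click()
        return self  # returning self lets tests chain steps fluently

    def submit(self):
        self.driver.find_element("id", "submit").click()
        # Navigation returns the next page object.
        return ResultsPage(self.driver)


class ResultsPage:
    def __init__(self, driver):
        self.driver = driver

    def risk_summary(self):
        return self.driver.find_element("id", "risk-summary").text
```

A test then reads as a small story: `SurveyPage(driver).answer_smoking_question("no").submit().risk_summary()`. Centralizing locators like this is what makes a large GUI-level suite maintainable.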
Mike’s team developed a site that surveys patients to evaluate their potential health risks. They used detailed personas to help flesh out requirements and tests. The persona named “Boozy Bev” was single despite her best efforts, and quit smoking when she got pregnant. “Perfect Peter” ate right, exercised and was destined to live to be 120. Thinking about the application from these different perspectives helps ensure you accommodate unanticipated user scenarios.
The test automation efforts Mike described were especially impressive given that his team got good automation coverage of a legacy product via the GUI layer in about a month. Obtaining a safety net for legacy code so quickly is a huge win.
A balanced test strategy
Another practical session I attended was Anko Tijman’s presentation on a balanced test strategy. Anko pointed out that we need not only potentially shippable software, but maintainable software. He has found that risk is best mitigated by covering five different areas: test cases with user stories, unit and integration tests, non-functional tests, exploratory testing and customer acceptance.
Anko drove home his points with a couple of great videos of “potentially shippable products” (see his Prezi slides for more). His presentation focused on the purpose of the balanced test strategy, reflected in the Agile Testing Quadrants: there are many different reasons to test, and different purposes of tests. We need to keep this in mind so we don’t get too wrapped up in, say, functional testing, at the expense of other critical types of testing.
Gojko Adzic’s talk on “Five Key Challenges for Agile Testers Tomorrow” took us back to the idea of visualization that earlier speakers discussed. His Agile Testing Donut provided an excellent example of visualization – if you had two seconds to eat, where would YOU take a bite? Gojko started visualizingquality.org to help our community find good ways to represent quality and risk and guide our testing and coding approach.
Each company has to decide what level of risk is acceptable to them. If you deploy multiple times a day, you need new testing strategies. For example, you could test only the areas which changed, implement strong separation of services, or “test in production,” releasing to only a small part of the user base and responding quickly to errors. We can deliver business value faster, but only if we accept the risks that go with speed.
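One common way to "release to only a small part of the user base" is deterministic bucketing: hash each user ID into a stable bucket so the same users stay in the canary as the rollout percentage grows. This is a minimal sketch of that general technique, not anything Gojko prescribed; the function name and percentage scheme are my own.

```python
import hashlib

def in_canary(user_id: str, rollout_percent: int) -> bool:
    """Return True if this user falls in the canary slice.

    Hashing gives each user a stable bucket from 0-99, so a user who
    sees the new code at 5% still sees it at 10%, and the slice grows
    smoothly as rollout_percent is raised toward 100.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent
```

If errors spike, dropping `rollout_percent` back to 0 reverts everyone instantly, which is what makes "testing in production" a calculated risk rather than a gamble.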
Gojko also touched on learning. One of his most quotable quotes was something like this: “Give a person a script, he can test for a day. Give him a record/playback tool, you screwed him for life.” Testers can help programmers learn how to do exploratory testing. Teams have time for all this learning if the programmers are writing maintainable automated test code along with test-driven production code, a point Gojko made in his STAREast keynote last spring.
Gojko was voted the Most Influential Agile Testing Professional Person in an award program sponsored by Agile Testing Days. The award recognizes his leadership in finding ways to visualize quality and risk, and in showing practical ways the whole team can deliver high-quality software.
As at most conferences, the most valuable discussions occurred during the excellent long lunch breaks and evening social events. Additionally, there was an entire “collaboration day” with Open Space, facilitated by Brett Schuchert, and the Test Lab, produced by James Lyndsay and Bart Knaack.
The Test Lab actually started up a day earlier than advertised, in the Chill Out area. I learn a lot by watching other people test, though it’s also fun to bug hunt myself. Later in the day, James and Bart set up all the Test Lab equipment in an official room, which was soon occupied by an unofficial event. The Potsdam Agile Testers Session was organized by Jean-Paul Varwijk and Huib Schoots, who arranged to rent the room for the evening. About twenty of us went in together for pizza (and were treated to beer by Jim Holmes of Telerik) to fuel our own discussions on testing. One participant, Rob Lambert, took all that visualization talk to heart, and mind mapped our discussion points.
I followed up with my own mind map of what I’ve been thinking about since attending the conference:
Making tacit knowledge explicit
Michael Bolton started off his keynote with some memorable one-liners, such as, “When a manager asks you to show him your test case, ask to see his management cases”, and “When a manager says you can lower the cost of testing with test automation, ask him if he can lower the cost of management with management automation.”
A more serious point Michael made was that the first problem a skilled tester solves is her role on the team. Earlier sessions made me think that this may be a problem for the whole team to solve.
Michael differentiated tacit knowledge from explicit knowledge. Some tacit knowledge is relational – it hasn’t been described, but could be. Another type of tacit knowledge is somatic, knowledge embodied within a human body, like riding a bike. We can program a machine to ride a bike in a lab, which makes the knowledge explicit. But we can’t yet program a machine that can ride a bike on busy city streets. The problems encountered aren’t only physical, they’re social. This tacit knowledge that resides in social and cultural groups is called collective knowledge.
We need to make tacit knowledge explicit in order to teach exploratory testing. It can’t merely be described; it must be practiced. Gojko urged testers to teach programmers exploratory testing, but Michael’s talk reinforced my thought that this isn’t an easy task. In my experience, pairing and testing dojos are good ways for people to learn exploratory testing by doing it alongside more skilled practitioners.
Michael listed many skills that testers need. Linda Rising observed that he was describing patterns, and we should write patterns for exploratory testing. Markus Gaertner reminded me via Twitter that the Agile Alliance Functional Test Tools committee has been working on testing patterns for more than a year, and I hope we will continue this effort.