I think your question is a common one, and it applies to a broad audience, since many teams don't have formal designers. Providing usability feedback as a tester is difficult: not only are you not seen as an expert, but you're typically doing it at the end of a project, when timelines and budgets are tight and people just aren't interested in things like "ease of use, friendliness, and effectiveness."
In the past, I've used the HICCUPP heuristic to help with simple usability testing. For the teams I work with, it can be enough to get the most obvious usability errors fixed. However, it doesn't always give me the firepower I need to advocate for a bug as effectively as I'd like. So while I really enjoy usability testing, I wanted to take your question to someone who specializes in usability.
For that reason, I sat down with usability expert Tim Altom and discussed your question with him. Tim is a former usability consultant who currently works for Coremetrics as a Web analytics consultant. He also teaches Human-Computer Interaction for Indiana University-Purdue University at Indianapolis and holds a master's in HCI from Indiana University.
"Today, most producers of high technology are far more concerned about whether the product works consistently than whether it works usably, so many projects do not have the benefit of professional usability specialists, relying for usability instead on the dogged professionalism of testers who are willing to engage in some kind of usability evaluation in addition to their usual duties."
"Although this dedication is praiseworthy, it has its problems. One difficulty is that few testers have any idea how to evaluate a product for usability. In the absence of any other standard, they rely on industry urban legends ('Users won't click more than three times') or their own subjective preferences ('I wouldn't like to do this, so I think the user won't, either'). Neither is a good guide."
Tim pointed out that one could think of usability evaluations along a continuum. At one end, you have the "gold standard" of usability: actual user testing. At the other end, you have simple heuristic evaluation by someone on the project team. While user testing is more effective, it's also more expensive. As Tim pointed out when we spoke, "As with so much else in life, you get what you pay for."
"For convenience, the various methods can be broken down into testing techniques and inspection techniques. Testing requires time, money, and extensive planning and is consequently not likely to be applied by functional testers. Inspection methods don't require as much time, money, or preparation, and are accordingly very popular even though they're not as effective as testing."
When asked what techniques a functional tester on a project might use, Tim offered up a couple of alternatives. The first was a cognitive walkthrough, which can be done using a strong description of the end user and a few use-case-style task scenarios. "The interface is inspected slowly and deliberately," said Tim. "During inspection, the tester answers four questions at each action."
Those questions include:
- Will the user try to achieve the right effect?
- Will the user notice that the correct action is available?
- Will the user associate the correct action with the effect to be achieved?
- If the correct action is performed, will the user see that progress is being made toward solution of the task?
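If you want to keep a record as you walk through each task, the four-question loop above lends itself to a very simple script. This is a hypothetical sketch, not a standard tool; the task names, data layout, and function names are all illustrative.

```python
# Sketch: record cognitive-walkthrough answers per action and flag
# any "no" answers as potential usability problems. Illustrative only.

QUESTIONS = [
    "Will the user try to achieve the right effect?",
    "Will the user notice that the correct action is available?",
    "Will the user associate the correct action with the effect to be achieved?",
    "If the correct action is performed, will the user see progress toward the task?",
]

def walkthrough(task, actions):
    """actions: list of (action_description, [yes/no answer per question])."""
    problems = []
    for action, answers in actions:
        for question, ok in zip(QUESTIONS, answers):
            if not ok:
                problems.append((task, action, question))
    return problems

# Example: the second step fails question 2 (the action isn't noticeable).
issues = walkthrough(
    "Export a report to PDF",
    [
        ("Open the report", [True, True, True, True]),
        ("Find the export menu", [True, False, True, True]),
    ],
)
for task, action, question in issues:
    print(f"{task} / {action}: NO -> {question}")
```

The output of a run like this is essentially a pre-written list of usability bug reports, each tied to a specific task step and a specific question, which makes the findings easier to defend later.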
While a cognitive walkthrough can be quite thorough, it also takes a lot of time. For testers who might be working late in a project, that's not ideal. Another alternative Tim walked me through was using heuristic analysis.
"There are several publicly available sets of heuristics that can be readily applied to interfaces. It's necessary only to memorize them so that as testing progresses they can be automatically applied. They're just one step above the urban-legend lists. One big drawback is that different testers will interpret them differently and apply them differently, but they're arguably better than nothing. Heuristics may catch many of the 'forehead-slappers,' but they're inferior to testing if you want to catch the majority of flaws."
At the start of this answer, I shared a set of heuristics I commonly use for usability. If you're using heuristics, see if you can get agreement on them with the rest of the team. That will make bug advocacy easier because then everyone's using the same usability playbook.
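One lightweight way to make that shared playbook concrete is to tag each usability finding with the heuristic it violates, so every bug report cites the same agreed list. The sketch below assumes you've settled on a few of Nielsen's publicly available heuristics; the finding text and function names are illustrative.

```python
# Sketch: tag usability findings with an agreed heuristic so all bug
# reports reference the same shared list. Illustrative, not a real tool.

# A few of Nielsen's publicly available usability heuristics (abbreviated).
HEURISTICS = {
    "visibility": "Visibility of system status",
    "consistency": "Consistency and standards",
    "error-prevention": "Error prevention",
    "recognition": "Recognition rather than recall",
}

def tag_finding(description, heuristic_key):
    """Return a bug-report line that cites the agreed heuristic by name."""
    if heuristic_key not in HEURISTICS:
        raise ValueError(f"Unknown heuristic: {heuristic_key}")
    return f"[{HEURISTICS[heuristic_key]}] {description}"

print(tag_finding("Save gives no confirmation that the file was written",
                  "visibility"))
```

Citing the heuristic by its agreed name shifts the argument from "I wouldn't like this" to "this violates a standard we all signed off on," which is exactly the advocacy advantage described above.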
Tim also provided a last resort for usability testing, which was to invoke the simple philosophy enunciated by Steve Krug in his bestselling book on usability, Don't Make Me Think. "If anything in the interface will make a user pause and have to think, it should be flagged for possible change," Tim said. "Contrary to popular belief, nobody really wants to think while working with a tool. Thinking is hard work in itself, and should be reserved for the job, not for the tools."