Three keys to successful ad hoc testing
What is ad hoc testing?
I personally don't like the term 'ad hoc testing' because it implies a lack of organization or purpose, but it is the term most often applied to testing without a set of test cases or a script. Ad hoc testing is the act of testing a product based on intuition, hunch, or experience. 'Exploratory testing' is another term for unscripted testing, but it implies more discipline and structure than 'ad hoc.' Whether you call it 'ad hoc testing' or 'exploratory testing,' my definition is this: an informally guided, interactive session during which the tester asks questions of an application and receives answers. There are three keys to successful ad hoc testing: taking a heuristic approach, letting experience guide your session, and being reasonable about the questions you ask.
Heuristic testing is the art of probing an application and using the answer to select the next question (and I do believe it's an art; it is where the creative aspect of software testing really comes alive). Let's assume I am testing an iPhone flash card application. I already know the basic functionality works (it displays flash cards, collects results, and displays scores). Now I'd like to understand how well the flash card display works. I start to ask questions of the application about the flash card text display. I have a hunch there will be localization issues, and there may be some wrapping issues. I begin by building a simple card in English to ensure the display is correct. Next, I put together some simple flash cards that show German/English and Spanish/English translations, only to discover the app is unable to display umlauts and accented characters. As I continue testing, probing the boundaries of what the application can do, I let the results of each test dictate where the next one goes. I am then able to report to the application team some of the boundaries and limitations of the application.
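The session above can be sketched in code. This is a minimal illustration, not the app's real display layer: the render_card function below is hypothetical, simulating an ASCII-only renderer like the one the probes discovered, and the probe list is ordered simple-to-hard so each result can guide the next question.

```python
def render_card(text: str) -> str:
    """Hypothetical display layer: silently drops any character it
    cannot encode, simulating the umlaut/accent defect found above."""
    return text.encode("ascii", errors="ignore").decode("ascii")

# Probes ordered from simple to harder, as in the session described
# above: plain English first, then accented characters and umlauts.
probes = [
    ("plain English", "cat"),
    ("Spanish accents", "corazón"),
    ("German umlauts", "schön"),
]

for label, text in probes:
    shown = render_card(text)
    if shown == text:
        print(f"{label}: OK")
    else:
        print(f"{label}: DEFECT - expected {text!r}, displayed {shown!r}")
```

Each failing probe narrows the hunch: once the Spanish card drops its accent, the natural next question is whether umlauts, and then any non-ASCII text, fail the same way.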
Experience is the key to good ad hoc testing. Some experience is well-known and any tester can read about it: boundary conditions, equivalence partitioning, etc. A lot of experience has to be learned, however. As a tester becomes more familiar with a given technology (say JSP and MySQL), they begin to understand the limitations of applications built on that technology, and they can then ask questions keyed to those limitations. Or as a tester becomes familiar with the application by working on it version after version, they start to know which areas are stable and which are notoriously 'brittle' (prone to defects). Also, as a tester works with the same team of developers, they become familiar with which developers write bug-riddled code. All of this information is used to focus ad hoc testing.
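The well-known techniques mentioned above can be made concrete. In this sketch the rule under test is hypothetical (assume the flash card app documents a valid deck as holding 1 to 100 cards); the point is how boundary values and equivalence partitions pick a small set of high-yield inputs.

```python
# Hypothetical documented rule: a valid deck holds 1 to 100 cards.
MIN_CARDS, MAX_CARDS = 1, 100

def accepts_deck_size(n: int) -> bool:
    """Hypothetical validation rule under test."""
    return MIN_CARDS <= n <= MAX_CARDS

# One representative per equivalence class (too small, valid, too
# large) plus the values at and adjacent to each boundary, where
# off-by-one defects tend to live.
cases = {
    -5: False,   # representative of the invalid-low class
    0: False,    # just below the lower boundary
    1: True,     # lower boundary
    2: True,     # just above the lower boundary
    50: True,    # representative of the valid class
    99: True,    # just below the upper boundary
    100: True,   # upper boundary
    101: False,  # just above the upper boundary
}

for n, expected in cases.items():
    assert accepts_deck_size(n) == expected, f"deck size {n} misclassified"
print("all boundary and partition checks passed")
```

Eight inputs cover every partition and boundary; testing every value from 1 to 100 would add effort without adding information.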
A final point about ad hoc testing is to be reasonable in your approach. Testers can dream up, well, some pretty outrageous test cases. Looking back at my iPhone flash card application example, it's obvious that a flash card has limited display space. Posting a defect because the first chapter of War and Peace can't be displayed is unreasonable. Complaining that a Web application can't handle 25,000 concurrent logins may also be unreasonable, depending on the topology and business requirements. Keeping the tests scoped to reasonable use and entering defects that reflect what end users might expect is the best approach. Too many unreasonable complaints about an application may result in management declaring an end to ad hoc testing, which in effect would end some of the best fit-and-finish work you can do for a project.
This was first published in September 2010