Exploratory testing is widely used and often misunderstood. Managing exploratory testing requires an overall team effort, as well as a careful selection of the best testers for in-depth discovery and publication of defects. When used correctly, exploratory testing can yield meaningful returns in the quest for the best possible software quality.

Table of contents:
- How to manage exploratory testing
- Who should perform exploratory testing
- What to leverage from exploratory testing
- Final testing tips

Exploratory testing tutorial, part 1
How to manage exploratory testing
Managing team members who engage in exploratory testing is, in some ways, no different than managing any other testing activity. You can lead by example -- by participating in the exploratory testing effort -- or you can manage by delegating responsibility.
What becomes more problematic during exploratory testing is how to manage what is being tested, how thorough the testing is, what defects are detected, and how much risk remains within the application space in terms of existing and undiscovered issues.
Agile development practices provide some insight and guidance into managing an exploratory testing effort. In both cases the activity is more adaptive than it is predictive, so any management techniques applied must be consistent and people-centric.
In terms of testing status, a periodic team standup meeting seems to work best. The scheduling or frequency of these meetings should be dependent on the velocity of the ongoing testing effort: the greater the effort, the more frequent the meetings. These standups should be supplemented by meeting minutes and, when required, formal status reports.
To track what is being tested and by whom, the starting point should be a simple functional decomposition of the system leveraged as a testing checklist. If required, this can be developed into a simple matrix that tracks testing progress against functional areas of the application space. This can become a more formal traceability matrix by including a more specific inventory of the testing intent within each functional area -- a list of test case names. The danger is that the exploratory testing effort begins to move from being adaptive to predictive and therefore less exploratory. Depending on the current needs of the testing organization, this may or may not be appropriate, but it is important to be cognizant of what is actually occurring within the testing process.
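As a sketch of the idea above, the functional decomposition can start as nothing more than a mapping from functional areas to the test case names exercised and the defects found in each. All area, test, and defect names below are hypothetical examples, not drawn from any real application:

```python
# A minimal sketch of a functional-decomposition checklist that can grow
# into a lightweight traceability matrix. Names are hypothetical.
class ExploratoryMatrix:
    """Tracks exploratory coverage and defects per functional area."""

    def __init__(self, areas):
        # Each area starts with no recorded test names and no defects.
        self.matrix = {area: {"tests": [], "defects": []} for area in areas}

    def log_test(self, area, test_name):
        self.matrix[area]["tests"].append(test_name)

    def log_defect(self, area, defect_id):
        self.matrix[area]["defects"].append(defect_id)

    def coverage_report(self):
        # Areas with zero tests are the ones still carrying unexplored risk.
        return {area: len(data["tests"]) for area, data in self.matrix.items()}


matrix = ExploratoryMatrix(["Login", "Search", "Checkout"])
matrix.log_test("Login", "Expired-password handling")
matrix.log_defect("Login", "DEF-101")
print(matrix.coverage_report())  # {'Login': 1, 'Search': 0, 'Checkout': 0}
```

Keeping the structure this thin is deliberate: the moment the "tests" lists harden into a fixed inventory of scripted cases, the effort starts drifting from adaptive toward predictive, which is exactly the shift to watch for.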
Who should perform exploratory testing
Informal exploratory testing should be performed by everyone involved in the deployment of the application, including business analysts, developers, testers and finally end users. Exploratory testing is an excellent way to learn both the strengths and weaknesses of the application space and the business needs being addressed by the application. The larger the community exploring and exercising the application space, the more likely that defects and issues will be discovered before the application reaches production.
Formal exploratory testing, which measures the production readiness of the application, should be performed by skilled context-driven testers. Exploratory testing is a creative, context-driven activity focused on finding defects and issues within the application, and then publishing all detected defects. Resources engaged in this type of formal exploratory testing need to be skilled testers; more importantly, they need to be experts at the craft of testing.
How do you identify the skilled testers within your organization who are potential candidates to lead any exploratory testing? This requires a combination of skill, experience, aptitude and passion for testing. Key characteristics of an effective exploratory tester are:
Passion for testing
- An exploratory tester must have a passion for the art and science of testing because any rigor applied during this type of testing comes from within the tester.
An inquiring mind
- An exploratory tester must think outside the box, approaching the challenge of testing from all angles (business, development, technology, governance, etc.).
A skilled test designer
- An exploratory tester must be an effective test designer, someone who systematically explores and attempts to break the application space.
A skilled observer
- An exploratory tester must notice not only obvious defects within the application space but also subtler variances in behavior that could signal a defect. They must function like an investigative reporter.
A skilled toolsmith
- An exploratory tester must be a skilled toolsmith, a tester who creates collections of tools to expedite the investigative process.
An excellent communicator
- To communicate findings, an exploratory tester must be an effective communicator. Since there is no test script, the only guidance provided to explain the nature of a detected defect and how to replicate that defect comes directly from the tester.
- When exploratory testing is used to create more structured or scripted test cases, the tester must also be able to capture the intent of the exploratory test in those artifacts.
What to leverage from exploratory testing
I stated earlier that exploratory testing is part of a testing continuum that stretches from very informal exploratory testing to formal test scripts that treat test cases as intellectual property, created, executed and maintained to a specific standard. Most testing engagements, certainly larger and more complex ones, call for a mixture of testing approaches, leveraging the strengths of each to harvest the most value from the testing effort. Let us assume we have access to all the resources required to take full advantage of each. What could the testing landscape look like?
Assuming there is little or no existing documentation on the application space and we are responsible for both functional and system testing, what are our options?
Begin by performing informal exploratory testing of the application space. During this process, capture the basic functionality of the application by formulating a functional decomposition of the application space within a test management tool or spreadsheet. This provides a definitive target to begin working against. Then verify this functional decomposition of the application space with the business, development, and production support teams, and make appropriate adjustments.
Now use the functional decomposition as a checklist to assign skilled testing resources against particular areas of the application. During their first sweep through the application space, the testers should focus on four primary goals: learning the application space, capturing test case names for future reference, detecting and publishing defects and confirming the functional decomposition of the application space.
Once the first formal exploratory testing sweep is complete, size the ongoing testing effort. Based on this information, determine what additional testing processes will be required to meet testing velocity and software quality goals. Let us assume that this will be an ongoing testing effort involving a large and complex application space under tight time constraints, which is becoming the norm within the world of software development. What additional tools and techniques should we use?
Leverage the information gained from ongoing formal exploratory testing to create an itemized list of test cases that should be captured as formal test scripts and automated using keyword-based test case design and automation techniques. The test case inventory to be formally scripted and automated should be based on two key selection criteria: application risk and time to test. Application risk is probably the more familiar concept; time to test simply refers to the resource hours required to perform the test. When using a mixture of testing approaches, this should be one of the main criteria for automation: How do I free up my skilled testers to perform more exploratory testing?
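The two selection criteria named above can be combined into a simple ranking. The scoring function, its weight, and the example test cases below are all hypothetical illustrations, not a prescribed formula:

```python
# A hedged sketch of automation-candidate selection: score each test case
# by application risk and time to test, then automate the top scorers.
# The weight and example data are hypothetical.
def automation_score(risk, hours_to_test, risk_weight=2.0):
    """Higher risk and longer manual execution time both favor automation."""
    return risk_weight * risk + hours_to_test


candidates = [
    {"name": "End-of-day settlement run", "risk": 5, "hours": 4.0},
    {"name": "Profile photo upload", "risk": 1, "hours": 0.5},
    {"name": "Payment gateway timeout", "risk": 4, "hours": 2.0},
]

# Automating the top-scoring cases frees skilled testers for more
# exploratory work, which is the goal stated above.
ranked = sorted(candidates,
                key=lambda c: automation_score(c["risk"], c["hours"]),
                reverse=True)
for c in ranked:
    print(c["name"], automation_score(c["risk"], c["hours"]))
```

The exact weighting matters less than making both criteria explicit, so the team can see why a given case was scripted rather than left to exploration.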
The key here is not to dispose of one testing technique or approach for another. The key is to select the mixture of tools, techniques, and skills that create the greatest opportunity to reduce production issues and improve the overall quality of the product. For example, I would never recommend that all testing be automated. There is always room for improved testing coverage and exploratory testing is one of the best ways to discover these opportunities.
Final testing tips
Exploratory testing is an iterative process of learning, test design, and test execution. It is a context-driven adaptive approach to software testing. When delivered by skilled testers it will detect a substantial number of defects in a short period of time, arguably more than any other testing approach, especially during the first couple of rounds of testing.
For larger, more complex testing engagements, exploratory testing is often not enough, or at least becomes less cost-efficient over an extended period of time. This is when other testing approaches and techniques can be used to leverage testing resources, automation tools, management tools and other technologies to take weight off the testing process. If testing is a continuum, then exploratory testing is the leading edge of that continuum, and a highly productive effort that helps ensure overall product quality.
David W. Johnson (DJ) is a senior test architect with more than 23 years of experience in information technology across several industries. He has played key roles in business needs analysis, software design, software development, testing, training, implementation, organizational assessments and support of business solutions. Johnson has developed specific expertise over the past 15 years on implementing "test ware," including test strategies, test planning, test automation (functional and performance) and test management solutions.