Ask the Expert

Testing methodologies, testing strategies and testing types

What are the differences between a testing methodology and a testing strategy? What are the most commonly used testing types, such as smoke, monkey, and sanity testing, and when are they used? I would like to understand this with the help of an example.


Great questions! It's relatively easy to answer your first question, but yes -- the second question needs some examples to get a good grip on the topic.

A testing methodology is a tool or method used to test an application. As you listed, some methodologies include monkey testing, automated UI testing, regression testing, and so forth. Some might argue that testing techniques such as pairwise-combinatorial interdependence modeling or model-based testing are also methodologies. A testing strategy, on the other hand, is a holistic view to how you will test a product -- it's the approach you will take, the tools (and methodologies) you will use to deliver the highest possible quality at the end of a project.

In soccer/football, teams use methodologies and strategies. Some strategies include double-teaming the opponent's most aggressive player, choosing a more defense-oriented lineup, or keeping the game pace slow with a team known for rapid strikes. Methodologies frequently define how these strategies are implemented. For instance, having two mid-fielders (halfbacks), with one being more defense-minded, would allow a team to double-team a world champion center mid-fielder. Continually pushing the ball to the outside and leveraging short passes in the classic triangle is a methodology for implementing a slower approach to the game.

In software quality, the test strategy consists of a myriad of methodologies, activities, and staffing solutions. The strategy overall sets the acceptable bar and calls out how the test team will achieve that bar. It is the sum of all the inputs, in an organized plan. Testing methodologies are the different approaches you will take to testing. Currently, I'm testing a new Web site (available in late summer). The site has two major components: a UI-oriented content component for external visitors and a Web services component used by a third-party provider to interact with an internal database.

My test strategy calls for implementing two entirely different methodologies for testing this project. The content-heavy site includes a fair amount of manual UI-based testing, ensuring the quality of the content as well as the flow and layout of content templates. It includes (and for future regression it relies heavily on) a suite of Selenium-based tests which validate divs on the page, page headers, and successful page rendering. A base set of page tests has already been written, which runs a series of positive and negative checks (for instance, it looks for the presence of headers and footers; it also looks for 404 responses, script errors, or server errors on each page regardless of the URL or page content). We have also written a series of manual test cases to test the business logic on forms pages. Finally, we've inherited a set of test cases for testing the search component, because it is similar to the search component on another site we own. Eventually each page will have a set of manual and automated test cases, developed in conjunction with the page development.
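To make that concrete, here is a minimal sketch of one of those page checks, written against Selenium's Python bindings. The URLs and element IDs are hypothetical stand-ins (our real suite is tied to our content templates), and since Selenium doesn't expose HTTP status codes, the 404 and server-error checks below are crude text heuristics rather than true response checks:

    # Minimal page-check sketch (hypothetical URLs and element IDs).
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    PAGES = ["http://www.example.com/", "http://www.example.com/about"]  # hypothetical

    driver = webdriver.Chrome()
    try:
        for url in PAGES:
            driver.get(url)
            body_text = driver.find_element(By.TAG_NAME, "body").text

            # Negative checks: the rendered page should not be an error page.
            for marker in ("404", "Server Error", "Script error"):
                assert marker not in body_text, f"{url}: found '{marker}'"

            # Positive checks: header and footer blocks should be present.
            assert driver.find_elements(By.ID, "header"), f"{url}: missing header"
            assert driver.find_elements(By.ID, "footer"), f"{url}: missing footer"
            print("PASS", url)
    finally:
        driver.quit()

A real suite would also capture the page source or a screenshot on failure to make triage easier.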

The Web services component, however, uses an entirely different testing methodology. We are relying on eviware's SoapUI Pro Web services testing product (http://www.eviware.com), and we are implementing a data-driven testing methodology. Currently, our first Web methods are trickling in for the service, and we are writing the plumbing for data-driven testing of each Web method. Once the plumbing is in place, we can add new tests (positive or negative) simply by filling in a row of test data including the input, the expected response, and the result. We're nearly 100% automated in our approach to testing the Web service.
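The plumbing itself lives in SoapUI Pro's data-source steps, but the pattern is simple enough to sketch outside the tool. In the Python outline below, the endpoint, payload fields, and test rows are all hypothetical (and the real service speaks SOAP rather than the bare HTTP POST shown here); the point is that once the loop exists, adding a test case means adding a row of data, nothing more:

    # Data-driven testing sketch (hypothetical endpoint and rows; the real
    # service uses SOAP envelopes driven from SoapUI Pro data sources).
    import requests

    ENDPOINT = "http://www.example.com/service/CalculateVat"  # hypothetical

    # Each row: (input payload, expected HTTP status, fragment expected in the response).
    TEST_ROWS = [
        ({"amount": "100.00", "country": "GB"}, 200, "17.50"),   # positive case
        ({"amount": "-1",     "country": "GB"}, 400, "invalid"), # negative case
    ]

    for payload, expected_status, expected_fragment in TEST_ROWS:
        response = requests.post(ENDPOINT, data=payload, timeout=10)
        passed = (response.status_code == expected_status
                  and expected_fragment in response.text)
        print("PASS" if passed else "FAIL", payload)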

Software testing resources:
  • How to define a test strategy
  • The A-B-C's of software testing models
  • Software testing deliverables: From test plans to status reports

You have also asked about a few test methodologies. Time doesn't permit covering each one, but let's take a look at the specific methodologies you bring up, and a few related ones as well.

  • Smoke Testing: Smoke testing is the practice of going over a new build at a very high level. What I tell my test teams is that smoke testing is used to make sure you can get at each major component of a project. Once a project is smoke tested, deeper testing can get started. It has its benefits in large, distributed test teams -- especially when the project is very large and takes a long time to install. For instance, building an application like Microsoft Office or Windows Vista requires smoke tests before a testing organization commits itself to cleaning a bunch of machines and installing a new build.
  • Integration Testing: This is the activity of taking a new piece of functionality and testing it deeply. It involves testing data as it passes in and out of that functionality. For instance, integration testing a VAT calculation formula would involve inputting various values and validating the output.
  • System Testing: This is the art of tying each individual functional unit together into a system and testing end-to-end. In the previous example, it might mean creating a shopping basket of various items (taxable and non-taxable), and passing that basket object to the VAT calculation engine. Once the VAT has been calculated, pass the output to a payment system and ensure the proper payment is made. It's a 'day in the life' of a piece of data.
  • Data-Driven Testing: Data-driven testing is great for testing APIs and Web services -- you input a series of rows of data and analyze the results. It's what makes testing managers and IT directors feel warm and fuzzy, because data-driven tests can be written in no time and executed even faster. Tools like FitNesse are great for DDT.
  • Monkey Testing: Monkey testing is the art of generating random tests via automation. There are smart monkeys and dumb monkeys. Dumb monkeys randomly exercise functionality in an application -- for instance, randomly clicking UI elements or randomly inserting data, rarely validating output. They're often used to expose issues like memory leaks. Smart monkeys, on the other hand, are, well, smarter! They randomly interact with the software being tested, but have state or model information which allows them to validate the results of interaction. Some smart monkeys are so smart that they actually queue up new tests based on the results of previous tests. (A minimal dumb-monkey sketch follows this list.)
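As promised above, here is a minimal "dumb monkey" sketch, again against Selenium's Python bindings with a hypothetical start URL. It randomly clicks whatever links and buttons are visible and never validates output, which is exactly what makes it dumb; a smart monkey would carry a model of expected state and assert against it after each step:

    # "Dumb monkey" sketch: random clicking with no output validation
    # (hypothetical start URL; run it under a memory/error monitor to get value).
    import random

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    START_URL = "http://www.example.com/"  # hypothetical
    STEPS = 50

    driver = webdriver.Chrome()
    driver.get(START_URL)
    try:
        for step in range(STEPS):
            candidates = [e for e in driver.find_elements(By.CSS_SELECTOR, "a, button")
                          if e.is_displayed()]
            if not candidates:
                driver.get(START_URL)   # dead end -- wander back to the start page
                continue
            try:
                random.choice(candidates).click()   # random interaction, no validation
            except Exception:
                pass                                # stale/obscured elements are expected noise
            print(f"step {step}: now at {driver.current_url}")
    finally:
        driver.quit()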
There's a wide range of literature detailing each of these methodologies, both on the Internet and in print. As you read up, you'll become more familiar with the methodologies in common use today and how to apply them. SearchSoftwareQuality.com is a great place to read case studies and articles from testing experts and gain insight into these methodologies as well.

This was first published in December 2007
