Ask the Expert

How to thoroughly test a website without automated tools

How can we make our website perfect without using automated tools? My company uses only manual testing. Is it possible to make our website reliable and secure with manual testing alone?


Hello. You have a good challenge to tackle, but it might be a more difficult challenge to have a tool and not have sufficient training or staff to use it. You might consider starting with a subtle but important mind shift. I don't know of any perfect websites; I encounter issues even on high-volume production websites that I frequent. Using the word "perfect" can stifle the way you think and affect your approach as you try to accomplish work that feels perpetually out of reach. I suggest replacing the word "perfect" with the phrase "a highly functioning, good website" so that the work you're trying to accomplish is achievable.

Manual testing doesn't mean you can't cover significant functionality. I don't know how large the website you're testing is or how large your team is, but I've been the sole tester for a couple of websites of considerable size and found ways to make the workload manageable. A website can feel huge to test because of the sheer volume of pages displaying products or content, but when you assess the functionality of those pages, there are often far fewer pages with unique functionality than it first appears.

Here are a few tactical approaches to consider as you design your testing efforts -- intentionally in no particular order, as you'll need to think through each one and then revisit how these tactics work together.

Risk analysis
Web analytics
Priority levels
History
Exploratory testing
Issue tracking

A risk analysis is a great step in determining what's important. I've found that a risk analysis, when shared with stakeholders, can bring tough topics out into discussion as we hash through what counts as a risk and what ranking we might assign to each one. Sometimes the risks are obvious, and sometimes I've uncovered surprises just by hosting the conversation about risk.

Explain to project stakeholders that not all testing will be addressed, and that testing will follow the results of the risk analysis. Both of these facts are important, and as obvious as they might sound to a tester, they aren't always obvious to project stakeholders. Risk analysis drives priority in testing. Sometimes people -- upper management and stakeholders especially -- will falsely believe all testing will somehow be accomplished. It is best to communicate that testing will likely never be done, but will be executed in order of priority, and that a risk analysis is not just a process but an important way to determine that priority order.
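
If you want to make the outcome of that conversation concrete, the risk analysis can live in a simple spreadsheet or script. Below is a minimal sketch in Python; the area names and the 1-to-5 likelihood and impact scales are illustrative assumptions, and score = likelihood x impact is just one common convention.

    # A minimal risk-ranking sketch. Areas and ratings are hypothetical
    # examples; adjust the scales to whatever your stakeholders agree on.
    risks = [
        {"area": "Shopping cart / checkout", "likelihood": 4, "impact": 5},
        {"area": "Credit card processing",   "likelihood": 3, "impact": 5},
        {"area": "Product search",           "likelihood": 3, "impact": 3},
        {"area": "Static content pages",     "likelihood": 2, "impact": 1},
    ]

    for r in risks:
        r["score"] = r["likelihood"] * r["impact"]

    # Highest score first -- this ordering is what drives testing priority.
    for r in sorted(risks, key=lambda r: r["score"], reverse=True):
        print(f'{r["score"]:>3}  {r["area"]}')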

Web analytics. What do customers actually do on the site? As much as people may find Web analytics dull, I don't. Web server logs can give you lots of information about what takes place on the website you're testing: common navigation paths, most frequently purchased items, typical session length, number of pages viewed, and browser and OS usage can all be learned from the logs. I design testing around usage, and the logs can give you those directional signals.
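
If you have access to the raw server logs, even a short script can pull out those directional signals. Here's a rough sketch in Python; it assumes the common "combined" access-log format, and the file name is a placeholder for whatever your server actually writes.

    # Tally the most-requested pages and most common browsers from an
    # access log in the "combined" format (an assumption about your setup).
    from collections import Counter
    import re

    line_re = re.compile(
        r'"(?:GET|POST) (?P<path>\S+) HTTP/[^"]*" \d{3} \S+ "[^"]*" "(?P<agent>[^"]*)"'
    )

    paths, agents = Counter(), Counter()
    with open("access.log") as log:
        for line in log:
            m = line_re.search(line)
            if m:
                paths[m.group("path")] += 1
                agents[m.group("agent")] += 1

    print("Most requested pages:", paths.most_common(10))
    print("Most common browsers:", agents.most_common(5))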

These two tactics -- assessing risk and knowing what actually happens on the website -- can help guide what testing needs to be covered. Once the workload grows larger than you'll ever have time to address, it's time to plan for reality. Several times now I've designed testing in priority bands or levels: what is essential to test for every release, even a small release where I've been assured no code change could have impacted X, where X is essential functionality or functionality assessed as high risk. For instance, on an e-commerce site, the shopping cart is always at the top of risk and priority. Along with cart processing, credit card processing and protecting customer data typically fall into that same high-priority band/level. Identify the areas that must be tested for each release and learn the minimum amount of time and staff it takes to plow through that workload. I also like to keep project stakeholders aware of what that bare-minimum effort looks like for a release.

The number of priority levels or bands that you define may vary. I like to keep things simple and work with one to four levels (even fewer if I can). Level one is a must; level two is important but not critical; level three is good to cover; and level four is "everything." When a release involves a patch in a certain area of code, that affected area becomes its own high priority.
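
One lightweight way to keep those bands honest is to tag each manual test or charter with its level and let a small script pick the run list for a given release. This is only a sketch in Python -- the test names, the areas, and the rule for promoting anything in a patched area are illustrative assumptions about how you might organize it.

    # Select manual tests by priority band. Level 1 always runs; anything
    # touching an area changed in this release is promoted regardless of band.
    tests = [
        ("Checkout with valid credit card",   1, "cart"),
        ("Protecting stored customer data",   1, "accounts"),
        ("Search returns relevant products",  2, "search"),
        ("Wish list add/remove",              3, "accounts"),
        ("Footer links resolve",              4, "content"),
    ]

    changed_areas = {"search"}   # areas patched in this release
    max_band = 2                 # how deep the schedule allows this time

    for name, band, area in tests:
        if band == 1 or band <= max_band or area in changed_areas:
            print(f"RUN   [{band}] {name}")
        else:
            print(f"SKIP  [{band}] {name}")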

History. I like to review the defects found and look for similar issues in subsequent releases. People often make similar mistakes, and bugs found in one area will likely appear in other areas. I shift testing to cover conditions I suspect based on my experience with the product and the developers. Once some areas of code have "tightened up," I shift my test ideas again. This is one reason exploratory testing works so well, and a core reason I don't believe in writing test scripts: the product changes, and the bugs move with it. Here's an example based on experience -- credit card processing of expired cards. I once found multiple bugs in this area, so it remained high on my list for several releases. Once the issues had been cleared, I still executed a test or two because it was a high-risk area, but I downshifted how much time I spent there and moved on to something else. Historical product knowledge can be a great help in planning testing.
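
Defect history is easier to act on when you can see where the bugs cluster. Here's a small sketch in Python that tallies past defects by area from a tracker export; the CSV file name and the "area" column are assumptions about what your tracker can produce.

    # Count past defects per functional area from a hypothetical CSV export.
    import csv
    from collections import Counter

    by_area = Counter()
    with open("defects.csv", newline="") as f:
        for row in csv.DictReader(f):
            by_area[row["area"]] += 1

    # Areas with the most past defects are good candidates for extra
    # exploratory charters on the next release.
    for area, count in by_area.most_common():
        print(f"{count:>3}  {area}")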

Issue tracking. As your website continues to roll out releases into production, keep a healthy list of what you're not accomplishing -- which is often what we don't enjoy talking about. It is best to discuss where you feel risks may not be addressed. Share this list with your stakeholders so that they're not lulled into thinking you've got it all figured out. I also like to keep in touch with customer support to know what issues have been found in production; you might alter your testing based on that feedback. You might also be able to identify for your stakeholders the areas that could be covered with more time, staff or a tool.
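
The list itself can be as simple as a few lines you regenerate each release. A minimal sketch in Python follows; the gaps, reasons and release number shown are made-up examples.

    # Keep the known coverage gaps visible to stakeholders each release.
    not_covered = [
        ("Gift card redemption",          "no test data in the environment"),
        ("Load behavior at peak traffic", "needs a tool we don't have"),
        ("Safari on older OS versions",   "no test hardware available"),
    ]

    print("Release 3.2 -- known coverage gaps:")
    for area, reason in not_covered:
        print(f"  - {area} (reason: {reason})")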


This issue list will be the list that helps you grow your team and advocate for potential automated tool purchases. It isn't an excuse list or a list to hide behind; it's a reality list. I've never met a product for which I was able to test every condition I could think of, but I have worked with many products and websites that worked well under production use. What doesn't get tested is a potential business risk, and those risks should be discussed in ongoing dialogue.

If you feel you have more ideas about what to test than there are hours in the day, then you're likely viewing the website with a good, inquisitive eye and some healthy testing ideas. Bringing that large, looming cloud of endless testing down to something manageable is the next step.

This was first published in March 2008
