
Is manual testing still necessary?

Even when testing is automated from check-in to build, there is still a need for manual testing.

With automation from check-in to build, is there still a need for manual testing?

Yes, although testing can be automated from check-in to build, there is absolutely a need for manual testing. Unit tests should almost always be automated; that is why test-driven development is so effective. And although automation is extremely efficient and effective for regression testing and keeping the testing debt low in Agile methodologies, there are two areas where manual testing is key.

The first is when a product is new and still undergoing significant change. When an application is in its early releases, or is constantly being enhanced and updated, the cost of keeping the automated test suite up to date may outweigh its benefits. An application should be stable before a significant investment is made in an automated regression test suite.

The second area is usability and human experience. There is no effective way to automate usability testing or testing of the human experience. Usability and human experience testing requires the tester to look at the overall picture: Will the user enjoy the experience of using this application?

For example, I once tested an e-order annuity application in which the transaction had to be completed within six months of a spouse's death. The date of death was collected several screens into the application, which meant an agent or customer service representative would have entered more than half of the data needed for the transaction only to find that the surviving spouse was not eligible for the annuity. Automated testing most likely would not have found that bug.
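Note that the eligibility rule itself, as opposed to where the date-of-death prompt sits in the workflow, is exactly the kind of check that automates well. Here is a minimal sketch in Python's `unittest`, with hypothetical names and a six-month window taken as 183 days (both illustrative, not details from the original application):

```python
import unittest
from datetime import date

# Hypothetical version of the annuity rule: the transaction must be
# completed within six months (taken here as 183 days) of the death.
def within_eligibility_window(death_date: date, txn_date: date,
                              max_days: int = 183) -> bool:
    elapsed = (txn_date - death_date).days
    return 0 <= elapsed <= max_days

class TestEligibilityWindow(unittest.TestCase):
    def test_inside_window(self):
        self.assertTrue(
            within_eligibility_window(date(2020, 1, 1), date(2020, 3, 1)))

    def test_outside_window(self):
        self.assertFalse(
            within_eligibility_window(date(2020, 1, 1), date(2020, 8, 1)))

    def test_transaction_before_death_is_invalid(self):
        self.assertFalse(
            within_eligibility_window(date(2020, 1, 1), date(2019, 12, 31)))
```

Run with `python -m unittest`. A suite like this verifies the rule on every build, but no automated check here would flag that the date-of-death prompt appears too late in the workflow; that judgment stays with a human tester.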

Testing wearables and mobile devices also requires manual testing because field testing is required. Mobile devices must be tested anywhere and everywhere the user will use them. This is even more important with wearables, since they must be tested with users of different demographics.

In conclusion, although we can automate most of our testing, there are some defects that will only be found through manual testing, and therefore, it will always have a place.

Next Steps

Use cases for manual vs. automated software testing tools

How to improve software testing skills

Automation is a work in progress, manual QA is not


Join the conversation



Not that I disagree with you, but I believe your two arguments are off. I believe there is one specific type of testing that absolutely requires manual methods, and that's where you have subjective expected results.

New products are handled exceptionally well via Agile methodology, where 100% automation is almost universally accepted (especially where DevOps is added to the picture). As one of the main selling points of Agile is how well it manages frequent change, I think automation does just fine here.

RE: usability testing, I cannot think of a single test that verifies or validates user experience - again, other than those with subjective expected results ("I'll know quality once I see it") - that can't be automated just as easily. All it takes is defining your usability tests up front and coding to them.

In addition, your "field testing" argument is ludicrous. You cannot possibly be arguing - as it sounds - that one must test a wearable (for example) in one's own bathroom, in the men's room at Grand Central Station, in the women's room at a beach in Venice, CA, and everywhere in between because someone might possibly use a wearable there. You have requirements (or user stories) to tell you what situations and locations are supported (and if not, you need to define them), and you have banks of mobile devices located in the cloud, emulators and - if necessary - specific environments set up to match your requirements. While you'll need personnel to set up those environments (such as a group of mobile devices atop a building with a glass-roofed garden, if that's what the requirement calls for), you don't actually need to go there for every single test. Automation can admirably perform such testing with less overhead per test, so unless you expect to test these situations once per device and consider yourself done, you're better off with automated testing in those scenarios as well.
"There is no effective way of automating usability testing or testing the human experience." This is what I thought of when reading the title of this question.
I feel that if there is a UI, there is no way around it. We have a site that around 12 users beta tested for us. I spent an hour the other day and found four minor glitches. The thing is, everyone is wired differently and approaches things with different logic. That is why I caught the bugs no one else did. They did not think outside the box. I tried totally stupid things and broke the application.
I think manual testers will continue to be around for quite some time. Tools developed in the past 10 or so years have helped us to become more useful or efficient, but that won't remove the need for testers. Especially skilled testers.

People have the unique ability to make decisions based on what they have learned. I can be testing something, notice an interesting behavior, and then go off and investigate it. Computers cannot do that; they can only make "decisions" based on an algorithm that was given to them by a programmer. Put more simply, people can explore, computers cannot.

There are some places doing mostly programmatic testing. There are business domains where that might be ok, and there are business domains where that would absolutely not be ok. It's all in the context. 
Automated tests may themselves contain bugs, which can only be verified through manual testing.
But I guess this comparison is pointless.

Thanks and regards
There's more to automated testing than just having the tests run and give us a green light or a "tests passed" warm and fuzzy. As one who uses an automated harness regularly, I deal with two annoyances quite often: flaky tests (tests that fail but pass when rerun) and tests that never fail (read: we should be suspicious that they are actually doing what they should be). To that end, I will say that there is plenty of manual testing needed, if nothing else, to make sure the automated tests are actually doing what they are supposed to be doing, much less to look at areas that automation can never accomplish (such as making a judgment call on the acceptable performance of an accessibility test).
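One cheap way to smoke out the first of those annoyances is simply rerunning a test and counting outcomes. A small illustrative sketch in Python, assuming the test is a zero-argument callable that signals failure by raising `AssertionError` (the helper and the stand-in test are made up for illustration):

```python
def rerun_test(test_fn, runs=10):
    """Rerun a zero-argument test; return (passes, failures).
    A mix of both across identical reruns is a strong hint of flakiness."""
    passes = failures = 0
    for _ in range(runs):
        try:
            test_fn()
            passes += 1
        except AssertionError:
            failures += 1
    return passes, failures

# A deterministic stand-in for a flaky test: it fails on every second call.
calls = {"n": 0}
def sometimes_fails():
    calls["n"] += 1
    assert calls["n"] % 2 == 1

print(rerun_test(sometimes_fails, runs=10))  # prints (5, 5): half pass, half fail
```

A rerun count catches intermittence, but deciding *why* a test flakes - timing, environment, or a genuine intermittent product bug - is still a human investigation.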
I think it's time to drop from the definition of testing those mechanical activities that are fully automatable. That's what has happened in the industries around us - why not learn from the examples?
  • Compiling is fully automatic now; no one calls it programming.
  • We don't say "automated driving"; we say "automatic transmission".
  • No one says "automated accounting," but there is automated tax filing.