
How to make crowdsourced testing actually help the development process

The trouble with paying bug bounties to outsiders for testing help is that doing so can create an adversarial relationship between developer and tester. Expert Robin Goldsmith offers advice.

In a famous "Dilbert" comic strip, the Pointy-Haired Boss offers to pay a bounty for each bug detected, whereupon Wally responds, "I'm going to code me a minivan." The main message, of course, is that the number of bugs the developer -- hopefully unintentionally, other than Wally -- puts in the code is the major determinant of how many bugs are detected. Testing techniques and skills are secondary. That is, the best tester in the world cannot find a bug that isn't there, and the weakest tester can't help finding bugs in dirty code.

However, the "Dilbert" strip doesn't address several other critical aspects of crowdsourced testing and bug bounties. Many organizations have used a variant, often called pelt counts, in which a tester's performance is measured by the number of bugs he or she detects. While this seems a brilliant management use of metrics, it is often misguided and can actually cause additional undesirable results.

Part of the issue is that pelt counts make testers focus on the quantity, rather than the quality, of bugs. Such a shifted priority can divert limited test time and effort to detecting lots of often-trivial bugs, rather than hunting for those that may be harder to detect but matter more.
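To make the quantity-versus-quality distinction concrete, here is a minimal sketch in Python, assuming hypothetical severity labels and weights, of how a severity-weighted score can reverse the ranking a raw pelt count produces:

    from dataclasses import dataclass

    @dataclass
    class Bug:
        title: str
        severity: str  # "critical", "major" or "minor"

    # Hypothetical weights: one critical escape matters more than a
    # pile of cosmetic findings.
    SEVERITY_WEIGHTS = {"critical": 10, "major": 3, "minor": 1}

    def pelt_count(bugs):
        # Raw count -- rewards volume regardless of importance.
        return len(bugs)

    def weighted_score(bugs):
        # Severity-weighted count -- rewards bugs that matter.
        return sum(SEVERITY_WEIGHTS[b.severity] for b in bugs)

    tester_a = [Bug("typo in tooltip", "minor")] * 8
    tester_b = [Bug("data loss on concurrent save", "critical"),
                Bug("wrong tax rounding", "major")]

    print(pelt_count(tester_a), pelt_count(tester_b))          # 8 2
    print(weighted_score(tester_a), weighted_score(tester_b))  # 8 13

Under the raw count, the tester filing eight trivial reports looks four times as productive; under the weighted score, the tester who found the data-loss bug comes out ahead.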

The bigger issue is that pelt counts create an adversarial relationship: for the tester to succeed, the developer must fail. Instead of concentrating on writing clean code, the developer often pays more attention to preventing the tester from reporting the bugs that are there. The most common symptom is lots of time wasted arguing about whether something is a bug.

An effective -- but, unfortunately, seldom-used -- solution is to shift the metric's focus to the user. In that scenario, both developers and testers are measured by the number of bugs the user encounters. The fewer bugs the user sees, the better both the developer and the tester are evaluated. That's a rational win-win-win for developer, tester and user.
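As a minimal sketch of what such a shared metric might look like, assuming hypothetical bug records tagged with where each bug surfaced, only escapes to the user count against the team:

    from collections import Counter

    # Hypothetical records: (component, found_by), where found_by is
    # "internal" (caught before release) or "user" (escaped).
    bug_reports = [
        ("checkout", "internal"),
        ("checkout", "user"),
        ("search", "internal"),
        ("search", "internal"),
    ]

    def escaped_defects(reports):
        # Count user-encountered bugs per component. The developer and
        # tester of a component share this number, so both win by
        # driving it down rather than by arguing over whose bug it is.
        return Counter(c for c, found_by in reports if found_by == "user")

    print(escaped_defects(bug_reports))  # Counter({'checkout': 1})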

Relatively recently, bug bounties began to be applied in a different way -- to testers outside the development process and, thus, usually with no reflection on developers. This approach, sometimes referred to as crowdsourcing, offers the benefit of getting lots of eyes testing the application, often in situations unlikely to be tested otherwise, while paying only for unique bugs that actually are detected.
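Paying only for unique bugs implies some form of duplicate detection. Here is a minimal sketch, assuming a hypothetical report format and a deliberately naive signature for spotting duplicates:

    def unique_payable_bugs(reports, known_signatures=()):
        # Pay only the first report of each distinct bug; skip anything
        # the development team already knows about.
        payable = []
        seen = set(known_signatures)
        for report in reports:
            sig = (report["component"], report["symptom"].strip().lower())
            if sig not in seen:
                seen.add(sig)
                payable.append(report)
        return payable

    reports = [
        {"component": "login", "symptom": "500 on empty password"},
        {"component": "login", "symptom": "500 on Empty Password "},
        {"component": "export", "symptom": "CSV drops last row"},
    ]

    # Prints 2 -- the second login report is a duplicate.
    print(len(unique_payable_bugs(reports)))

In practice, deciding whether two reports describe the same bug is far fuzzier than a normalized string match, which is part of why so many crowd testers go unrewarded.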

Many times, those involved in crowdsourced testing invest their time with no reward, which similarly can have unanticipated consequences. Moreover, because they are unlikely to know the business domain, even a large group of crowd testers may find only superficial bugs while missing more important ones. So, as usual, "Dilbert" makes a very good point: Bug bounties have to be handled carefully, and crowdsourced testing may be one option.

Next Steps

Bug bounties in the Friendly Skies

Interested in getting started? You might try a third party

Bug bounties are not just for regular bugs anymore
