What if software testing could undergo a metamorphosis, becoming better, faster and less expensive, all while letting testers focus on what they excel at? That rosy future could happen, thanks to a sudden surge of interest in AI in software testing.
Enterprise solutions provider Infostretch just announced it will offer artificial intelligence in software testing through a new service called Predictive and Prescriptive QA. Infostretch isn't the only option -- San Francisco-based startup Appdiff is also bringing machine learning "bots" online as testers. And dinCloud recently announced "James," a virtual robot QA.
With continuous delivery, continuous integration and DevOps as the hot topics in every software development conversation today, the pressure on testers has never been more intense. "The thing is your crew cannot keep up with the amount of testing that should happen," Appdiff CEO Jason Arbon said. "That's one reason for Appdiff. ... People can't keep up any more."
What about machine learning?
The solution is AI in software testing, or more specifically, an AI subset: machine learning. "Today there are tons and tons of test data and it's very hard for a single person to get through it all," said Avery Lyford, chief customer officer at Infostretch. "It's tons of report management now. Where are the real issues and what are the real problems?" That is where AI in software testing can come in and help sort through the noise, Lyford said.
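Neither vendor has published implementation details, but the idea of using software to cut through test-report noise can be sketched in a few lines. The example below is purely illustrative: it clusters near-duplicate failure messages so a tester reviews one report per likely root cause rather than every raw log line. The `triage` function, the sample log messages and the 0.8 similarity threshold are all assumptions, not anyone's actual product.

```python
# Illustrative sketch (not Infostretch's implementation): group
# near-duplicate failure messages so a tester sees one entry per
# probable root cause instead of wading through every log line.
from difflib import SequenceMatcher

def triage(failures, threshold=0.8):
    """Cluster failure messages whose text similarity exceeds threshold."""
    clusters = []                      # each cluster: list of similar messages
    for msg in failures:
        for cluster in clusters:
            if SequenceMatcher(None, msg, cluster[0]).ratio() >= threshold:
                cluster.append(msg)
                break
        else:
            clusters.append([msg])     # no close match: start a new cluster
    # Surface the biggest clusters first -- likely the "real" problems.
    return sorted(clusters, key=len, reverse=True)

logs = [
    "Timeout waiting for /checkout (3001 ms)",
    "Timeout waiting for /checkout (2987 ms)",
    "NullPointerException in CartService.total",
    "Timeout waiting for /checkout (3104 ms)",
]
for cluster in triage(logs):
    print(len(cluster), cluster[0])
```

Three timeout variants collapse into one cluster, so the tester sees two issues to investigate instead of four log lines. A production system would use real machine learning over far richer signals; simple text similarity just makes the triage idea concrete.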
Infostretch is offering Predictive and Prescriptive QA as a service. With a heavy focus on data analysis, Lyford said, the tool can streamline the testing process by ensuring the right information is in the hands of testers, who can then make better decisions. The new service can also be used in conjunction with the company's QMetry offering.
Appdiff is taking a slightly different approach, Arbon said. "We're going from the end user experience backwards," he said. "AI bots can do tens of thousands of test cases versus 20 to 100 regression test cases. This plays into today's DevOps plan to iterate quickly." Using AI in software testing, companies will always know if the UI isn't working or the UX is struggling, he said.
But these aren't just any bots. Arbon, who previously worked at Google and has a background in software testing, realized a fundamental truth about applications that makes bots effective testers. "Almost every app is the same," he explained. "It's the same log-in screen, most search boxes look the same, the profile, the shopping carts, there are a lot of similarities." With that understanding -- and the idea that each bot could be trained as a specialist in a single area, such as the search box -- Arbon was able to create bots that were better than the average tester. "The little bots are specialists on each area of the app and while they're not as smart as a human might be, they're the best search testers on the planet." Arbon and his colleagues, who come from Google and Microsoft, train their bots to test like they did. "It's like we've created a 'Google tester' in a box. This replicates what we would do with your app."
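Appdiff hasn't disclosed how its bots work internally, but the specialist-bot idea can be sketched: a bot that knows only one widget pattern -- here, a search box -- and runs the same battery of probes against any app exposing that pattern. Every name below (`AppUI`, `SearchBot`, the probe strings) is a hypothetical stand-in for illustration, not Appdiff's actual code.

```python
# Hypothetical sketch of a "specialist" bot: deep knowledge of one
# UI pattern (a search box), reusable against any app that exposes
# that pattern. All names here are illustrative, not Appdiff's API.

class AppUI:
    """Minimal stand-in for a driver over a real app's search box."""
    def __init__(self, results_db):
        self._db = results_db          # maps query -> list of result titles

    def search(self, query):
        # A real driver would type into the live UI; here we just look up.
        return self._db.get(query, [])

class SearchBot:
    """Runs the same search-box probes against every app it is given."""
    PROBES = ["laptop", "", "   ", "<script>alert(1)</script>", "a" * 500]

    def run(self, ui):
        issues = []
        for query in self.PROBES:
            results = ui.search(query)
            if not isinstance(results, list):
                issues.append(f"non-list result for {query!r}")
            if not query.strip() and results:
                issues.append(f"blank query {query!r} returned results")
        return issues

ui = AppUI({"laptop": ["ThinkPad", "MacBook"]})
print(SearchBot().run(ui))   # an empty list means no issues found
```

Because the bot encodes knowledge about search boxes rather than about any one app, the same `SearchBot` exercises the search feature of every app it meets -- which is the sense in which Arbon says the bots are "specialists on each area of the app."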
And, amazingly, there may be a silver lining in this for testers, many of whom fear being automated -- or AI'd -- out of a job. "The folks we work with don't get fired," Arbon said. "They get to hand off work and focus on doing things they're good at." Or, to put it another way, AI eliminates the grunt work and lets testers do the human, creative things they're better at, said Paul Merrill, principal software engineer in test and founder of Beaufort Fairmont Automated Testing Services, at the Agile2017 conference in Orlando. Lyford sees it as giving testers back that elusive element of time. "We want people to be able to do complicated edge cases, not the routine stuff. This is to augment testers, not replace them."