
Effective QA practices for RPA bot maintenance

Effective RPA QA ensures your bots don't go haywire. Learn how RPA bots fail, effective QA workflows to fix them, and how, when and where human testers should get involved.

Robotic process automation relies on RPA bots. These sophisticated scripts mimic the way humans interact with applications, especially ones that lack a native scripting language or APIs. But unlike humans, bots break easily.

RPA was the fastest growing category of enterprise software in 2018, according to Gartner, but the research firm also found enterprises have trouble scaling the tech in production. It isn't hard to create a bot -- a worker can simply record the execution of a task to generate a bot template. The hard part is ensuring RPA bot quality over time.

Integration testing is a challenging and time-consuming task with RPA, because problems can sneak up in many different places, said John Cottongim, CTO of Roots Automation, which provides cloud-based RPA services. RPA demands frequent and ongoing testing as part of bot development. With a focus on bot testing, teams catch problems as they roll out, rather than discover issues when the bot is already at work.

RPA QA relies on more than just testing new bots before software goes live. "Bot management is an ongoing dance with the full ecosystem surrounding a process, in comparison to building a modular component within that process," Cottongim said.

To build a healthy, reliable and productive staff of RPA bots, software testers need to know how bots fail, where RPA's inherent weaknesses lie and what the right RPA QA workflow looks like. Ongoing quality comes down to the development team's approach as well as smart monitoring in production.

How RPA bots fail

Bots have seemingly innumerable ways to fail. However, the root of the problem is almost always a lack of human communication, Cottongim said. Static bots and dynamic humans are a dangerous mix for RPA QA.

For example, a worker might change an SAP report without knowing a bot relies on the program to complete an automation sequence. Teams can update or patch software -- such as a UI refresh for the HR portal -- without accounting for bot dependencies. Companies should adhere to established procedures and communicate changes thoroughly to avoid most preventable bot failures.

RPA bot quality problems can also stem from design and developer oversights. For example, the developer does not account for significant system latency during batch operations, or program what the bot should do when faced with bad or missing data.
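Both oversights can be handled defensively in the bot script itself. The sketch below shows one way, assuming hypothetical stand-ins for the real automation: `fetch` represents whatever call reads a record from the target system, and `process` represents the work the bot does with it. Latency failures get a bounded retry with backoff; bad or missing records are parked in an exception list rather than crashing the run.

```python
import time

def run_step(fetch, process, retries=3, delay=2.0):
    """Run one bot step with retry-on-latency and missing-data handling.

    Returns (result, exceptions): result is None when the record was
    unusable, and exceptions collects records parked for human review.
    """
    exceptions = []
    for attempt in range(1, retries + 1):
        try:
            row = fetch()  # may raise TimeoutError under heavy batch load
        except TimeoutError:
            if attempt == retries:
                raise  # out of retries: surface the failure for human review
            time.sleep(delay * attempt)  # back off before trying again
            continue
        if row is None or not row.get("id"):
            exceptions.append(row)  # bad/missing data: park it, keep going
            return None, exceptions
        return process(row), exceptions
```

The key design choice is that the bot distinguishes transient failures (retry) from data failures (park and continue), so neither one halts an overnight batch.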

Why RPA bots don't like change

In many cases, RPA bots are built based on the front-end user path. These software scripts rely on a stable UI, so that the keystrokes and mouse clicks in the automation sequence always produce the same result.

An RPA bot is susceptible to any changes in the UI or web interface, or other unexpected variables, said Kapil Kalokhe, senior director of business advisory services at Saggezza, a global IT consultancy. A bot that an RPA developer programs to visit a website and extract information from a webpage, for example, may not be able to complete its task if the webpage layout changes. The bot becomes stuck until a human intervenes. If the bot runs during off hours, it could abandon the task, leaving team members without the required work when they return.

For QA, RPA developers should build variables into the script so the bot can adjust to changes in its path. But to eliminate this inherent weakness entirely, developers would have to predict every possible variation. Examples of how a bot can break include:

  • Change in UI from plugins, patches, browser upgrade or screen resolution reset;
  • Bot control room restart;
  • Bot services restart;
  • Database maintenance;
  • System maintenance;
  • Expired credentials; and
  • Application maintenance.
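Several of the failure modes above come down to a single UI locator going stale. One common mitigation is to give the bot an ordered list of locator strategies and fall through to the next when one stops matching. This is a minimal sketch, not tied to any particular RPA product: `lookup` stands in for whatever call the tool provides to resolve a locator to a screen element, returning None on a miss.

```python
def find_with_fallback(locators, lookup):
    """Try each (strategy, value) locator in order until one resolves.

    `lookup` is the RPA tool's element-resolution call (hypothetical here);
    it should return None when a locator no longer matches the screen.
    Returns the element and the locator that worked, so the bot can log
    when it had to fall back -- an early warning that the primary broke.
    """
    for strategy, value in locators:
        element = lookup(strategy, value)
        if element is not None:
            return element, (strategy, value)
    raise LookupError(f"No locator matched: {locators}")
```

Logging which fallback fired turns a silent near-miss into a maintenance signal: the bot still works, but the team knows the UI changed.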

What RPA QA workflows look like

RPA QA resembles typical software testing, but some specific considerations exist.

Prior to deploying a bot, the team should do user acceptance testing (UAT) to compare the RPA's performance to the existing manual task. This quality check identifies any potential flaws, and ensures the bot executes as anticipated and meets the user's expectations. Requirements for UAT are similar to those for other kinds of software, Kalokhe said.
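In practice, that UAT comparison is often a reconciliation of the bot's output against a manually produced baseline. The sketch below shows the idea; the record shape and the `invoice_id` key field are assumptions for illustration, not from any particular RPA tool.

```python
def compare_to_baseline(bot_rows, manual_rows, key="invoice_id"):
    """Reconcile bot output against a manual baseline for UAT.

    Returns records the bot missed, unexpected extras, and records
    present in both runs whose field values disagree.
    """
    bot = {r[key]: r for r in bot_rows}
    manual = {r[key]: r for r in manual_rows}
    return {
        "missing": sorted(set(manual) - set(bot)),      # bot skipped these
        "extra": sorted(set(bot) - set(manual)),        # bot invented these
        "mismatched": sorted(k for k in set(bot) & set(manual)
                             if bot[k] != manual[k]),   # values disagree
    }
```

A bot passes UAT when all three buckets come back empty across a representative sample of real work.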

To maintain high quality of service during an RPA rollout, bot developers can move in phases. They automate only portions of a process and continue to evolve the bot's capabilities as the users become familiar with the RPA technology. This setup is akin to software development's beta testing releases.

Some RPA development tools monitor the performance of multiple bots through a dashboard. These dashboards communicate how often the bot ran, error rates, time required to perform the tasks and other statistics. A dashboard can show when a bot has failed and requires intervention.

Dashboards also track how effectively the bots run. Based on these reports, managers can set quality standards and compare automation results to expectations. Then, these leaders can select bots to revise or revamp based on performance data.
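The statistics such a dashboard surfaces reduce to a simple aggregation over per-run records. This sketch assumes a minimal record shape (`status` and `seconds` fields) purely for illustration; real RPA tools expose richer run logs.

```python
def summarize_runs(runs):
    """Aggregate per-run records into dashboard-style statistics:
    run count, error rate and average handling time in seconds."""
    total = len(runs)
    errors = sum(1 for r in runs if r["status"] == "error")
    avg = sum(r["seconds"] for r in runs) / total if total else 0.0
    return {
        "runs": total,
        "error_rate": errors / total if total else 0.0,
        "avg_handling_secs": round(avg, 2),
    }
```

Comparing these numbers against the quality standards managers set is what turns raw run logs into a revise-or-revamp decision.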

Kalokhe at Saggezza offers these ways to maintain bot quality once deployed:

  • Analyze the error rates, to prioritize and guide revisions.
  • Regularly monitor the bot performance success rate.
  • Implement bot optimization practices using metrics like average process handling time and turnaround time.
  • Evaluate key performance indicators in a monthly review.
  • Dedicate a team member to monitor the error logs and investigate issues.
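The first and last practices above -- analyzing error rates to guide revisions and monitoring the error logs -- can start as simply as ranking error categories by frequency. This sketch assumes log lines of the form `"ErrorType: detail"`, an illustrative convention rather than any specific tool's format.

```python
from collections import Counter

def prioritize_errors(log_lines):
    """Rank error categories by frequency so the most common failure
    modes get revised first. Assumes 'ErrorType: detail' log lines."""
    counts = Counter(line.split(":", 1)[0].strip()
                     for line in log_lines if line.strip())
    return counts.most_common()  # [(category, count), ...] most frequent first
```

Whoever is dedicated to watching the logs can feed this ranking straight into the monthly KPI review.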

RPA software providers are continuing to improve their technology to unite automation with ever-changing workflows. "Enabling humans to work in harmony with software bots is critical to the continued growth and success of RPA," Kalokhe said. To make tools better, vendors will need to create ways for bots to learn and adapt. Flexible bots would reduce the need for monitoring and intervention.

Set developers up for success

Higher bot quality starts with development consistency.

"If you have 100 developers spread out across a company, without a central body filtering and training for quality, you have no hope," Cottongim said. Instead, centralize the knowledge base used to build bots and concentrate the development talent around RPA best practices. This strategy will help control and improve quality over time.

Cottongim has seen little progress on incorporating testing into the bot development tools -- outside of automated code review tools that catch some glaring issues. In his experience, teams get the best result when they focus on building bots with consistency and perform frequent and ongoing tests.

Tight turnarounds can unravel QA practices. An emphasis on speed can lead to brittle solutions, ultimately turning companies away from RPA. "We are all used to spending the appropriate time and resources to find our next teammate, so why not do the same when looking to hire a digital coworker?" he said.
