Improving problem resolution through automation

A Forrester study found that problem resolution is inefficient at most organizations. By automating the process companies can solve problems faster and cut costs, Doug Laney says.

Doug Laney, BMC Software

Have you ever pondered the time and energy your development team spends on documenting, recreating and attempting to resolve software bugs and reported issues (i.e., "application problems")? If so, you're in a small minority. Those who focus squarely on the effort of writing code should consider paying closer attention to the major time-sink of application problem resolution and how it affects their application release schedules, quality, functionality and, ultimately, their company's bottom line.

The decades-old ingrained process for manually and iteratively resolving application problems dramatically stymies the productivity of development organizations, yet most executives have little to no understanding of the extent to which this commonly accepted process affects their IT teams and their businesses. This was revealed, however, in an eye-opening Forrester Consulting study, commissioned by BMC Software: "The Business Case for Better Problem Resolution." Forrester Consulting conducted anonymous interviews with over 150 application development managers and executives throughout North America in which it elicited rare insights into the manual steps and cumulative hidden costs associated with the process of application problem resolution.

The study concluded that for most organizations, problem resolution is a highly inefficient process. Developers expend an alarming amount of time -- nearly a third of their work day -- identifying and trying to recreate problems that either 1) they discovered during unit testing, 2) were submitted by the pre-production test/QA team, or 3) are escalated by application support. Testers are bogged down similarly with documenting problems that they encounter and by the frequent back-and-forth communication with the development team about particular problems.

The end result: Problems take far too long to document and resolve -- much longer than management realizes. According to the Forrester Consulting study, it takes an average of six days to resolve a single application problem, with 11% of problems taking more than 10 days to resolve. Of course, this varies widely by the nature of the issue and the specific application.

Regardless, this excessive amount of time can create a chain reaction of resultant business issues. The time spent identifying and trying to recreate a problem alone can cost a good deal of money, and that's if the problem can even be reproduced. Additionally, management should tally the increased costs of development resources, reduced IT team productivity and disruption of revenue-generating activities. Then there are the soft costs: reduced customer satisfaction from long time-to-resolution cycles, slower time to market, quality versus functionality tradeoffs, as well as damage to the company's brand if word of a major production issue hits the streets.

Unfortunate, uncomfortable and unprofitable tradeoffs
When developers are distracted and bogged down by trying to identify the root cause of a problem, they are no longer focused on core development activities that truly add business value. And when testers spend time manually gathering problem information and documenting problems, they are no longer focused on uncovering application issues prior to release. This drain on resources results in unfortunate and measurable tradeoffs between release dates, software stability, software performance, software usability and software functionality.

Think about how often in your organization release dates slip, planned features are deferred, and even known application issues are allowed to be released into production. The more inefficient the problem resolution process is, the more painful, visible and costly these tradeoffs become. Imagine the impact this can have on customers, shareholders and business partners if they have little confidence in promised delivery dates.

Dude, where's my code?
While developers are busy trying to write new code, they are constantly bombarded by application problems from at least three sources. First, developers often discover unexpected application behavior as they're coding. In addition, developers must put aside their coding to resolve application issues discovered by the test/QA organization that may or may not be related to something they coded. And finally, when issues are escalated from application support groups, it's usually a drop-whatever-you're-doing situation to deal with a very unhappy customer.

In each of those cases, new development grinds to a halt until developers can determine the problem's root cause (or, worst case, merely find a way to make the symptom go away) and then repair it. Any developer will tell you that fixing a problem is easy once the root cause is determined. Thus, ineffective problem resolution processes drive down productivity for developers in particular. Beyond the impact to developers and new application logic, testers, help desk engineers, operations managers, IT executives and end users are often pulled away from their core responsibilities when application problems go unsolved for too long.

Are your application testers prolific documenters or prolific testers?
The reason testing costs run so high is inefficiency in the core testing process. First, according to the Forrester Consulting study, it takes an average of an hour to create just one problem report. That means the average tester can document at most a measly eight problems in a given day, leaving even less time to uncover new ones. Each problem report typically requires manually gathering and documenting information such as the following:

  • Written description of the problem
  • Steps to recreate the problem -- every click and keystroke
  • Screenshots of the application at each step leading up to and including the problem
  • System and environment information
  • Dumps and snippets of any available server and application logs
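Much of that list can be gathered programmatically. The following is a minimal sketch of how a capture tool might bundle a written description with system and environment details; the function name and report fields are illustrative assumptions, and a real solution would also record the user's clicks, keystrokes, screenshots and server logs.

```python
# Hedged sketch: automatically capturing the environment portion of a
# problem report. Field names here are illustrative, not from any product.
import json
import platform
import sys
from datetime import datetime, timezone

def capture_environment(description: str) -> dict:
    """Bundle a tester's written description with system/environment data."""
    return {
        "description": description,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "os": platform.platform(),          # system and environment information
        "python_version": sys.version,
        "hostname": platform.node(),
    }

report = capture_environment("Save button raises a 500 error on large files")
print(json.dumps(report, indent=2))
```

Automating even this slice of the report turns an hour of manual documentation into a single call made at the moment the problem occurs.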

This time and expense adds up quickly. The more time testers spend documenting problems, the less time they have to discover them, which means either more application problems go unnoticed before the code is released, releases must be delayed or testing teams must be expanded.

The root cause of user angst
A primary way customers evaluate software vendors is on how promptly they resolve reported issues. If a problem is serious enough, or festers long enough to disrupt the customer's business or its employees' productivity, a vendor stands to lose future business, incremental revenue such as maintenance renewals and related services, and invaluable customer referrals.

As in the case of service-level agreements (SLAs), some costs are felt even more immediately. Increasingly, software vendors take a direct hit to the pocketbook for violating performance, uptime or other agreed-upon service metrics. Longer term, endemic customer satisfaction woes can cause ill will that may irreparably harm a vendor's brand. Even when your "customers" are end users within your own company, the IT department risks a tarnished image, and business performance can suffer, when applications are buggy and/or service levels aren't met.

Shifting from manual to automatic problem resolution
Few companies have an automated way to collect detailed, synchronized information in a meaningful way when an application problem arises. This results in help desk representatives spending time eliciting anecdotal evidence from users about their experiences, product support piecing together clues from disparate hardware and monitoring systems, and testers left trying to record their steps to determine exactly which build they have tested.

All of the effort that goes into pulling together that data often does not prove adequate to solve the problem. Developers receive disconnected, unsynchronized bits of information from server logs, live conversations and other sources. But that rarely provides the context needed to identify the root cause from among the countless elements involved in application behavior. From this incomplete information, developers must then recreate the problem. That is easier said than done. There may be numerous differences between the development, test and production environments and the environment at a customer site. Thus, it is not surprising that those interviewed by Forrester Consulting reported that, on average, 25% of problems are not reproducible.

This iterative, haphazard and time-consuming process could be cut down dramatically with an automated problem resolution solution that captures and collates detailed information about the application and environment at the time the problem occurred (not afterward). Yet because this manual process is so ingrained and overlooked in most development organizations, management unconsciously maintains the status quo.

Calculating the return on automating problem resolution processes
Once management teams assess the true costs of inadequate manual problem resolution, it becomes fairly straightforward to justify investments in an automated problem resolution solution. Today's solutions for automated application problem resolution enable both developers and testers to maximize their value to their organizations by simply coding more and testing more. Benefits to a company are myriad, but a basic efficiency return on investment (ROI) calculation is straightforward.

As the Forrester Consulting study confirmed, developers spend an average of 29% of their time on problem resolution, while testers and support personnel gather and communicate over six distinct pieces of information on each application problem. So, for a team of 100 developers, 50 testers and 50 support engineers that leverages an automated problem resolution solution shown to improve developers' problem resolution efficiency by 50% and test/support documentation efficiency by 75%, the savings can be significant:


  • Developer rate: $100,000 per year
  • Tester/QA engineer rate: $50,000 per year
  • Test/QA engineer submits six application problems per day
  • Support engineer escalates two application problems per day
  • Developer finds root cause of problem 50% faster via automated, comprehensive, synchronized application problem documentation and fewer problem "round trips"
  • Test/Support engineer documents problems 75% faster (1 hour vs. 15 minutes) by automatically capturing complete, integrated information about the application problem as and where it occurs

ROI calculation
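A minimal sketch of the efficiency arithmetic follows. Team sizes, salaries and efficiency gains come from the assumptions above; the eight-hour workday and the support engineer salary (assumed equal to a tester's) are not stated in the article and are labeled as assumptions.

```python
# Hedged sketch of the ROI arithmetic under the article's stated assumptions.
DEV_COUNT, DEV_SALARY = 100, 100_000
TESTER_COUNT, TESTER_SALARY = 50, 50_000
SUPPORT_COUNT, SUPPORT_SALARY = 50, 50_000   # assumption: same rate as testers
HOURS_PER_DAY = 8                            # assumption: eight-hour workday

# Developers: 29% of time on problem resolution, resolved 50% faster
dev_savings = DEV_COUNT * DEV_SALARY * 0.29 * 0.50

# Testers: 6 problem reports/day at 1 hour each, documented 75% faster
tester_doc_fraction = 6 * 1 / HOURS_PER_DAY
tester_savings = TESTER_COUNT * TESTER_SALARY * tester_doc_fraction * 0.75

# Support: 2 escalations/day at 1 hour each, documented 75% faster
support_doc_fraction = 2 * 1 / HOURS_PER_DAY
support_savings = SUPPORT_COUNT * SUPPORT_SALARY * support_doc_fraction * 0.75

total = dev_savings + tester_savings + support_savings
print(f"${total:,.0f}")  # $3,325,000 -- "over $3 million"
```

Under these assumptions the three components are $1,450,000 (developers), $1,406,250 (testers) and $468,750 (support), which is where the "over $3 million" figure below comes from.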

Merely by automating its application problem resolution processes, this moderately sized application development team can reallocate over $3 million to developing new applications or functionality, improving quality or releasing applications faster. And these are only the hard savings.

The Forrester Consulting study revealed that nearly one-third of managers grossly underestimate the time their teams spend on application problem resolution. However, there is no question that the inefficiency of current methods is causing a huge burden. Ultimately, once businesses make the connection between problem resolution and the hard and soft costs associated with lost development cycles, it's hard not to get on board with a more efficient means of identifying and resolving application problems.

About the author: Doug Laney is director of customer solution strategies at BMC Software.
