Evaluate 3 application performance monitoring strategies

There is more than one approach to performance monitoring, and each comes with its own advantages. Compare these three strategies to find the right fit for your organization.

Data centers must be monitored. But, as with many things, the details get tricky: what you monitor, how you monitor it and what you do with that data.

Some vendors will help you navigate those questions, but let's be clear: Vendors are out to make a sale. IT professionals have long dealt with this complexity in monitoring; the real decision lies in choosing between a monolithic and an a la carte approach. Both have positives and negatives, and understanding the variables can help you find the best fit for your business.

There are three basic application performance monitoring strategies:

  1. Monolithic. A single suite of products that covers an entire data center in one purchase.
  2. Dedicated. Standalone products with defined roles and targets but no common framework.
  3. Common core with a la carte. A central platform to which agents and components are added as needed, at extra cost.

Although an all-encompassing suite costs significantly more than a standalone option or even an a la carte common core product, it's important to weigh the asset's long-term benefits.

Monolithic monitoring tools

This performance monitoring strategy demands a high initial investment, but it is usually acquired in a single purchase. That one-time cost can be decisive if an organization's purchasing power wavers year to year based on sales or other factors.

Maintenance costs for a monolithic suite will be higher than for the other options because the suite contains more features -- often more software than an organization needs, though usually most of what it does need. Maintenance payments on nonessential elements will feel expensive to management, and those added options mean more patches, more configuration and more resources to run smoothly at scale. This approach also puts all the eggs in a single monitoring basket, making the suite a critical piece of infrastructure.

One of the primary benefits of a monolithic approach is that it comes closest to a single pane of glass. Beyond reducing training effort for IT and operations staff, retaining all data on a single platform can yield deeper insight into complex issues that span multiple environments within a data center. That can mean quicker problem resolution and earlier detection of much larger issues.

Dedicated tools

Dedicated monitoring tools usually have a much lower initial investment and lower annual maintenance costs. These tools are often easier to get approved by purchasing and management, and their costs often fly under the traditional budget radar.

These tools are dedicated to a specific product or environment and consistently perform well in their specialty. However, they lack the overall view of the data center and might miss indirect issues. This creates gaps that staff must fill to achieve that data-center-wide view. It also affects staff training and problem resolution times, as each tool can display data differently and have vastly different support characteristics and infrastructure needs.

This approach can also raise security concerns, because it might involve multiple vendors, agents, update schedules and configurations, along with unique licensing models and contract terms.

Common core with a la carte

While not as expensive as the monolithic approach, this application performance monitoring strategy comes close. It requires payment for the engine, or core, that drives everything, plus additional purchases of agents for different environments as needed. This lets you build up the environment gradually and achieve benefits similar to the monolithic approach while spreading spending over the long term.

This sounds ideal, but there are downsides. The agents for the common core model are not cheap and could end up costing as much as -- or more than -- the monolithic approach. The advantage, of course, is spreading that cost out over time, but maintenance costs still exist and quickly add up. Because this approach seems to constantly require additional agents or products, management might also come to see the product as a money pit.
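The math behind that tradeoff is easier to see with numbers. The following is a minimal sketch in Python; every figure in it -- upfront prices, per-agent costs, rollout pace and maintenance rates -- is hypothetical and exists only to illustrate how accumulating agents and their maintenance can eventually overtake a monolithic suite's one-time price.

```python
# Hypothetical figures for illustration only -- real vendor pricing
# varies widely and is usually negotiated.
MONOLITHIC_UPFRONT = 500_000
MONOLITHIC_MAINTENANCE = 100_000   # flat annual maintenance

CORE_UPFRONT = 150_000
AGENT_COST = 40_000                # per agent, bought as needed
AGENTS_PER_YEAR = 2                # hypothetical rollout pace
MAINTENANCE_RATE = 0.20            # annual maintenance as a share of licenses owned


def monolithic_cumulative(years: int) -> float:
    """Total spend after `years`: one purchase plus flat annual maintenance."""
    return MONOLITHIC_UPFRONT + MONOLITHIC_MAINTENANCE * years


def common_core_cumulative(years: int) -> float:
    """Total spend after `years`: the core, plus agents added each year,
    plus maintenance that grows with every license owned."""
    total = CORE_UPFRONT
    licenses_owned = CORE_UPFRONT
    for _ in range(years):
        new_agents = AGENT_COST * AGENTS_PER_YEAR
        licenses_owned += new_agents
        total += new_agents + licenses_owned * MAINTENANCE_RATE
    return total


for year in range(1, 9):
    print(f"Year {year}: monolithic ${monolithic_cumulative(year):,.0f}"
          f" vs. common core ${common_core_cumulative(year):,.0f}")
```

With these invented numbers, the common core model starts at less than half the monolithic spend but overtakes it around year six. The exact crossover depends entirely on negotiated pricing; the takeaway is that a growing agent count compounds maintenance costs.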

Finally, any broad-scale product might simply not match a dedicated tool's depth in its specialty. Pre-purchasing some agents, along with negotiated vendor pricing, can give management cost clarity and make the common core model a real standout.

How to choose

A big part of deciding on a performance monitoring tool is looking beyond what you need today to where you want your environment to be in a few years.

Dedicated tools might lack that big-picture ability, but their costs might be preferable if an organization plans to eventually move to the cloud and doesn't want to make a large monitoring investment. The a la carte option is surging in popularity, but it requires a firm grasp of the pricing model and the ability to afford everything the organization wants from the tool.

The positive takeaway is that every vendor will want to negotiate favorable pricing, because once an organization selects a monitoring tool, it is difficult to change. Pay attention to licensing details, and watch licensing and maintenance costs closely; they can quickly get out of hand if not kept in check.
