
What is synthetic monitoring?

Synthetic monitoring is the use of software to simulate user interactions with a system. The data generated from the simulated transactions is then analyzed to evaluate how the system behaves. For example, synthetic monitoring could be used to determine whether a website achieves its desired page load times, response times and uptime.

Synthetic monitoring solutions make it possible to assess how a system is likely to respond to user requests before actual users interact with it. Synthetic monitoring provides an opportunity to validate whether systems meet performance and availability requirements prior to being put into production. In this respect, synthetic monitoring enables proactive monitoring. In addition, synthetic monitoring is valuable for assessing a system's response to unusual or infrequent requests, which may not be represented in data collected from transactions real users initiate.

Synthetic monitoring is sometimes also called active monitoring because it involves actively simulating transactions.

Figure: Cisco's Automated Session Testing is a synthetic monitoring tool integrated into the vendor's ThousandEyes internet intelligence platform. It tests the network paths of devices supporting Cisco Webex, Microsoft Teams and Zoom meetings and displays them in a flowchart.

How does synthetic monitoring work?

Typically, synthetic monitoring works by following these five steps:

  1. Developers or quality assurance (QA) engineers write scripts that issue requests to a website, application or other system.
  2. The scripts are executed to generate the desired simulated transactions.
  3. Monitoring software collects data about the transactions.
  4. The data collected by synthetic monitoring tools is analyzed to assess whether the system meets performance or availability requirements.
  5. If necessary, the monitored system is updated or tweaked to improve web performance or other performance metrics. Then another round of synthetic tests is run to determine whether the system now meets performance requirements.

These steps involve running automated synthetic tests using scripted requests. It's possible to perform synthetic testing manually, too, by triggering transactions by hand. However, that approach is difficult to scale and requires more effort to execute.
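
The scripted workflow above can be sketched in a few lines of Python. This is a minimal illustration, not a production tool: the endpoint URL is whatever system is under test, and the two-second threshold is an example requirement, not a standard.

```python
"""Minimal sketch of a scripted synthetic check (steps 1-4).
The URL and the two-second threshold are illustrative assumptions."""
import time
import urllib.request


def run_synthetic_check(url: str, timeout: float = 10.0) -> dict:
    """Issue one simulated request and record status and latency."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        body = resp.read()
        status = resp.status
    elapsed = time.monotonic() - start
    return {"url": url, "status": status, "bytes": len(body),
            "response_time_s": round(elapsed, 3)}


def meets_slo(result: dict, max_seconds: float = 2.0) -> bool:
    """Step 4: compare the collected data against a performance requirement."""
    return result["status"] == 200 and result["response_time_s"] <= max_seconds
```

A scheduler or CI pipeline could rerun `run_synthetic_check` after every deployment and alert whenever `meets_slo` returns `False`.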

By writing scripts to kick off the synthetic testing process, developers and QA teams can make the testing process faster and more efficient. Scripted tests can also be rerun whenever an update is applied to a website or application. This makes it easy to test the system in a consistent way, confirm that the update did not introduce an availability or performance issue, and trace any new problem to its root cause.

Figure: Datadog's monitoring and analytics tool includes a customizable dashboard with graphs created from multiple data sources.

Synthetic monitoring vs. passive or real user monitoring

Synthetic monitoring is not the only way to monitor a website or application. Developers and QA engineers can also perform what is known as passive or real user monitoring.

Under the latter technique, data is collected and analyzed based on requests actual users initiate. In other words, instead of measuring metrics like page load times and response times from simulated requests, engineers gather this data from systems inside a production environment.

With both types of monitoring, the types of data analyzed, and the kinds of insights engineers seek, are fundamentally the same. The main difference lies in how the data is generated -- by simulated transactions or by transactions that live, human end users initiate.

Because of this difference, synthetic monitoring and real user monitoring are typically used for different purposes. Synthetic monitoring helps to test or evaluate a system before it is placed into production. Real user monitoring helps to identify issues -- like slow response times or errors -- that users may be experiencing once the system is live.

Types of synthetic monitoring

There are a variety of use cases for synthetic monitoring. The most common include:

  • Performance monitoring. Simulated transactions can be monitored to determine whether a system meets performance requirements, like completing a request within a given time frame.
  • Load testing. By generating a large volume of simulated requests, synthetic monitoring allows engineers to evaluate how a system behaves under heavy load. Load testing lets them know if a website or application is likely to crash due to a spike in user demand.
  • Transaction monitoring. If developers or QA engineers want to determine how a system handles a specific type of request -- such as one that involves a newly introduced feature that has not yet been deployed to production -- they can initiate and evaluate transactions that simulate that request.
  • Component monitoring. In distributed systems -- such as microservices applications -- synthetic monitoring can be useful for testing certain parts of the system, like a particular microservice, by directing requests at it and measuring the response.
  • Application programming interface (API) monitoring. APIs handle data requests between different systems or system components and endpoints. Synthetic API tests enable engineers to assess whether APIs manage requests as required. API monitoring is useful, for example, when developers want to verify that a third-party API their application integrates with behaves as required.

The list could go on. Synthetic monitoring can be used to test virtually any type of user transaction or request, for any purpose. If a real user can initiate a request, that request can also be monitored synthetically.
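
As one illustration, the load-testing use case above can be scripted with Python's standard library alone: fire many simulated requests concurrently and reduce the latencies to summary figures. The URL, request count and concurrency level here are placeholder choices, not recommendations.

```python
"""Illustrative load-test sketch. The request count and concurrency
level are placeholder values an engineer would tune per system."""
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor


def timed_request(url: str) -> float:
    """Issue one request and return its latency in seconds."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return time.monotonic() - start


def summarize(latencies: list[float]) -> dict:
    """Reduce raw latencies to the figures engineers compare against targets."""
    return {
        "requests": len(latencies),
        "mean_s": statistics.mean(latencies),
        "p95_s": statistics.quantiles(latencies, n=20, method="inclusive")[18],
        "max_s": max(latencies),
    }


def load_test(url: str, n_requests: int = 50, concurrency: int = 10) -> dict:
    """Run n_requests simulated transactions with bounded concurrency."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(timed_request, [url] * n_requests))
    return summarize(latencies)
```

Comparing the 95th-percentile latency under load against the same figure from a single-request baseline shows how much a demand spike degrades the experience.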

Synthetic monitoring benefits and challenges

There are several advantages to using synthetic monitoring. However, the technology also comes with its share of issues.

Benefits

There are three main benefits to using synthetic monitoring:

  1. Performance validation. The main benefit of synthetic monitoring is that it makes it possible to validate system performance and identify potential issues before they affect the actual end-user experience. Synthetic monitoring lets teams monitor proactively and get ahead of performance and availability problems before they disrupt production applications.
  2. Baseline benchmarks. Synthetic monitoring also provides the benefit of helping teams establish a baseline for expected application behavior. By analyzing a series of simulated transactions, engineers can determine how the system should operate under normal conditions. Then, once it is in production, they can detect anomalies and troubleshoot by looking for patterns that deviate from the benchmarks.
  3. User-specific testing. A third benefit of synthetic monitoring is that it makes it possible to run tests from the perspective of specific users. For example, developers may want to evaluate how an application performs for users who require special accessibility features, or to test the experience of users in a geographic region far from the data center hosting the application. Developers and QA engineers can determine which types of requests a given user is likely to make, or initiate requests under particular conditions -- such as routing them the way they would be routed from a specific geographic location. They can then run simulated tests to determine how the system behaves for that type of user and their business transactions.
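
The baseline-benchmark idea in point 2 can be sketched as follows: derive a latency baseline from a series of simulated runs, then flag production samples that deviate from it. The three-standard-deviation threshold is a common convention, not something any particular tool prescribes.

```python
"""Sketch of baseline benchmarking from simulated runs.
The 3-sigma anomaly threshold is an illustrative convention."""
import statistics


def build_baseline(sample_latencies: list[float]) -> dict:
    """Summarize simulated-run latencies into a normal-behavior baseline."""
    mean = statistics.mean(sample_latencies)
    stdev = statistics.stdev(sample_latencies)
    return {"mean_s": mean, "stdev_s": stdev,
            "threshold_s": mean + 3 * stdev}


def is_anomalous(latency_s: float, baseline: dict) -> bool:
    """Flag a production sample that deviates from the benchmark."""
    return latency_s > baseline["threshold_s"]
```

Once the system is live, each observed latency can be checked against the stored baseline to detect the deviations described above.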

Challenges

There are two main challenges of synthetic monitoring:

  1. Limited scope. Synthetic monitoring tests only transactions that engineers decide to simulate. As a result, it doesn't typically cover the full scope of request types that real users are likely to make. If a certain type of transaction triggers an issue, but that transaction is not simulated during synthetic testing, the problem may be missed until the application is in production.
  2. Increased system load. Synthetic monitoring increases the workload placed on a system because it adds to the total number of requests that a system has to handle. This is typically not a major problem, especially if synthetic tests are run against a testing version of a system rather than against a production system. But if teams run synthetic tests against a website or web application that is also handling real user requests at the same time, there is a risk that the additional load introduced by the tests will degrade the digital experience of those real users.

Synthetic monitoring tools

Most modern application performance monitoring (APM) tools and platforms include synthetic monitoring among their features, along with integrations with other enterprise software and additional monitoring capabilities. Vendors providing APM tools and platforms include Datadog, Dynatrace and New Relic.

It's also possible to build bespoke synthetic monitoring tools by writing scripts that initiate transactions, then monitoring the results. However, the synthetic monitoring process is more efficient when engineers use platforms that are designed to streamline the process of writing tests, deploying tests and analyzing the results.
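
A bespoke tool of the kind described above might be as simple as a loop that runs named check functions on a schedule and records pass/fail results. Everything in this sketch (the check names, the interval, the number of rounds) is hypothetical.

```python
"""Hypothetical skeleton of a bespoke synthetic monitor: run named
check functions on a fixed interval. Names and interval are placeholders."""
import time
from typing import Callable


def run_checks(checks: dict[str, Callable[[], bool]]) -> dict[str, bool]:
    """Execute each check once and record pass/fail."""
    results = {}
    for name, check in checks.items():
        try:
            results[name] = bool(check())
        except Exception:
            results[name] = False  # a transaction that errors counts as a failure
    return results


def monitor_loop(checks, interval_s: float = 60.0, rounds: int = 3) -> list:
    """Repeat the checks on a schedule; a real tool would alert on failures."""
    history = []
    for _ in range(rounds):
        history.append(run_checks(checks))
        time.sleep(interval_s)
    return history
```

This is the part a dedicated platform streamlines: scheduling, result storage, dashboards and alerting all come built in rather than hand-rolled.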


This was last updated in June 2022
