

How to get started with synthetic monitoring

Expert Justin Rohrman says passive monitoring might not be enough to create the best user experience and that synthetic monitoring is the way to go.

Passive monitoring lets development teams follow what's happening in production right now. Every day, customers log in from a device and search for a product, more than likely using a web browser. Each time they enter data into a field, information is collected. You can see where customers are located, what browser they're using, what operating system that browser runs on and, most importantly, whether everything is working properly.

Monitoring is a tool to help developers discover problems in production before a customer runs into one and needs to pick up the phone. The next step is testing in production, and that starts with synthetic monitoring.

Synthetic monitoring in action

Monitoring tools typically watch production environments to discover anything out of the ordinary. That can help reduce what's called defect exposure, or the amount of time a person is exposed to a problem. Discovering a problem seconds after it has occurred gives development teams options: flip the feature flag off, so after the next browser refresh, the broken feature disappears; roll the release back to the previous version; or send out a hotfix.
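One of those options, flipping a feature flag, can be sketched in a few lines. This is a minimal illustration, assuming flags live in a shared store that the render path consults on every request; the flag name and function are hypothetical:

```python
# Minimal feature-flag sketch. FLAGS stands in for whatever shared store
# (database row, config service) the application reads on each request.
FLAGS = {"new_checkout": True}

def render_checkout(flags=FLAGS):
    # The flag is re-read on every request, so flipping it off makes the
    # broken feature disappear on the next browser refresh, with no deploy.
    if flags.get("new_checkout"):
        return "new-checkout-ui"
    return "legacy-checkout-ui"
```

Because the check happens per request, an operator can disable the broken path the moment monitoring raises an alert, buying time for a proper fix.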

Getting users unblocked is the name of the game. But passive monitoring works in a haphazard way: the information you see includes only what users happen to be doing right now. If you want something more systematic, synthetic monitoring is the way to go. Teams build synthetic monitoring tests in two main ways: recording HTTP traffic from a web browser or calling a service directly.


Recording HTTP traffic from a web browser requires a proxy tool such as Fiddler. Turn on the proxy tool and configure it to listen on a specific port. After pointing your web browser at the proxy, you might navigate to Amazon.com and walk through a customer scenario: searching for Harry Potter and the Prisoner of Azkaban, clicking "add to cart" and completing the checkout with a stored credit card. This full round-trip scenario gives the development group a breadth of information about a customer's authentication, search, shopping cart, checkout and stored payment methods. Once the calls show up in Fiddler, copy and paste them into your monitoring tool. After editing some data for authentication, book IDs, credit card selection and so on, you'll have a runnable test. That's the route people who don't like to code will often take.
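On the client side, routing traffic through the recording proxy is a one-line configuration change. Here is a small Python sketch using the standard library's urllib; it assumes the proxy is Fiddler listening on its default local port, 8888:

```python
import urllib.request

# Fiddler's default listening address; adjust if your proxy uses another port.
FIDDLER_PROXY = "http://127.0.0.1:8888"

def make_proxied_opener(proxy_url=FIDDLER_PROXY):
    """Build a urllib opener that routes HTTP and HTTPS traffic through a
    local proxy, so every request shows up in the proxy's capture log."""
    handler = urllib.request.ProxyHandler({"http": proxy_url,
                                           "https": proxy_url})
    return urllib.request.build_opener(handler)

# Example use (requires the proxy to be running):
# opener = make_proxied_opener()
# opener.open("https://www.example.com/")  # request appears in Fiddler
```

Browsers expose the same setting in their network or proxy preferences; the point is simply that all traffic flows through the tool doing the recording.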

The other option is to call services directly, which is usually more code-intensive. You could either design your tests within your monitoring tool or write them in a framework like Ruby's Airborne and have the monitoring tool kick those tests off. To build the same test, you would need to write something like this:

POST /login {"uname":"justin", "pwd":"123abc"} //store response token as a variable

POST /search {"token":authToken, "title":"Harry Potter and the Prisoner of Azkaban"} //store response book ID as a variable

POST /cart {"token":authToken, "ID":responseBookID}

POST /purchase {"token":authToken, "method": "savedMethod1"}

Each service call is dependent on something captured from the previous call. In practice, each call would make some assertions on the JSON returned.
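The chain above can be sketched as a runnable check. This is an illustrative Python version, not any particular monitoring tool's API: the endpoint paths and response fields mirror the pseudocode, and the HTTP client is injected as a callable so a real client can be swapped in when the check runs against production:

```python
def run_purchase_check(post):
    """Run the chained login -> search -> cart -> purchase scenario.

    `post` is any callable (path, payload) -> dict of parsed JSON.
    Endpoint names and response fields are illustrative, matching the
    pseudocode above rather than a real service.
    """
    login = post("/login", {"uname": "justin", "pwd": "123abc"})
    assert "token" in login, "login should return an auth token"
    token = login["token"]  # captured for every later call

    search = post("/search", {"token": token,
                              "title": "Harry Potter and the Prisoner of Azkaban"})
    assert "bookID" in search, "search should return a book ID"

    cart = post("/cart", {"token": token, "ID": search["bookID"]})
    assert cart.get("status") == "ok", "add-to-cart should succeed"

    purchase = post("/purchase", {"token": token, "method": "savedMethod1"})
    assert purchase.get("status") == "ok", "purchase should succeed"
    return purchase
```

Each assertion is the kind of check on the returned JSON the article describes: if any step fails to hand the next one what it needs, the monitor fails at that step and the alert pinpoints where the flow broke.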

Implementation questions

Development teams who want to do synthetic monitoring have to make some decisions regarding staffing and tooling.

For the past 30 years or so, software development teams have been structured according to specific roles that include developers, testers, database administrators and product managers. Implementing synthetic monitoring might require developers, testers and operations people to work collaboratively. Testers will come up with ideas for monitoring that will return useful information about whether data is persisting, the right HTTP statuses are being returned and if recent changes may have broken existing functionality in surprising ways. Developers will help write the code for these tests and check that code into version control. Operations staff will get these tests configured in a monitoring tool to run against the right environment and send alerts when things don't look right.

In more modern team structures, one or two developers -- or perhaps DevOps people with broad skill sets -- might handle all these tasks on their own. They'll move from developing a test to configuring the monitoring tool to setting alerts, rather than handing off the work to people in different roles.

Along with staffing, the other large consideration is tooling. Many popular production monitoring tools, such as Dynatrace, AppDynamics and New Relic, have synthetic monitoring built into their monitoring solution or offer a companion tool that pairs with their application performance management product to perform synthetic monitoring. Fully featured integrated development environments like Visual Studio have plug-ins available to handle some aspects of synthetic monitoring as well.

Synthetic monitoring doesn't replace passive monitoring; it supplements it by filling in the gaps. To begin, take a look at your team and their skill sets to see who can do this particular work, select the appropriate tool and build a few tests.

Next Steps

Monitoring your outbound traffic

Monitoring system essential features

Learn what is new with APM tools
