Getting applications to work in development is usually a challenge. Once they do work, it is disappointing to discover
issues that bog down application performance in production systems. But -- scalability aside -- there has always been a feeling that if it worked in development, it should work in production. The Web has turned that notion on its head.
Nicholas Whitehead, a senior architect at a large payroll company, framed the issue: "The implication has been that you will not encounter any issues in production that you didn't encounter in development."
"But," he continued, "I think anyone that has done a Web application that has more than two moving parts knows that's not true."
The environment Whitehead works in is one with applications running on Java application servers calling to Oracle databases. The servers are often linking up with some kind of legacy back end where, he muses, "all the real work is done."
It is the type of complex environment, Whitehead said, where a Java server is used as a central business management component.
"The applications I work use a lot of middleware," he said. "They are either Web-facing or internal." Whitehead does not look at these as composite applications, but rather as "multiple possible points of failure."
He said that until recently, there had been a tendency on IT's part to assume that a simple set of monitoring tools was adequate -- tools that provided statistics on CPU usage, disk space consumed, and whether the host was up. In terms of what was monitored, the unspoken motto was that "anything else should have been vetted out prior to production." But, as Java Virtual Machines (JVMs) have come to ride atop operating systems on application servers, much more visibility has been required.
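The kind of JVM visibility Whitehead describes goes beyond host-level checks. As a rough illustration (stock JMX, not Introscope's bytecode instrumentation), the standard java.lang.management API can surface basic JVM internals such as heap usage and live thread count from inside a running application:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class JvmSnapshot {
    // Used heap in bytes, read via the standard MemoryMXBean.
    static long usedHeapBytes() {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        return heap.getUsed();
    }

    // Number of live threads, read via the standard ThreadMXBean.
    static int liveThreadCount() {
        return ManagementFactory.getThreadMXBean().getThreadCount();
    }

    public static void main(String[] args) {
        System.out.println("heap used (bytes): " + usedHeapBytes());
        System.out.println("live threads:      " + liveThreadCount());
    }
}
```

Commercial tools build far richer views on top of this sort of data, but even these two beans already make the "black box" less opaque than a CPU-and-disk host check.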
Gaining visibility via CA's Introscope
Whitehead's team uses CA Wily Introscope tools for application performance management. These can provide information on deployed systems in action. A special benefit has been insight into the action of JVMs on application servers, he said.
"The JVM was always a black box. The tools make the black box go away," Whitehead said.
He said the Introscope tools ease the task of monitoring stats on behavior within the JVM. "Now we can locate root causes for slowdowns," he said.
For example, the tools can show the effect of a given method on an EJB in real time and historically. The performance management software can show the average elapsed time for a method, the volume of requests, how many exceptions the EJB threw, and so on. "You get rich stats," Whitehead said.
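Introscope gathers these numbers through bytecode instrumentation; as a hand-rolled sketch of the same idea (hypothetical class, not the product's API), the three stats named above — call count, average elapsed time, and exception count — can be captured by wrapping each invocation:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Supplier;

// Hypothetical per-method statistics holder: invocation count,
// cumulative elapsed time, and exception count.
public class MethodStats {
    private final AtomicLong calls = new AtomicLong();
    private final AtomicLong totalNanos = new AtomicLong();
    private final AtomicLong exceptions = new AtomicLong();

    // Wraps a call, recording elapsed time and any thrown exception.
    public <T> T record(Supplier<T> body) {
        long start = System.nanoTime();
        try {
            return body.get();
        } catch (RuntimeException e) {
            exceptions.incrementAndGet();
            throw e;
        } finally {
            calls.incrementAndGet();
            totalNanos.addAndGet(System.nanoTime() - start);
        }
    }

    public long callCount()      { return calls.get(); }
    public long exceptionCount() { return exceptions.get(); }

    public double avgMillis() {
        long n = calls.get();
        return n == 0 ? 0.0 : (totalNanos.get() / 1.0e6) / n;
    }
}
```

A real agent injects this kind of timing around EJB methods automatically, so application code never calls a wrapper by hand.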
What performance issues must Whitehead watch for?
For one, Java applications may be running well, but interaction with back ends can be a problem. Java applications calling to legacy hosts are sometimes impacted by WAN saturation. Regionally distributed back ends can incur latency.
Moreover, different back ends behave differently, and they do not always conform to a specification. If one server sits on the far side of the WAN and another does not, developers should have different expectations for the elapsed time of the same call, he said.
What Introscope provides, Whitehead noted, is a way to instrument third-party Java libraries so the team can demarcate statistics by server name. It can show results for one JVM that is in turn dealing with several mainframes.
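Bucketing timings by back-end name is the core of that demarcation. A minimal sketch of the idea (hypothetical class and server names, not Introscope's implementation) keeps one counter pair per server, so one JVM's calls to several mainframes can be reported separately:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical registry that buckets call timings by back-end server name.
public class BackendTimings {
    private static final class Bucket {
        final AtomicLong calls = new AtomicLong();
        final AtomicLong totalMillis = new AtomicLong();
    }

    private final Map<String, Bucket> buckets = new ConcurrentHashMap<>();

    // Record one completed call against the named back end.
    public void record(String serverName, long elapsedMillis) {
        Bucket b = buckets.computeIfAbsent(serverName, k -> new Bucket());
        b.calls.incrementAndGet();
        b.totalMillis.addAndGet(elapsedMillis);
    }

    // Average elapsed time for that back end, 0.0 if never seen.
    public double averageMillis(String serverName) {
        Bucket b = buckets.get(serverName);
        if (b == null || b.calls.get() == 0) return 0.0;
        return (double) b.totalMillis.get() / b.calls.get();
    }
}
```

With per-server averages in hand, a slow mainframe on the far side of the WAN stands out immediately instead of being averaged into one opaque number for the whole JVM.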
What the tools do with gathered data is important, Whitehead said.
"Collecting performance data in itself is not that difficult. You can find free tools that allow you to collect that information," he said. "One thing 'Wily' does is give you a system that can absorb and report the information."