Application performance management is everywhere at once. No other discipline so fully spans the gamut of the software life cycle. As such, it is often the meeting place for teams comprising development, QA and operations personnel.
Does this sound like a recipe for finger pointing? Performance has, of course, been something of a blame game over the years. "It's a network problem!" "It worked on my machine!" "It's the database calls!" These old refrains are still heard.
But increasing use of automated testing, more event logging and monitoring, and better reporting have given enterprise teams something objective to work with when slogging through performance problems. Teams less frequently "throw hardware at the problem," and when they do, it is more often because added hardware is actually a valid solution.
As distributed heterogeneous systems have flourished, the causes of poor performance have become more complicated. As well, they have become more difficult to diagnose and fix. In turn, tools for test and performance management have become more varied.
Solving performance problems has not gotten any easier: distributed Web systems, at least compared with their mainframe forebears, are marked by many more "moving parts." The advent of the Web, which requires customers to take on the input role once assigned to "key operators," has caused even further disruption.
Performance, test tools available
Test tools that play a role in achieving optimal performance range from classic developer-oriented tools that simply ensure that software works -- unit testers, functional testers and regression testers -- to load, Web traffic and stress testers that simulate traffic before, during and after the tense "smoke test" that decides whether an application is ready to go into production. In recent years, the tool chest has come to include software that traces a transaction from user click to database update -- and back -- and tools that break down each task kicked off by a user interaction.
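The load-testing side of that tool chest can be sketched in a few lines. The following is a minimal, hypothetical example -- the `service_call` function stands in for a real request to the system under test -- showing the basic pattern such tools automate: fire concurrent requests, collect latencies, and summarize the results.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def service_call():
    """Stand-in for a request to the system under test (hypothetical)."""
    time.sleep(0.01)  # simulate server processing time
    return 200        # simulated HTTP status

def load_test(concurrency=8, requests=40):
    """Run `requests` calls across `concurrency` workers; collect latencies."""
    latencies = []

    def timed_call(_):
        start = time.perf_counter()
        status = service_call()
        latencies.append(time.perf_counter() - start)
        return status

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        statuses = list(pool.map(timed_call, range(requests)))

    ok = statuses.count(200)
    avg_ms = 1000 * sum(latencies) / len(latencies)
    return ok, avg_ms

ok, avg_ms = load_test()
print(f"{ok} successful calls, average latency {avg_ms:.1f} ms")
```

Commercial and open-source load testers add the pieces this sketch omits: ramp-up schedules, think times, distributed load generation and pass/fail thresholds.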
Probes and profilers are used in the developer group to identify bottlenecks created by overused objects, incorrectly applied methods or extravagant database calls. When problems are found as applications are tested for production, developers work with system administrators to identify areas of their code that may be causing problems.
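A profiler finds those bottlenecks by measuring where time is actually spent. This sketch uses Python's standard-library `cProfile` against a deliberately chatty function (`extravagant_query` is a made-up stand-in for an over-eager data-access layer) to show the kind of report developers read when hunting hot spots.

```python
import cProfile
import io
import pstats

def extravagant_query(n):
    """Simulates an over-chatty data-access layer: one 'call' per row."""
    return [str(i) * 10 for i in range(n)]

def handle_request():
    rows = extravagant_query(50_000)
    return len(rows)

# Profile one request and print the five most expensive calls
profiler = cProfile.Profile()
profiler.enable()
handle_request()
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()
print(report)
```

In the printed report, `extravagant_query` dominates cumulative time, which is exactly the signal a developer would take back to the data-access code.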
Monitoring application performance
In many cases, both before and after applications go into production, software agents are added to monitor activity -- tracking how system elements respond when the full system is exercised. Analysis and event correlation become key, and the software suites that report such activity can in themselves become complex -- and expensive.
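At its core, such an agent wraps application code, records timings, and raises events when a call exceeds a threshold. The class below is a minimal, hypothetical stand-in for that pattern (real agents instrument bytecode and correlate events across tiers; the function names here are invented for illustration).

```python
import time
from collections import defaultdict

class Monitor:
    """Toy stand-in for an APM agent: wraps functions, records timings."""

    def __init__(self, slow_threshold_s=0.03):
        self.timings = defaultdict(list)       # function name -> list of durations
        self.slow_threshold_s = slow_threshold_s
        self.events = []                       # (name, duration) for slow calls

    def instrument(self, fn):
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed = time.perf_counter() - start
                self.timings[fn.__name__].append(elapsed)
                if elapsed > self.slow_threshold_s:
                    self.events.append((fn.__name__, elapsed))
        return wrapper

monitor = Monitor()

@monitor.instrument
def lookup_customer():
    return "fast path"          # returns immediately

@monitor.instrument
def render_report():
    time.sleep(0.06)            # slow path an agent would flag
    return "slow path"

lookup_customer()
render_report()
print(dict(monitor.timings).keys(), monitor.events)
```

The analysis and event-correlation suites the article mentions sit on top of exactly this kind of raw timing stream, rolling thousands of such events into a picture of where the system is degrading.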
Systems and system-use patterns are always changing, so on occasion monitoring agents disclose emerging performance bottlenecks that require developers to take another look at their original code practices. Several tools today attempt to roll up problem reports so that issues can be assigned back more efficiently to the development team when its work contributes to the bottleneck.
The list of vendors offering general or specialized application performance management software is extensive. It includes BMC Software Inc., CA, Compuware Corp., Hewlett-Packard, IBM Corp., Sun Microsystems Inc., AVIcode Inc., Borland, ClearApp, Coradiant Inc., dynaTrace Software, Empirix, and Quest Software Inc. In addition, open-source tools such as PushToTest TestMaker and GlassBox are now found in the application performance management world.
Some of the larger companies buttressed their application management portfolios in recent years via notable acquisitions: HP with its purchase of Mercury Interactive, CA with its buy of Wily Technology, and Borland with its purchase of Segue Software. The consolidation trend saw a recent reversal when security and storage giant Symantec Corp. spun off the application performance management software concern it acquired along with backup specialist Veritas.
The spin-off re-established Precise Software, built largely on the Java management software Precise had created before becoming part of Veritas.
The spin-off of Precise is seen in some circles as a statement that the application development group's role in application performance remains crucial, perhaps even greater than the role of data center IT staff, the usual target for Symantec's backup and storage offerings.
Developer, architect performance areas
The people most versed in the application remain the developers, but they must work across organizational disciplines to measure performance. They are part of a larger group, but an important part.
This is in part due to the complexity of new, composite applications. "Today you can't tell the IT staff whom to go to when there is a production performance problem," said Jeff Cobb, senior vice president of product strategy for the Wily Division of CA. "There is more dependence on the developer who understands how things are supposed to fit together. But the developer doesn't work for IT."
Cobb indicated creating reports that quickly pinpoint developer performance areas of interest is a driving force behind CA's application performance management tool strategy.
Yet developers and architects have much to learn from production monitors placed on their creations. There are many places where things can go wrong in distributed applications centered on Java servers (perhaps including Web services or composite applications).
"It's like death by a thousand cuts," said Bernd Greifeneder, CEO of application performance diagnostics company dynaTrace. "Software architects very often don't know the mismatch between what they designed and how it behaves."
End-user experience as a performance indicator
There is a sea change in application performance management today, indicated Bernd Harzog, CEO of the APM Experts consultancy. Mainframe-centric tools evolved into Java- and .NET-centric management tools, but the user experience is quickly becoming the true arbiter of performance.
"Most of what we call application performance management has been around for a long time. The tools look at how applications use resources, and they see misuse of a resource as an indicator of a problem," Harzog said.
He continued: "That was how application performance management was done on the mainframe. That's the way Java probes started. They answered questions like 'Is it out of threads?' or 'Is it using too much memory?' But the big recent trend is to focus much more on the end-user experience."
Since the focus on end-user experience is fairly new, Harzog said, every avenue of application performance management needs to make progress on the end-user experience front. The need may become more marked, he added, as virtualization assumes a greater role in distributed systems and CPU clocks become harder to use as a gauge of performance.
To help to describe performance practices today, SearchSoftwareQuality.com has begun to engage a number of practitioners, experts and vendors in a dialogue on application performance management. Topics we plan to focus on in an ongoing series on application performance management include SOA performance, Ajax and rich Internet application (RIA) performance, .NET performance, and testing performance. The kick-off of the series is a look at Java server issues, highlighting how Java server developers work these days within the organization to create better applications.