Java developer groups often nurture reputations as technology seers who work on the cutting edge, creating new applications. As with others before them, the dirty little secret is that they must spend a large amount of time maintaining existing applications.
Getting to the right problem fast -- focusing first on the typical largest contributors to bottlenecks, and sitting down with cross-discipline teams to review feedback from software performance monitors -- is now part of the application life cycle management process.
By some industry analyst estimates, developers spend a minority of their time designing and coding. How do they spend the bulk of their time? Resolving application problems.
The task has become more complex as Java -- like .NET -- has become a platform for integrating important composite applications and Web services. Trouble spots in these hybrid arrays can be hard for operations staff to pinpoint, and developers versed in the applications are commonly called in to do maintenance.
This may make for good team building, but, most people agree, the real job of the developer should be development.
A consensus holds that the maintenance task can be reduced by using performance management software tools throughout the software development life cycle (SDLC) -- starting in development, including pre-production and production testing, and spanning the actual applications in production.
"Whether you like it or not, you are going to support the application in production," said Bernd Harzog, CEO for analyst group APM Experts. "It is a bitter pill to swallow for the team."
"If it is an important application, the IT folks are not going to be able to support the application in production. You will need to create a team to support the application in production, or the problems in production will upset your ability to build the next release. And that is bad," he said.
Performance management point-guard
All application performance management starts with a view of the operating system and how the application utilizes it. With Java, a layer -- the Java virtual machine (JVM) -- sits atop the OS, and for many high-leverage applications the JVM itself must be included in a performance read-out. Above the JVM sit the business logic and data-oriented middleware. Here, Java code, methods and objects must be measured for performance issues.
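A JVM-level read-out of the kind described above can be taken from inside the process itself. The following is a minimal sketch using the standard `java.lang.management` API (the class name `JvmReadout` and the output format are illustrative, not from any particular vendor's tool):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

/** Illustrative JVM health snapshot via the standard platform MXBeans. */
public class JvmReadout {

    /** Returns a one-line summary of heap usage and GC activity. */
    public static String snapshot() {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        StringBuilder sb = new StringBuilder();
        sb.append("heap.used=").append(heap.getUsed())
          .append(" heap.max=").append(heap.getMax());
        // One entry per garbage collector (names vary by JVM and GC choice).
        for (GarbageCollectorMXBean gc
                : ManagementFactory.getGarbageCollectorMXBeans()) {
            sb.append(' ').append(gc.getName().replace(' ', '_'))
              .append(".count=").append(gc.getCollectionCount());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(snapshot());
    }
}
```

Commercial agents go much further -- instrumenting individual methods and objects -- but this is the raw layer they all build on.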
Tools are available so that potential Web loads can be simulated or synthesized. Once an application, which typically hits on a database, is built, agents may be placed at critical transaction paths to measure production system performance with an eye toward the client experience.
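The agent idea above can be sketched in a few lines: wrap each critical transaction in a timer, then report a high percentile of the recorded latencies as a proxy for the client experience. This is a toy in-process sketch, not any vendor's agent; the class and method names are assumptions:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.Callable;

/** Toy "agent" that times transactions on a critical path. */
public class SyntheticLoad {
    private final List<Long> latenciesNanos = new ArrayList<>();

    /** Runs a transaction, recording its wall-clock duration. */
    public <T> T time(Callable<T> transaction) throws Exception {
        long start = System.nanoTime();
        try {
            return transaction.call();
        } finally {
            latenciesNanos.add(System.nanoTime() - start);
        }
    }

    /** e.g. percentileNanos(95) approximates a worst-case client experience. */
    public long percentileNanos(int pct) {
        List<Long> sorted = new ArrayList<>(latenciesNanos);
        Collections.sort(sorted);
        int idx = Math.min(sorted.size() - 1, (sorted.size() * pct) / 100);
        return sorted.get(idx);
    }
}
```

Real load tools drive such timers with thousands of synthesized requests; the principle -- measure at the transaction boundary, summarize by percentile -- is the same.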
The Java application server is typically the hub of activity.
"Things are usually [architected] in a hub-and-spoke architecture, with the application logic and Java -- or .NET server -- at the center," said Jeff Cobb, senior vice president of product strategy at the Wily Division of CA. "The Java server is the point guard of the application."
"The applications are very complicated," Cobb added. "If you can get a view into the health of [the server], you have a lot of information."
In earlier times, IT team members knew whom to go to when something broke. Now, said Cobb, there is more dependence on developers who understand how things fit together.
Like other tool makers, CA's Wily division aims for its tools to pinpoint problem areas up front so that developers do not spend an inordinate amount of time searching for the source of a problem.
It is bad for the developers and the lines of business they represent if "the developer is not developing," Cobb said.
Getting to the right problem fast
Quickly narrowing down the problem is key, said John Heintz, principal consultant at New Aspects of Software, which specializes in "high-leverage" technology architecture, deployment and operations issues. Among his colleagues at New Aspects are individuals involved with creating the open-source GlassBox monitor based on AspectJ. Use of GlassBox is often part of New Aspects' consulting practice.
GlassBox's creators adhered to the theory that quickly thinning out the problem set is the best way to solve a problem. Heintz says the consultancy implements the "80/20 Rule," trying to identify the 20% of causes behind 80% of the issues.
The litany of such problems reads like a hall of fame of Java performance faults. "GlassBox uses [Aspect-Oriented Programming] to identify problems such as too many database calls or a remote call that was too long. It identifies JDBC queries that are too slow. It identifies a slow database call or too many fast ones," Heintz said.
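The heuristics Heintz lists -- too many database calls, or one that runs too long -- reduce to a small budget check per transaction. The sketch below illustrates that idea in plain Java; it is not GlassBox's actual API (which weaves such checks in via AspectJ), and the class name and thresholds are assumptions:

```java
/**
 * Illustrative per-transaction budget in the spirit of GlassBox's
 * heuristics (not its actual API): flag a transaction that issues
 * too many database calls, or any single call that runs too long.
 */
public class QueryBudget {
    private final int maxCalls;
    private final long maxMillisPerCall;
    private int calls;
    private boolean slowCallSeen;

    public QueryBudget(int maxCalls, long maxMillisPerCall) {
        this.maxCalls = maxCalls;
        this.maxMillisPerCall = maxMillisPerCall;
    }

    /** Record one database call and how long it took. */
    public void record(long elapsedMillis) {
        calls++;
        if (elapsedMillis > maxMillisPerCall) {
            slowCallSeen = true;
        }
    }

    /** "Too many fast ones": the N+1-query smell. */
    public boolean tooManyCalls() { return calls > maxCalls; }

    /** "A slow database call": one query blew its budget. */
    public boolean slowCall() { return slowCallSeen; }
}
```

An AOP monitor simply arranges for `record()`-style bookkeeping to run around every JDBC call without touching the application code.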
Heintz added, "It measures the main things that are likely to be a problem. It takes away complexity, giving you 'a heuristic best guess.' " But he does remark that tools that measure for more than just the most-likely problems are important, too. Sometimes the root cause of an application problem is not low-hanging fruit in the "most-likely" category. Probe, profiler, monitor and agent makers have products available to help.
According to several observers, it is vital that problems be handed over to the right people as quickly as possible. A key, Heintz said, is "to be able to identify who the people are that can solve the problem."
To that end, Heintz suggests considering performance management from another point of view. He refers to "Lean" process principles, popularized by car maker Toyota, when considering performance management planning.
What is an important measure? It may not be how your systems perform, but how your organization responds to performance problems. For management, Heintz surmises, the key is to measure the total cycle time for solving a problem.
Agile processes can help avoid performance "finger pointing." "If development uses an agile format, they are likely interacting with QA in a much more diplomatic way than I have seen in the past, because QA has to test their code on a much more frequent basis," said Derrek Seif, product manager for JProbe at Quest.
Java is entering a new era in terms of performance. The language itself takes better care of some issues that contribute to performance problems, notably garbage collection.
"From a performance perspective, many of the problems have been reasonably solved. But many developers may be working with false assumptions about what are good practices in Java," Heintz said.
Knowing when to create object pools that save the cost of object creation, and when not to, is such an area.
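The pooling trade-off can be made concrete with a toy pool. A minimal sketch, with assumed names (`SimplePool`, `acquire`, `release`): pooling pays off only when objects are genuinely expensive to create (connections, large buffers), while for cheap objects the modern JVM's allocator and garbage collector usually win -- the kind of once-good practice developers are advised to re-measure:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

/**
 * Toy object pool. Worthwhile only when creation is expensive;
 * for cheap objects, plain allocation on a modern JVM is usually
 * faster and simpler. Measure before pooling.
 */
public class SimplePool<T> {
    private final Deque<T> free = new ArrayDeque<>();
    private final Supplier<T> factory;

    public SimplePool(Supplier<T> factory) {
        this.factory = factory;
    }

    /** Reuse a released object if one is available, else create one. */
    public T acquire() {
        T obj = free.poll();
        return obj != null ? obj : factory.get();
    }

    /** Return an object to the pool for reuse. */
    public void release(T obj) {
        free.push(obj);
    }
}
```

Note the hidden costs a real pool must also handle -- thread safety, state reset on release, bounded size -- which is precisely why measurement, not habit, should decide whether to pool.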
"With Java, there are a lot of performance problems that have been solved. But developers need to re-educate themselves," Heintz said. And that requires measurement of performance.
Complexity and upside
The complexity of enterprise Java applications can seem daunting. But there is something of an upside: a rich array of performance management product vendors has grown up to address it.
A fair number of best-of-breed Java management tool vendors have emerged. They focus on this area specifically. Several have been acquired by larger companies with broader management offerings. And some larger performance management companies that passed on acquisition have fielded their own Java-oriented extensions. Today, Java performance tools are offered by BMC Software Inc., CA, Compuware Corp., Hewlett-Packard, IBM Corp., Sun Microsystems Inc., Borland, ClearApp, dynaTrace Software, Quest and others. Open-source alternatives are fertile, too.
Harzog notes that the developer community benefits from a vast array of available tools. "If you are writing for J2EE servers, the first thing is you are lucky, because, unlike other categories, there are lots of good tools to choose from."