In part one of our interview with author Michael J. Sydor, we learned that the greatest potential for APM initiative failure is inadequate scope. In part two, we talk more about staffing issues. In his new book, APM Best Practices: Realizing Application Performance Management, Sydor says staffing has the second greatest potential for APM initiative failure and may be the most controversial issue. Find out why in this second part of a three-part series of interviews.
SSQ: You mention that staffing has the second greatest potential for APM initiative failure and is perhaps the most controversial. Why do you think that is?
Sydor: It is simply because, frequently, all of the expertise ends up in a single individual. And personnel can be very fluid -- over a couple of years, you can find yourself with no one. This is often the case for small to medium-sized initiatives. The solution is easy: always have at least two people who can deliver the same skills. Organizationally, this is not always an easy goal to realize. The gap here is an understanding of the APM lifecycle, which parallels that of the application lifecycle.
Simply stated, for a small initiative, APM is just not a full-time job. Deploying an agent or transaction definition is a 10-20 minute affair. Developing a baseline is about an hour. Triage of an urgent problem is one to four hours. If you only have three to five applications, you may not have enough work to keep a dedicated APM specialist busy.
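Sydor's time estimates are easy to sanity-check with a back-of-envelope calculation. In the sketch below, the task durations come from the interview, but the monthly task frequencies (one deployment and one baseline per application, two urgent triages) are illustrative assumptions, not figures Sydor gives:

```python
# Back-of-envelope monthly APM workload for a small application portfolio.
# Durations are the upper bounds Sydor cites; the per-app monthly task
# frequencies below are assumed for illustration only.

TASK_HOURS = {
    "agent_deploy": 20 / 60,  # 10-20 minutes per deployment (upper bound)
    "baseline": 1.0,          # about an hour per baseline
    "triage": 4.0,            # 1-4 hours per urgent problem (upper bound)
}

# Assumed frequency of each task, per application, per month
MONTHLY_FREQUENCY = {"agent_deploy": 1, "baseline": 1, "triage": 2}

def monthly_apm_hours(num_apps: int) -> float:
    """Estimated total APM hours per month across the portfolio."""
    per_app = sum(TASK_HOURS[task] * count
                  for task, count in MONTHLY_FREQUENCY.items())
    return per_app * num_apps

for apps in (3, 5):
    print(f"{apps} apps: ~{monthly_apm_hours(apps):.0f} hours/month")
```

Even with these generous upper-bound assumptions, a three-to-five-application portfolio generates well under a full-time month of work, which is the point of the shared-resource model discussed next.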
It should be obvious that a shared resource model is the answer, and APM staffing scales very well. But a shared APM resource model is often difficult to implement. The existing silos get in the way. Questions about who owns the technology and which business unit provides the staffing get to be very difficult topics. And until the technology proves itself, you just cannot assume that a corporate standard will be established anyway.
So I lead with the same basic principles. Keep it small. Keep it short. Show value and plan to evolve as sponsorship increases. In a large corporation it is not unusual to find a dozen or more separate and unrelated APM initiatives. These will eventually condense into a few or single group responsible for APM. Every organization will go through this evolution -- at their own pace. The first ones to “show value” always rise to the top. You will find a number of examples in the book to help you understand the variations.
SSQ: Testing-as-a-Service and crowdsource models that take advantage of performance testing on the cloud are becoming more popular. What do you think are the pros and cons of these models vs. performance testing in-house?
Sydor: There is a definite opportunity for APM-as-a-service. I’ve identified Application Audit and Triage, in addition to QA, as services that would lend themselves to this model. There are some limitations in accessing the performance data remotely that need to be addressed depending on the nature of the relationship. The testing service is more often being explored because this is already a natural outsource point and many of the core APM processes will ensure that baseline and acceptance criteria are mutually defined and acknowledged. But I’m still just talking about it with prospects and cannot reference anyone who is operating in this model today.
I think the resistance is simply due to the tight control of performance data that has been the norm for many years now. Sharing data is just hard for many organizations to do. And it’s not just distributed data (application server) that you need to solve difficult problems. You need to know what the mainframe was doing. You need to know what the network was doing. You need to know what your trading partners are experiencing. It all comes down to visibility; you can’t manage what you can’t measure. And in this case, you can’t outsource what you are reluctant to share.
Moving the scope of an APM initiative from monitoring to software quality will also be a means to overcome the reluctance to share performance information. But you will need a larger level of sponsorship, which itself can be difficult to secure. Extending the APM initiative to both QA and production will help you create opportunities to move the conversation onto software quality.
SSQ: What are the most important things you think organizations should look for in tools when they are making purchasing decisions for APM tools?
Sydor: I try to keep the conversation away from features and functionality and focus instead on what you expect to do with the tools. The most important opportunity for this is during the definition of the pilot exercise, to which I devote a full chapter. You need to use the pilot to try out your process and your expectations with the tool. Does it make sense for a single project, or can it be used elsewhere? It seems like a scope violation -- you're not responsible for other potential users. But those other users are the ones who can help an APM initiative grow in size and importance. You need to confirm whether the tool has a future outside of your immediate needs, or whether it is going to be a dead-end. In a sense, that's exactly what your purchasing department is trying to do.
Features and functionality are a big distraction from exploring how, and ensuring that, your team can show value for the investment. Make sure you understand what services or capabilities you expect to deliver with the tool. And make that the goal of your pilot exercise.
Michael Sydor is an engineering services architect for CA Technologies. With more than 20 years in the mastery of high performance computing technology, he has significant experience identifying and documenting technology best practices, as well as designing programs for building, mentoring and operating successful client performance management teams.
APM Best Practices: Realizing Application Performance Management is available at many leading book retailers, including Apress, Amazon.com, Barnes & Noble, Borders, Powell's, Safari, and Springer, among others.