When dealing with performance requirements, you need to look at the bigger picture. Performance requirements are only one part of the larger discipline of "performance engineering," which spans business, development and operations organizations.
In addition to expanding the scope to consider these three constituents, you need to also consider how the situation changes over time. In other words, you need to have a lifecycle perspective when defining performance requirements. Only by expanding your performance requirements focus in these ways can you hope to produce systems that are truly aligned with the business need.
When you consider system performance more broadly in this way, you'll discover that the sources of performance problems fall into one of three categories:
- The performance requirements from the business were deficient
- Development didn't implement them correctly
- Operations didn't deploy and/or configure them properly
It's also not uncommon to see some combination of those as the cause.
Like all requirements, performance requirements are really a statement of how we intend to change the current state of affairs. In today's world especially, these "intentions" need to be supported by a business case that articulates the return on investment expected if we proceed with implementing these changes.
Therefore, performance requirements need to begin with the business and be based on the fundamental relationship that business value is the ratio of business benefit to cost. Using a lifecycle perspective, cost has two dominant contributors: development cost and operational cost. For most organizations today the cost of application performance is largely borne by operations. When application performance falls below what is needed in production, the first solution is typically to purchase more computing resources -- additional memory, bigger and more powerful servers, faster network devices, and so on.
Instead of this reactionary approach to the problem, more mature organizations recognize that a "performance investment" in the development side, earlier in the application's lifecycle, typically results in a much lower overall cost to achieve the same business benefit.
Performance requirements essentially originate either from the business or from operations. Those that originate from the business exist to take advantage of some business opportunity or objective, for example, "We have a goal to improve sales by 17% in our retail cleaning products."
- For this goal, performance factors need to be determined. (Example: Need to increase productivity of the Internet order group)
- For these performance factors, specific performance metrics should then be identified. (Example: Need to decrease the average time to place an online order)
- One then needs to determine the components of the metrics. (Example: Need to reduce the average time per screen in the online order process)
These factors, metrics and components need to be quantified, which then drives the specific performance requirements. In addition to the above, all stakeholders who could be impacted need to be identified to ensure the performance requirements that result consider all their needs and are optimal for the business as a whole. For example, the affected system(s) may be used by people in a different department, impacting their processes as well.
Performance requirements that originate from operations are typically concerned with cost reduction, time savings, resource conservation, efficiency improvement or some specific performance deficiency. When faced with these gaps in needed performance, there really are only three ways to address them:
- Make unplanned upgrades or make expensive resource purchases to bring performance up to acceptable levels
- Have the business adjust and make compromises to accommodate the poorly performing application
- Make software enhancements to improve performance (i.e., performance requirements are created)
Performance requirements are different in some very key respects from, say, functional requirements. Perhaps more than any other requirement types, performance requirements demand the collaboration and alignment of business, operations and development. Here are a few reasons why they are different:
The systems/deployment perspective
Performance is all about the users' (or an external system's) perception of how long it takes your system to accomplish a service request or task. This duration depends on the combination of software running on a specific set of hardware. So, the first difference is that unlike with functional requirements, the hardware environment plays a prominent role in establishing or achieving performance requirements. Assumptions and constraints about the hardware configurations, including the processor platform, operating system, virtual machines, memory and network (LAN or WAN) need to be clearly identified.
As with all requirements, how you intend to test them needs to be understood up front. With performance requirements, this means you need to consider how you will test from a systems perspective. To be done properly, this clearly requires collaboration with operations.
A lifecycle perspective
In addition to considering the business and operations during the specification of performance requirements, we as an industry need to do a better job considering performance requirements from a lifecycle perspective. We expect the application to have a reasonable life-span, and therefore need to anticipate and consider the future conditions in which the application will be expected to perform.
In general, we expect businesses to grow and prosper and, as a result, workloads to increase. We need to project these as best we can to come up with target performance levels that we want to design into the product. This doesn't mean that we necessarily need to build a system to perform at these levels today, but perhaps strike a balance. An example of this is building to performance targets we expect at the system's "half-life" but making design accommodations to easily support future performance enhancements. Since no one can predict the future, this "hedge your bets" approach can be a wise compromise. To be done properly, this perspective clearly requires collaboration with business.
The workload/time perspective
Performance deals with interactions with the application over time. Because there are so many factors that affect performance, and because these change over time, most requirements have to be expressed statistically. That means the best that can be done is to assure the business that the application will deliver the needed performance a certain percentage of the time, under certain conditions.
Example: "The response time of the system for transaction X will be less than one second 95% of the time, given the assumptions of section Y." This is the style in which performance requirements are typically written and tested. Later, during operations, statistics can be collected (via instrumentation) to ensure the system is performing as expected. To be done properly, this perspective clearly requires collaboration with both business and operations.
For many who continue to search for the next new technology that will solve their performance issues, a good part of the solution is under their nose. Getting back to basics when assessing performance requirements quality can go a long way.
There are various taxonomies out there for assessing the "qualities" of requirements from standards organizations like the IEEE. One very straightforward and practical approach is SMART [Keepence et al.], which states that any system requirement should be Specific, Measurable, Attainable, Realizable and Traceable. Applying one of these basic approaches with rigor, and only moving forward with performance requirements that get a "passing grade," will often necessitate crossing the barriers between development, business and operations -- a first step to gaining that elusive alignment between the groups. Then performance requirements can begin to fulfill their purpose -- to guide development in producing systems that, when deployed by operations, allow business to meet or exceed its objectives.