After years of explaining why there is no universal or "one true" approach to testing application performance, I finally have a useful response for folks who ask questions like the following:
"Does anyone have performance testing methodology related to SDLC? Does it show the different phases?"
"Where can I find the standard process workflow for performance testing?"
I start with a response similar to this:
"Performance testing is a complex activity that cannot effectively be shaped into a 'one-size-fits-all.' or even a 'one-size-fits-most' approach. Projects, environments, business drivers, acceptance criteria, technologies, timelines, legal implications and available skills and tools simply make any notion of a common, universal approach unrealistic. That said, there are some activities that are part of nearly all project-level performance testing efforts. These activities may occur at different times, be called different things, have different degrees of focus, and be conducted either implicitly or explicitly. But when all is said and done, it is quite rare when a successful performance testing project does not involve at least a decision around those activities."
This response inevitably leads to the question: "What are those activities?" This question, at least in my opinion, has much more value. With this question, you can demonstrate that even though there is not a single, universal approach to performance testing, there are commonalities. Additionally, you can then engage the person asking the question in a discussion about how to effectively integrate those activities into the project he is asking about -- thus, in short order, guiding him through the creation of a customized approach to performance testing that gives his project a real chance of succeeding.
To help me answer the question "What are the most common activities conducted during successful performance testing projects?" I use the mnemonic CCD IS EARI, which stands for the following guideword heuristics:

Context
Criteria
Design
Install
Script
Execute
Analyze
Report
Iterate
Let's consider each of these heuristics briefly in turn.
Context: To have a deliberately successful performance testing project, as opposed to an accidentally useful one, both the approach to testing performance and the testing itself must be relevant to the context of the project. The project context includes, but is not limited to, the overall vision or intent of the project, performance testing objectives, performance success criteria, the development life cycle, the project schedule, the project budget, the available tools and environments, the skill set of the performance tester and the team, the priority of detected performance concerns, and the business impact of deploying an application that performs poorly. Without an understanding of those items, performance testing is bound to focus on the items that the performance tester or test team assumes must be important, which frequently leads to wasted time, frustration and conflicts.
Criteria: Performance acceptance criteria include requirements, goals, targets, thresholds and objectives related to both the application's performance and the performance testing sub-project. While many of those items will undoubtedly change during the project life cycle, keeping up with them will help to ensure that performance testing stays in sync with the overall priorities of the project. If you are unfamiliar with this particular characterization of performance criteria, I've defined them below:
Performance requirements: Criteria that are absolutely non-negotiable due to contractual obligations, service-level agreements (SLA) or fixed business needs.
Performance goals: Criteria that are desired for product release but may be negotiable under certain circumstances. These are typically, but not necessarily, end-user focused.
Performance testing objectives: These refer to data that is collected through the process of performance testing and that is anticipated to have value in determining or improving the quality of the product. However, these objectives are not necessarily quantitative or directly related to other stated performance criteria.
Performance targets: These are the desired values for resources of interest under a particular set of conditions, usually specified in terms of response times, throughput and resource utilization levels.
Performance thresholds: These represent the maximum acceptable value for resources of interest, usually specified in terms of response times, throughput (transactions per second), and resource utilization levels.
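The distinction between targets and thresholds becomes concrete when criteria are encoded as data that a test run can be evaluated against automatically. Below is a minimal sketch of that idea; the transaction names and the millisecond values are hypothetical illustrations, not recommended numbers.

```python
# Sketch: performance criteria as checkable data. Targets are the desired
# values; thresholds are the maximum acceptable values. All names and
# numbers here are hypothetical.
criteria = {
    "login":  {"target_ms": 500, "threshold_ms": 2000},
    "search": {"target_ms": 800, "threshold_ms": 3000},
}

def evaluate(transaction, measured_ms):
    """Classify a measured response time against its target and threshold."""
    c = criteria[transaction]
    if measured_ms <= c["target_ms"]:
        return "meets target"
    if measured_ms <= c["threshold_ms"]:
        return "misses target, within threshold"
    return "exceeds threshold"

print(evaluate("login", 450))    # fast enough to satisfy the goal
print(evaluate("search", 1200))  # acceptable, but worth investigating
print(evaluate("login", 2500))   # a failed requirement or SLA breach
```

Keeping criteria in one structure like this also makes it easier to keep up with them as they change over the project life cycle, since the tests and the reports both read from the same source.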
Design: Like any other type of testing, to ensure that performance tests collect the data of interest, represent intended situations and yield both meaningful and valid results, tests must be well-designed. A significant component of performance test design is determining, designing and creating data associated with the natural variances of application users. Whether the design is completed well in advance of or in line with test execution is relevant only as it relates to the context of the project.
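One common way to design in the natural variance of users is to model a weighted page mix and randomized think times rather than a fixed script. The sketch below illustrates the idea; the page names, weights, and think-time range are hypothetical assumptions, not a recommended workload model.

```python
# Sketch: modeling user variance in a test design. The page mix and
# think-time distribution below are hypothetical illustrations.
import random

random.seed(42)  # a fixed seed makes test runs repeatable and comparable

# Weighted page mix: most simulated users browse, fewer search or buy.
pages = ["home", "browse", "search", "checkout"]
weights = [0.4, 0.3, 0.2, 0.1]

def simulated_session(steps=5):
    """Return one user's page sequence with varied think times (seconds)."""
    session = []
    for _ in range(steps):
        page = random.choices(pages, weights=weights, k=1)[0]
        think_time = random.uniform(2.0, 10.0)  # humans pause between actions
        session.append((page, round(think_time, 1)))
    return session

print(simulated_session())
```

Whether such a model is built before or during test execution matters, as the article notes, only in relation to the context of the project.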
Install: This heuristic is actually short for "Install and Configure or Update Tools and the Load Generation Environment." Based on various performance criteria, project context and the design of your tests, you will need a variety of tools to generate load and collect data of interest. Additionally, to ensure that the test results and collected data represent what they are intended to represent, the load generation environment and associated tools must be validated to ensure that the act of data collection and/or load generation does not inadvertently skew the data or results.
Script: It is most likely that your test design will be implemented using a load generation tool that requires some degree of scripting. The act of scripting is, of course, extremely tool-specific. But no matter what tool you use, or how else you may choose to generate load, you will need to validate that, once implemented, the tests interact with the application in the manner intended by the test design, collect the intended data, and return meaningful and accurate data and results.
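The validation step described above can be built into the script itself, whatever tool generates the load. The sketch below shows the shape of that idea using plain Python threads; the `fake_request` stub stands in for a real call to the system under test so the example is self-contained, and all names are hypothetical rather than any particular tool's API.

```python
# Sketch: a minimal load-generation loop that validates as it runs.
# fake_request is a stub for the system under test; in a real script it
# would be an HTTP call or a tool-specific action.
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(user_id):
    """Stand-in for the system under test: returns (status, elapsed_ms)."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated service time
    elapsed_ms = (time.perf_counter() - start) * 1000
    return 200, elapsed_ms

def virtual_user(user_id, iterations=3):
    results = []
    for _ in range(iterations):
        status, elapsed_ms = fake_request(user_id)
        # Validate in-line: a script that silently times error pages as
        # successes returns data that looks plausible but means nothing.
        assert status == 200, f"user {user_id} got unexpected status {status}"
        results.append(elapsed_ms)
    return results

# Five concurrent virtual users, three iterations each.
with ThreadPoolExecutor(max_workers=5) as pool:
    all_results = list(pool.map(virtual_user, range(5)))

samples = [ms for user in all_results for ms in user]
print(f"collected {len(samples)} timing samples")
```

The point is not the mechanics of threading but the habit: every script should prove that it interacted with the application as designed before its numbers are trusted.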
Execute: Test execution, as it relates to performance testing, is the activity most people envision as "clicking the go button and babysitting machines." The fact is that test execution involves continually validating the tests and test environment, running new tests and archiving all of the data associated with test execution.
Analyze: Analyzing test results and collected data, whether to determine requirement compliance, track trends, detect bottlenecks or evaluate the effectiveness of tuning efforts, is crucial to the success of a performance testing project.
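A small illustration of why analysis matters: a single slow outlier can inflate an average while leaving the median almost untouched, so bottleneck detection usually looks at percentiles as well. The response times below are invented sample data, not results from any real test.

```python
# Sketch: summarizing collected response times. The sample data is
# hypothetical; one 900 ms outlier is planted to show why a single
# summary statistic can mislead.
import statistics

response_times_ms = [120, 130, 125, 140, 135, 900, 128, 132, 138, 127]

mean = statistics.mean(response_times_ms)
median = statistics.median(response_times_ms)
# quantiles(n=20) returns 19 cut points; index 18 is the 95th percentile.
p95 = statistics.quantiles(response_times_ms, n=20)[18]

print(f"mean={mean:.1f} ms, median={median:.1f} ms, p95={p95:.1f} ms")
# The outlier pulls the mean well above the median, while the 95th
# percentile exposes the tail that users actually experience.
```

Which statistics matter for requirement compliance or trend tracking is, again, a question the project's criteria and context should answer.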
Report: Reporting on the results and analysis is just as significant as the collection and analysis of the data. If the reports are not clear and intuitive for their intended audience, critical performance issues can go unresolved due to nothing more technical than failed communication.
Iterate: Iterating is virtually a given for any type of testing. Sometimes we iterate based on builds, defect resolutions or environment changes. The part that isn't always obvious is that iteration applies to each of these activities, because the project context, objectives and priorities have a habit of changing throughout the project life cycle.
Using these activities, or a similar set of activities named and grouped according to a team's established goals, process and terminology, it becomes relatively straightforward to arrange them into an approach that fits the existing project structure and then fill in any additional activities, tasks, approval gates or processes necessary to make the approach flow seamlessly within the project at hand.
Now if someone asks you what the "one true" approach to performance testing is, you can simply respond by saying, "Organize CCD IS EARI into a flow that fits your project."