For a look at designing experiments, I'm going to refer you to James Bach's blog post on the topic. You'll need to read the comments section for all the gory details. In that post (and subsequent comments), there are some good references (the first one in particular):
For each of the references, pay close attention to James' comments. His color commentary can often help clarify what you're looking at.
Now, how does that apply to performance testing? To answer this, I turned to fellow SearchSoftwareQuality.com expert Scott Barber for some help. Scott's the performance testing book author; I just play one on TV. Here's what Scott had to say:
OFAT is simply untenable in a multi-variable problem where the variables are interdependent.
Consider changing the "one" factor of "increase load." I can't even imagine how one could do that without also adding test data, changing how tasks line up over time, changing the duration of the test, and so forth. It's not that one factor isn't valuable or desired; it's that we neither have that kind of time, nor the capability of understanding all of the interdependencies between variables well enough to make it useful. How many times would we have to run a test to be sure that we have statistically significant data before running a second test with one factor changed? Then run that a bunch of times, change one more factor, and so on.
I guess it depends on how one characterizes "factor" and what degree of change counts as "change." I'd say that it's far more effective to explore one system characteristic at a time (assuming you have a stable enough system that you understand well enough to isolate a characteristic) than one test design factor at a time.
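Scott's worry about run counts is easy to make concrete. Here's a back-of-the-envelope sketch; the factor counts, level counts, and repetition numbers are invented for illustration, not taken from any real test plan:

```python
def run_counts(factors, levels, repetitions):
    """Compare rough run counts for two experiment designs.

    OFAT: one baseline configuration, plus each factor swept through its
    remaining (levels - 1) settings, each repeated for significance.
    Full factorial: every combination of every factor level, each repeated.
    All inputs here are hypothetical.
    """
    ofat = repetitions * (1 + factors * (levels - 1))
    full_factorial = repetitions * levels ** factors
    return ofat, full_factorial

# 5 interdependent factors, 3 settings each, 10 repeats per configuration
ofat, full = run_counts(factors=5, levels=3, repetitions=10)
print(ofat)   # 110 runs just to sweep each factor once against a baseline
print(full)   # 2430 runs to cover every combination
```

Even the cheaper OFAT sweep says nothing about interactions between factors, which is exactly the information Scott argues we'd need.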
To build on that: oftentimes when I'm doing performance testing, I try to vary one factor at a time (or, more accurately, what Scott calls a characteristic). I often test against specific models that my tests attempt to replicate, so I'll select one factor in my model and change it in what I hope is a deterministic way. However, performance testing is often so complex that OFAT testing can be impossible: changing one factor often necessitates changing another factor related to performance, even if we don't know it.
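That "select one factor in my model and change it" step can be sketched in code. This is a minimal illustration only; the workload parameter names are invented and don't come from any particular load-testing tool:

```python
# Hypothetical baseline workload model; the parameter names are made up
# for illustration, not drawn from a real tool or test plan.
BASELINE = {
    "concurrent_users": 50,
    "think_time_sec": 5,
    "test_duration_min": 30,
}

def ofat_variations(baseline, factor, levels):
    """Yield one test configuration per level of a single factor,
    holding every other factor at its baseline value."""
    for level in levels:
        config = dict(baseline)
        config[factor] = level
        yield config

# Vary only load, keeping think time and duration nominally fixed.
for cfg in ofat_variations(BASELINE, "concurrent_users", [50, 100, 200]):
    print(cfg)
```

The catch the paragraph above describes is that "nominally fixed" may be a fiction: doubling `concurrent_users` may silently require more test data, different pacing, or a longer run, so the other factors move whether you intended them to or not.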
So what does all that mean? Practically, I think it means that there is a place for thinking about testing one factor at a time when you plan your performance testing. I'm fairly pragmatic in my approach to performance testing, so if an idea from experimental design or observational study inspires you to think about your performance testing in a different and useful way, go with it.