
Performance testing and experimental design

One-factor experimental design may have a place in performance testing, but testers should be careful with this approach. Testing expert Mike Kelly provides tips for those interested in experimental design.

What is the idea behind one-factor experimental design approach for performance testing? Is this an engineering approach? Where can I find more information on this?

For a look at designing experiments, I'm going to refer you to James Bach's blog post on the topic. You'll need to read the comments section for all the gory details. In that post (and subsequent comments), there are some good references (the first one in particular):

  • Three Romeos And A Juliet: An Early Brush With Design Of Experiments by Ravindra Khare
  • The Research Methods Knowledge Base has a lot of content, along with a specific posting on Experimental Design
  • A collection of articles by Curious Cat Management Improvement Connections
For each of those references, pay close attention to James' comments. His color commentary can often help clarify what you're looking at.

Now, how does that apply to performance testing? To answer this, I turned to fellow SearchSoftwareQuality.com expert Scott Barber for some help. Scott's the performance testing book author; I just play one on TV. Here's what Scott had to say:

OFAT (one factor at a time) is simply untenable in a multi-variable problem where the variables are interdependent.

Consider changing the "one" factor of "increase load." I can't imagine how one could do that without also adding test data, changing how tasks line up over time, changing the duration of the test, and so forth. It's not that one factor isn't valuable or desirable; it's that we have neither the time nor the ability to understand all of the interdependencies between variables well enough to make it useful. How many times would we have to run a test to be sure we had statistically significant data before running a second test with one factor changed -- then run that a bunch of times, then change one more factor, and so on?

I guess it depends on how one characterizes "factor" and what degree of change counts as "change." I'd say that it's far more effective to explore one system characteristic at a time (assuming you have a stable enough system that you understand well enough to isolate a characteristic) than one test design factor at a time.
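Scott's coupling argument can be sketched with some toy arithmetic. Everything below is illustrative -- the parameter names and the derivation rules are assumptions for the sketch, not any real tool's API:

```python
# Illustrative sketch: changing the "one" factor of load drags other
# test-design factors along with it. All names and rules here are
# hypothetical, chosen only to show the coupling.

def derived_factors(virtual_users, iterations_per_user=20, think_time_s=5.0):
    """Derive the factors that change as a side effect of changing load."""
    total_requests = virtual_users * iterations_per_user
    return {
        # assumption: each request consumes a unique test-data row
        "test_data_rows": total_requests,
        # minimum wall-clock time for every user to finish its iterations
        "min_duration_s": iterations_per_user * think_time_s,
        "total_requests": total_requests,
    }

baseline = derived_factors(virtual_users=100)
doubled = derived_factors(virtual_users=200)

# The "one" change to load also doubled the test data needed:
assert doubled["test_data_rows"] == 2 * baseline["test_data_rows"]
```

In other words, under even these simple assumptions there is no way to change load alone: the test data and the minimum run duration move with it.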

To build on that: oftentimes when I'm doing performance testing, I try to vary one factor at a time (or, more accurately, what Scott calls a characteristic). I often test against specific models that my tests attempt to replicate, so I'll select one factor in my model and change it in what I hope is a deterministic way. However, performance testing is often so complex that strict OFAT testing can be impossible. Changing one factor often necessitates changing another factor related to performance, even if we don't know it.
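When varying one model characteristic at a time does fit, it usually looks like the sweep below: hold the workload model fixed and change a single parameter per run. The model fields and the run_test() stub are hypothetical stand-ins for a real load-test harness:

```python
# A minimal OFAT-style sweep over a workload model. The model fields and
# run_test() are hypothetical placeholders, not a real tool's API.

workload_model = {
    "arrival_rate_per_s": 10,   # the one factor we intend to vary
    "payload_kb": 4,
    "cache_hit_pct": 80,
}

def run_test(model):
    # Stand-in for executing a load test and measuring a result;
    # a real harness would drive the system under test instead.
    return model["arrival_rate_per_s"] * model["payload_kb"]

results = {}
for rate in (10, 20, 40):
    # copy the baseline model, changing exactly one parameter
    trial = dict(workload_model, arrival_rate_per_s=rate)
    results[rate] = run_test(trial)
```

Each run differs from the baseline in exactly one model parameter -- at least on paper; as the surrounding discussion notes, the real runs may still differ in ways the model doesn't capture.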

Software testing resources:

  • Testing for performance, part 1
  • What to include in a performance test plan
  • Why do we test for performance?

So what does all that mean? Practically, I think it means there is a place for thinking about testing one factor at a time when you plan your performance testing. I'm fairly pragmatic in my approach to performance testing, so if an idea from experimental design or observational study inspires you to think about your performance testing in a different and useful way, go with it.
