
How can peak performance be achieved in non-functional testing during the integration test phase?

Integration testing definitions vary widely, even within a company, so it is important to know what type of testing you are considering before you begin, says expert Michael Kelly. In this response you will learn how to differentiate testing types and how to implement them effectively.

Do you have any tips for non-functional testing during integration testing? How can performance best be achieved at this level?

Your question doesn't indicate what you mean by integration testing, and it's been my experience that the term can mean different things to different people (even within the same company or on the same team). So for the purpose of my answer, I'm going to assume you simply mean testing where you're looking for disagreement between two or more parts of the system. Non-functional testing in that context (for me) often includes security, performance, testability, maintainability, and supportability.

When thinking about security at that level, it can be helpful to diagram out what information is moving between the various parts and asking yourself:

  • Does any of the data require authentication/authorization? Should it?
  • How is the information transported (SSL, S-HTTP, etc…)? Is it encrypted in any other fashion? Should it be?
  • Are there any checks for data integrity by any of the interacting parts of the system? Should there be?
  • Etc…

Asking these questions at this level (if they haven't been asked earlier) can save a lot of heartache down the road. I also find those things easier to test at the integration level because it's easier to leverage harnesses and tools to isolate traffic so you can pick things apart. It's also easier to model (which makes it easier for me to keep everything in my head).
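As a concrete illustration of the data-integrity question above, here is a minimal sketch of the kind of check two interacting components might share. The component names, payload, and key are hypothetical; the point is simply that the producer signs what it sends and the consumer can detect tampering, which is exactly the sort of seam an integration tester can probe with a harness.

```python
import hashlib
import hmac

# Hypothetical shared secret; assumes both components are configured
# with the same key out of band (e.g. via environment or a vault).
SHARED_KEY = b"integration-test-key"

def sign(payload: bytes) -> str:
    """Producer side: attach an HMAC so the consumer can detect tampering."""
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    """Consumer side: recompute the HMAC and compare in constant time."""
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

message = b'{"order_id": 42, "amount": 19.99}'
sig = sign(message)
print(verify(message, sig))                               # intact payload: True
print(verify(b'{"order_id": 42, "amount": 0.01}', sig))   # tampered payload: False
```

In a real integration test you would capture the traffic between the two parts, alter a field, and confirm the receiving side rejects it.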

With regards to performance, at this level I focus on baselining component-level or service-level performance for future tuning efforts. I work with the team to make sure we have the logging and instrumentation we need to get visibility into performance should we discover we have a problem. And then I might do some simple load/stress tests to see if there's any immediate tuning we might need to do (think low-hanging fruit). Then later in the project, when we start to see system-level performance issues, we have a set of performance profiles and logging to fall back on to better understand where our issues might be.
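A baseline like the one described above can be captured with very little tooling. This sketch times a stand-in component call (in practice it would be a call across a real service or module boundary; the function here is hypothetical) and records median and 95th-percentile latency, which later system-level investigations can be compared against.

```python
import statistics
import time

def component_call():
    # Stand-in for the component or service under test.
    total = 0
    for i in range(10_000):
        total += i * i
    return total

# Collect repeated timings so the baseline is a distribution, not one number.
samples = []
for _ in range(50):
    start = time.perf_counter()
    component_call()
    samples.append((time.perf_counter() - start) * 1000)  # milliseconds

baseline = {
    "median_ms": statistics.median(samples),
    "p95_ms": statistics.quantiles(samples, n=20)[-1],  # 95th percentile
}
print(baseline)
```

Saving these numbers per build (in a log or a simple file) is usually enough to spot when a later change degrades a component before the degradation surfaces at the system level.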

If you didn't notice, for both security and performance I already talked about testability. It's at the integration level that testability becomes a big deal for me. That's mostly because that's where I get involved; I'm sure if I were a programmer I'd care about testability sooner. That said, I'm going to be looking for stubs and harnesses that allow me to isolate components and services as needed. I'm going to be looking for tools that allow me to increase and decrease logging levels. I'm going to try to get visibility into system state, the data being used, and the runtime environment.
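To make the stub-and-logging idea concrete, here is a minimal sketch. The gateway and checkout names are invented for illustration; the pattern is simply swapping a real dependency for a stub so the component can be exercised in isolation, while dialing the logging level up when you need to pick a failure apart.

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("payment-service")  # hypothetical component name

class RealGateway:
    def charge(self, amount):
        # The real dependency is unreachable in the test environment.
        raise RuntimeError("no network access from the test harness")

class StubGateway:
    """Stands in for the real dependency so the component can be isolated."""
    def charge(self, amount):
        log.debug("stub charge called with %s", amount)
        return {"status": "approved", "amount": amount}

def checkout(gateway, amount):
    log.info("starting checkout for %s", amount)
    return gateway.charge(amount)

# Crank up verbosity while investigating, then dial it back down afterward.
log.setLevel(logging.DEBUG)
result = checkout(StubGateway(), 25.00)
print(result["status"])  # approved
```

The same shape works whether the seam is an in-process object, a service endpoint, or a message queue; what matters is that the seam exists and the log level is adjustable without a redeploy.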

Finally, while you're performing your integration testing, keep an eye open for maintainability/supportability issues. Do you see hardcoded values? Should they be hardcoded? Do you see single points of failure? Has anyone raised that issue already, and is there a way to mitigate it? The idea here is that you're simply serving as a sounding board for what might be an issue down the road.
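Some of those maintainability checks can even be partly automated. This is a rough sketch, assuming a very simple heuristic: scan configuration or source text for literal IP addresses and environment-specific URLs, which are common forms of hardcoded values. The sample text and patterns are illustrative, not exhaustive.

```python
import re

# Hypothetical snippet being reviewed during integration testing.
source = '''
db_host = "10.0.0.17"          # hardcoded IP: breaks when environments differ
timeout = int(os.environ.get("APP_TIMEOUT", "30"))  # externalized: fine
api_url = "http://prod.example.com/v1"              # hardcoded environment URL
'''

# Flag quoted dotted-quad IPs and quoted http(s) URLs.
hardcoded = re.findall(r'"(?:\d{1,3}(?:\.\d{1,3}){3}|https?://[^"]+)"', source)
print(hardcoded)
```

A scan like this won't catch everything (and will flag some legitimate literals), but it gives the team a concrete list to talk through rather than a vague concern.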

Hope those tips help.
