My question is given this staffing structure, what would you recommend each department takes responsibility for when testing our custom applications? For example, what testing should be done by Applications Development staff, what testing should be done by Systems Engineers, by Client Support, by BPAs and finally by end users?
I am looking for a process that results in high quality for end users, including adequate response time and handling an appropriate system load, and a system that can be readily supported once it's in production.
Thank you for the great question. Unfortunately I've not done a lot of work in the manufacturing context (only one project to date). So to answer this question, I turned to Tate Stuntz. Tate is the owner of Nimble Consulting and Manager of Consulting Services at Wonderware Central. When talking with Tate, he was quick to point out a number of relevant factors to testing in the manufacturing context. With his input, I've tried to provide some suggestions for what you might think about as you work to understand the best staffing structure for your IT department.
Often, there are fewer environments available for testing. According to Stuntz, "The real problem in this area is that people do not have duplicate copies of their production equipment sitting around for testing purposes. That makes scheduling downtime in order to do an install and testing (often called 'commissioning') logistically difficult."
If this is a problem for you, there's a good chance that no matter how you structure your team, you'll have few opportunities to get your testers engaged (regardless of whether they are application development staff, system engineers, client support, process analysts, or end users). My suggestion would be to build a cross-functional team for testing during commissioning; if that's not possible, lean more heavily toward the consumers or the developers of the product as your testers. They should have the most insight into what the application does and/or should do.
Another factor to keep in mind is that just because a system starts working, doesn't mean it will stay working. "This is where I see most manufacturing people fall down," says Stuntz. "They think software works the same as hardware; once you hit it with a hammer and it starts working, it will stay working. Not true. In software everything is inter-related [...]. If they have built some means for testing their PLCs and network, it should be separate from how they test the various layers of their software."
If possible, you should work to test at different layers within the system and with different criteria in mind. Each of the groups you identified in your question (application development staff, system engineers, client support, process analysts, and end users) will have a different take on what's important and how the system can break, and there's a good chance they can engage in their testing at different times (perhaps without a fully commissioned system). Testing from these different perspectives and at different layers may help you identify intermittent problems, problems you might not catch unless the system ran for a long period of time, or even problems you would otherwise never have discovered until the system was in production.
Finally, you might have different testers for your real-time and transactional systems. "I think that special attention needs to be paid to the mismatch between real-time (manufacturing) systems and transactional (business) systems," warns Stuntz. "In a real-time application, you'll have a picture of a conveyor belt on the screen and you'll have maybe 20 fields on that screen. From second to second, the fields of that screen update with the real values that the machine sensors are sending to the PLC. Those data points might get saved every minute so you can generate a trend of those values over time. Due to timing issues on the sensors, the PLC, the network, and the RT database, you might have a situation where some of those data points don't get updated every 12th minute (or something like that). That may be bizarre, but it also may be okay. It doesn't necessarily change the value of the data in that context. On the other hand, in a transactional system it is clearly not okay to lose track of every 12th record out of the orders table."
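As a rough illustration of Stuntz's point (not something from the original column), the two kinds of systems call for two different test criteria: a real-time trend can tolerate a small fraction of missing samples, while a transactional table must be complete. The sketch below shows what those two checks might look like; the function names and the 5% tolerance are hypothetical choices for the example, not anything your tools would prescribe.

```python
from datetime import datetime, timedelta

def missing_fraction(timestamps, expected_interval):
    """Fraction of expected sample slots that have no data point.

    For a real-time trend (e.g. sensor values logged once a minute),
    a small fraction of missing samples may be acceptable -- the trend
    is still usable for its purpose.
    """
    if len(timestamps) < 2:
        return 0.0
    span = (timestamps[-1] - timestamps[0]).total_seconds()
    expected = int(span / expected_interval.total_seconds()) + 1
    return 1.0 - len(timestamps) / expected

def assert_transactional_complete(order_ids, expected_ids):
    """For a transactional system, every expected record must exist."""
    missing = set(expected_ids) - set(order_ids)
    assert not missing, f"Missing order records: {sorted(missing)}"

# Example: 60 one-minute samples with the 12th sample lost,
# like the "every 12th minute" situation Stuntz describes.
start = datetime(2024, 1, 1)
stamps = [start + timedelta(minutes=m) for m in range(60) if m != 11]
frac = missing_fraction(stamps, timedelta(minutes=1))

# A real-time test might pass as long as frac stays under a tolerance
# (say 5%); a transactional test fails on any missing record at all.
realtime_ok = frac <= 0.05
```

The point of separating the two checks is exactly the mismatch Stuntz warns about: applying the transactional criterion to real-time trend data produces false failures, and applying the trend criterion to an orders table silently accepts lost records.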
Both Tate Stuntz and I agree that some overlap of testing responsibilities will probably be healthy. That doesn't mean you need one centralized team to do the testing, but it does mean you'll want to be good at building cross-functional test teams when needed. On many non-manufacturing projects I've worked on, we've had early involvement from support staff and end users. On projects where we engaged those other stakeholders early, we were much more successful in coordinating our efforts and providing the best coverage we could across all teams.