Your specific environment and tools are unfamiliar to me. I'll reply to a more generic version of this question which I see as, "We have a legacy application that we would like to build automation for. What are the advantages and disadvantages? And also, can current manual tests be used to build future automation tests?"
Building automation for a legacy application has the advantage that the user interface is most likely settled and unlikely to change -- at least not significantly. If the test automation tool you are using relies on the user interface to navigate tests, then this is a considerable advantage. The other advantage of a stable user interface is that you and your team can work on the automation intermittently as time permits, versus the automation frenzy and rework that sometimes takes place on applications still under heavy development or with frequent releases.
The disadvantage of automating a legacy application is that if the user interface uses custom controls or has other interesting application quirks, it can be harder to solicit help from a team that may have moved on to focus on more current efforts. It might also be the case that if the application and the technology it relies on are older, you no longer have developers on staff who can assist with challenges that come your way. In my experience, the hardest aspects of working on legacy applications are psychological and technical. The psychological factor is that once an application is old and is no longer a primary focus of a team, getting and using resources becomes difficult; people want to work on new applications. The technical factor is that people want to work with the most current technology they can (this is often, but not always, the case, as some people prefer the comfort of technology they know). You may find as a test lead or manager working with a legacy application that much of your effort centers on inspiring people to continue work on the old.
As to the second part of your question, manual tests are often used as the foundation for building automated tests. Rather than trying to automate every manual test, I think it's more advantageous to consider the automation tool you're working with and the application itself, and automate tests that fall into the following areas:
- Tests that require extensive manual effort.
- Tests where the pass or fail result is objective and unambiguous, so a script can determine the outcome without human judgment.
- Tests that are best executed with volumes of data -- such as search testing or creating new items where the manual labor to execute the tests is considerable and the advantage of having run "many" tests with different data has a benefit.
- Tests that use the automation tool for what you discover the tool is best suited to. Every tool has its advantages -- use the tool to do what the tool is best at. This may or may not always match what you wish the tool could do.
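The data-driven case above can be sketched in a few lines. This is a minimal illustration, not tied to any particular tool: the `search` function and `CATALOG` list are hypothetical stand-ins for the application under test, and in practice you would drive the real application through your automation tool's API.

```python
# Hypothetical stand-in for data in the application under test.
CATALOG = ["red shirt", "blue shirt", "red hat", "green scarf"]

def search(term):
    """Hypothetical search feature of the application under test."""
    return [item for item in CATALOG if term in item]

# Many input/expected-count pairs exercised by one small loop --
# the kind of volume that is tedious to run by hand.
TEST_DATA = [
    ("red", 2),
    ("shirt", 2),
    ("hat", 1),
    ("purple", 0),
]

def run_search_tests(data):
    """Run every search case; return a list of (term, expected, actual) failures."""
    failures = []
    for term, expected_count in data:
        actual = len(search(term))
        if actual != expected_count:
            failures.append((term, expected_count, actual))
    return failures

if __name__ == "__main__":
    # An empty list means every data-driven case passed.
    print(run_search_tests(TEST_DATA))
```

Adding a hundred more search cases here is one line of data each, which is exactly where automation pays for itself over manual effort.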
Fearing rework to automation is an intelligent worry; rework is time consuming. Planning automation so that each script or component is a small slice that can be reused by many scripts is a strategy that has held up for a long while, and I suspect it always will. For example, rather than write a single test automation script where a user logs in, executes an action and then logs out, these actions can be built as three separate automation scripts. There is nothing new about this strategy, but that doesn't make it less appealing. When you brainstorm with your team about building automation, I would suggest always thinking about how to build scripts that will be less susceptible to change, and favor small component scripts whenever feasible.
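The log-in/action/log-out split described above can be sketched as small reusable functions. Everything here is hypothetical: the `session` dictionary and each function body are placeholders for whatever steps your automation tool actually performs.

```python
def log_in(session, user):
    """Component script: stand-in for the real login steps."""
    session["user"] = user
    return session

def create_report(session):
    """Component script: stand-in for one action under test."""
    session.setdefault("reports", []).append("report")
    return session

def log_out(session):
    """Component script: stand-in for the real logout steps."""
    session.pop("user", None)
    return session

# A test is just a composition of components. If the login screen changes,
# only log_in needs updating -- not every test that logs in.
def test_create_report():
    session = log_in({}, "tester")
    session = create_report(session)
    assert session["reports"] == ["report"]
    return log_out(session)

if __name__ == "__main__":
    print(test_create_report())
```

The payoff is that a user-interface change touches one component script instead of rippling through every end-to-end test that happens to pass through that screen.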