Well, first of all, can we admit that your boss hasn't thought this through completely?
What does it mean to have a fully automated software testing strategy?
Let's say you're doing a demo for a customer or senior executive. The software isn't in production yet; you are showing what you've done to get feedback for the next iteration. The vice president of finance asks what happens if you try to create an invoice that is past due the day you create it. It's a good question -- essentially a test idea, the kind of thing no one thought of before. If the software works one way, it's fine; if not, it's a new feature request -- not really a bug. The person at the keyboard starts to try to answer the question.
Do you tell him to stop, that you need to create an automated test to answer that question? I certainly hope not.
There are plenty of test ideas like this -- things you think of in the moment to explore, especially when testing a new feature that is part of an existing system. Many of them aren't worth doing every time; you just want to try it once. Institutionalizing these into code, to run all the time, is an expensive and wasteful process. Your boss certainly doesn't mean that every little idea needs to be automated, does he?
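To make that cost concrete, here is a minimal sketch -- in Python, with a hypothetical `Invoice` class standing in for the real invoicing code, not taken from any actual system -- of what institutionalizing the vice president's one-off question into a permanent automated check might look like. Setup, execution, and evaluation all become code you now own and maintain:

```python
from datetime import date, timedelta

class Invoice:
    """Hypothetical stand-in for the real invoicing domain object."""

    def __init__(self, created_on, due_on):
        # One possible answer to the VP's question: the system
        # simply refuses to create an invoice already past due.
        if due_on < created_on:
            raise ValueError("due date cannot precede creation date")
        self.created_on = created_on
        self.due_on = due_on

    def is_past_due(self, today):
        return today > self.due_on


def test_invoice_due_before_creation_is_rejected():
    # The demo question, pinned down forever as a regression check.
    today = date(2024, 1, 15)
    try:
        Invoice(created_on=today, due_on=today - timedelta(days=1))
        assert False, "expected creation to be rejected"
    except ValueError:
        pass  # expected behavior


def test_invoice_due_today_is_not_yet_past_due():
    today = date(2024, 1, 15)
    invoice = Invoice(created_on=today, due_on=today)
    assert not invoice.is_past_due(today)
```

Twenty-odd lines for one question that needed to be asked exactly once -- multiply that by every idea someone blurts out in a demo, and the maintenance bill becomes clear.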
Likewise, does your boss want to automate test design -- the development of test ideas? Does he have some sort of magical box into which you can feed requirements as Word documents and out of which pop test conditions and expected results?
When most people say "test automation," they mean automated test execution and evaluation and, perhaps, setup. That is, they want to be able to click a button, have all the pre-existing checks run, and get results. A 100% automated software testing strategy, I think, implies that a thumbs-up from those checks should be sufficient to go to production without further investigation.
If it were me, I'd start by asking questions like these, so that your manager has to define what a 100% fully automated software testing strategy actually means.
But there's a more serious problem under the surface here. Test tooling is a means; it buys you something. It is not an end in itself. If it were me, I'd be asking what the final goal is. If the goal is to go from "finished new feature test" to "in production" in some small period of time (say, an hour), you'll likely find there are other blockers; for example, the build and deploy alone may already take over an hour, or the test environment may not be able to support multiple builds and deploys on multiple branches.
I suspect the boss means regression testing -- the period from "finished new feature test" to "ready to deploy." I'm not sure how long that takes you now, but if it's over an hour, you might suggest an intermediate goal on the way to the long-term one, such as cutting the effort in half. Do that, and a host of new ideas opens up, including finding engineering ways to reduce the failure rate so that fewer regression tests are needed.
Overall my advice is simple: Take a step back. Breathe. Ask reasonable questions. Don't be a know-it-all, don't be a doormat, don't enable and don't (overly) obstruct. Work with the boss to define terms, to focus on end results, and then come up with the means. All of a sudden, you'll be a leader, and other people will start to notice.
Automating? Follow Lisa Crispin's advice on getting started
The best tools for the job
Manual versus not -- John Scarpino weighs in on when you should automate