
Help with development life cycle metrics

Learn to allocate time and resources in the software development life cycle. How much time should you spend on planning, analysis, design, development and deployment?

My question is about development life cycle metrics. Are there metrics for the phases of a life cycle -- say, planning, analysis, design, development and deployment? For example: planning 10%, analysis 5%, design 20%, and so on, for a total of 100%. And how do you break down the development cycle to include system testing? I need this information to establish a basis for estimating system testing costs. Our system testing group uses a level of effort (LOE) of 50% of development costs to determine system testing cost. Does that make sense? I think it is too high.
I honestly can't give you a direct answer to your questions. Test estimation is so specific to projects and teams that it'd be a disservice to you if I gave you a magic number -- without more specifics, it's just not possible.

I'm going to be agile about answering your first question -- the relative distribution of life cycle planning is specific to organizations, teams and projects. And what one person thinks is reasonable, another finds unreasonable. For instance, I've been on teams where two to three developers have planned for over a year when the actual implementation was less than eight months! Based on your approach (waterfall, agile, etc.) and your team's expertise, you need to arrive at appropriate metrics.

Your specific question really deserves a similar answer ... what's the best ratio of development to test effort? It really depends. I have worked on high-availability, high-scalability commercial enterprise applications where our ratio was 1:1 during development, and included several months of testing after reaching code-complete. I've also worked on Web applications where the development-to-test ratio was more like 6:1 (six dev hours for every test hour). I have examples from both situations where we shipped high-quality software, on time, with no follow-up release required. For this reason, I tend to shy away from specific ratios.
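To make the arithmetic behind these ratios concrete, here is a minimal sketch. The $400,000 development cost and the set of ratios are hypothetical figures chosen for illustration, not recommendations; the 2:1 row corresponds to the 50% LOE rule from the question.

```python
# Hypothetical worked example: the system-test cost implied by different
# development-to-test effort ratios, given a made-up development cost.

def system_test_cost(dev_cost, dev_to_test_ratio):
    """Test cost when dev effort : test effort = dev_to_test_ratio : 1."""
    return dev_cost / dev_to_test_ratio

dev_cost = 400_000  # hypothetical development cost, in dollars

scenarios = [
    ("1:1 (high-availability enterprise app)", 1),
    ("2:1 (the 50% LOE rule from the question)", 2),
    ("6:1 (Web application example)", 6),
]

for label, ratio in scenarios:
    cost = system_test_cost(dev_cost, ratio)
    print(f"{label}: test cost = ${cost:,.0f}")
```

At a 2:1 ratio the same project budgets $200,000 for system testing; at 6:1 it budgets roughly $67,000 -- which is why the ratio assumption, not the development cost, dominates the estimate.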

So how can you reduce ratios? With the implementation of shorter turn-around cycles (i.e., give daily drops to testing, rather than bundling changes into weekly or biweekly drops), your ratios can be lower because your test organization can test right alongside your developers. Also, introducing effective automated testing solutions can help with that ratio. Effective means using the right tools for the job -- if you work on projects that expect to have multiple releases over time, you can spend time developing deep automated test suites. If your projects are more one-time things, keep it very light, automating the simplest use cases and high-priority tests.

Don't just focus on the test side of things. I'm assuming the overall goal here is to reduce your engineering budget. Test should not be the single point of focus in that effort. By bringing in qualified, skilled developers you can cut the time needed to build the product. Also, many teams that have moved to agile have seen benefits in the form of reduced up-front requirements work. Another key is to drive quality upstream -- make your developers own quality. Ensure appropriate unit testing, implement code reviews, and hold developers accountable for writing solid code.

There are a number of other factors at play in arriving at the correct system test ratio. For instance, how complex is your system? I worked on a retail management implementation project that started out with more than 100 legacy dependencies -- developers only had to write code to tie the systems together, but test was responsible for ensuring each legacy system could consume the data delivered by the implementation. In that case, system testing was, at times, twice the level of effort of development. How experienced are your testers? If you are using developers with five to 10 years of experience yet bringing in a team of testers with a year or less of experience, you're going to have to allocate a lot of time to support your inexperienced test team.

A final point to consider is your team's history. What have the historic ratios been, and what kind of quality have you produced? If your team releases and then has to deal with a number of escalations, your ratio is probably a little development-heavy, and you should consider adding testers. If escaped defects are low, you might have "tolerance" to reduce the testing effort somewhat.
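One way a team might act on that history is to track the dev:test ratio alongside escaped defects for each release. This sketch uses made-up release data and an assumed heuristic (more than 10 escaped defects combined with a development-heavy ratio suggests adding test effort); the thresholds are placeholders you would calibrate to your own history.

```python
# Hypothetical sketch: compare historic dev:test ratios against escaped
# defects per release. All release data below is invented for illustration.

releases = [
    {"name": "R1", "dev_hours": 1200, "test_hours": 600, "escaped_defects": 14},
    {"name": "R2", "dev_hours": 1000, "test_hours": 700, "escaped_defects": 6},
    {"name": "R3", "dev_hours": 1100, "test_hours": 550, "escaped_defects": 15},
]

def assess(release, max_defects=10, max_ratio=1.8):
    """Flag releases where many defects escaped despite a dev-heavy ratio."""
    ratio = release["dev_hours"] / release["test_hours"]
    heavy = release["escaped_defects"] > max_defects and ratio > max_ratio
    return ratio, ("consider adding testers" if heavy else "ok")

for r in releases:
    ratio, verdict = assess(r)
    print(f'{r["name"]}: dev:test = {ratio:.1f}:1, '
          f'{r["escaped_defects"]} escaped defects -> {verdict}')
```

The point is not the specific thresholds but the feedback loop: if escaped defects stay low across releases, that is the "tolerance" to trim test effort; if they climb while the ratio drifts development-heavy, the data argues for more testing.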

If your organization is really concerned about getting the right number here, it might pay to reach out to a testing consultant. In the course of a few days, they should be able to sit down with you, go over the factors at play and your team's history. With that input, they can help you arrive at a good metric, and probably leave you with some tools to recognize when that metric can change.
