You can approach software metrics as an art in its own right. As with everything, there is debate about what you should measure and what is useless. What we do agree on, however, is its importance: if you cannot measure it, you cannot manage it, you cannot have proper discussions about it, and you certainly cannot make decisions about it. To approach software metrics, you need to know the current status of an aspect, how fast you are moving, and what the future will look like given that status and speed. Status, velocity, and forecast.
For the SDLC (software development life cycle), the aspects you want to track first are the requirements, software, test cases, defects, tasks, and the derived items: money and time.

Requirements:
Although we can argue about definitions, the requirement is the unit in which the user community communicates what is desired. From it, the chain of events of the software development life cycle starts flowing. If you have some kind of workflow behind the treatment of requirements (new, under investigation, approved, to be planned, etc.), status can be indicated by the number of requirements in each workflow step. The time required to get from one step to the next determines the speed. Paying special attention to the number of new requirements, or to the number of change requests, indicates the stability of the requirements process, and consequently of all later steps.
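As a minimal sketch, with invented requirement IDs, workflow steps, and transition times: status is a count per workflow step, and speed is the average time to move between steps.

```python
from collections import Counter
from statistics import mean

# Hypothetical snapshot: each requirement with its current workflow step.
requirements = {
    "R-1": "new",
    "R-2": "under investigation",
    "R-3": "approved",
    "R-4": "new",
    "R-5": "to be planned",
}

# Status: number of requirements sitting in each workflow step right now.
status = Counter(requirements.values())

# Speed: days three requirements needed to move from "new" to
# "under investigation" (hypothetical transition log).
days_new_to_investigation = [2, 4, 3]
avg_transition_days = mean(days_new_to_investigation)
```

A growing pile in one step, or a rising average transition time, is exactly the instability signal described above.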
Software:
The approved requirements are translated into software features, the units in which development takes place. The "now" is determined by the number of features completed. The velocity is simply the number of completed features divided by the time it took. If you look at planned vs. delivered features, you have an indication of how reliable your predictions are.
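The arithmetic is deliberately simple. With hypothetical figures for one release cycle:

```python
# Hypothetical figures for one release cycle.
completed_features = 12
elapsed_weeks = 4
velocity = completed_features / elapsed_weeks  # features per week

# Planned vs. delivered: how reliable were the predictions?
planned, delivered = 15, 12
prediction_reliability = delivered / planned  # fraction of the plan actually shipped
```

A reliability ratio consistently below 1.0 tells you how much to discount future plans.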
Test cases:
The number of test cases executed indicates the status of the test activities. Combined with the time spent, it gives a fair enough velocity for this process. It says nothing, of course, about the time still needed. For that you have to include the status of each test case, passed or failed, as these are also an indication of the quality of the software.
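Sketched with an invented test-run log, the two numbers look like this: a raw execution velocity, plus the pass/fail split that hints at remaining work.

```python
# Hypothetical test-run log: (test case id, status).
results = [
    ("T-1", "passed"), ("T-2", "failed"), ("T-3", "passed"),
    ("T-4", "passed"), ("T-5", "failed"),
]
hours_spent = 10

executed = len(results)
test_velocity = executed / hours_spent  # test cases per hour

# Pass/fail split: a quality signal, not a completion date by itself.
failed = sum(1 for _, outcome in results if outcome == "failed")
failure_ratio = failed / executed
```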
Defects:
The number of defects found per unit of time spent testing is a possible metric for the speed at which defects manifest themselves. If this rate does not drop as you approach a deadline, you have a major quality issue on your hands.
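A trivial check of that trend, over a made-up weekly defect count:

```python
# Hypothetical defects found per week of testing, approaching a deadline.
defects_per_week = [14, 11, 9, 4, 2]

# The find rate should trend downward as the release stabilizes; a flat or
# rising tail is the warning sign described above.
rate_is_dropping = all(
    earlier >= later
    for earlier, later in zip(defects_per_week, defects_per_week[1:])
)
```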
Tasks:
Tasks in general should be monitored, as they can reveal bottlenecks or team members sitting idle; they are a way to measure your resources. Use the to-do (task) lists of your team members as a basis. Workload is the number of tasks in each of several categories (since, of course, not every task is in the same league). Velocity is determined by the time it takes to finish a task in each category.
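With a hypothetical task list (IDs, categories, and hours are invented), workload and per-category velocity can be derived like this:

```python
from collections import defaultdict

# Hypothetical task list: (task id, category, hours to finish, or None if open).
tasks = [
    ("T-1", "bugfix", 3), ("T-2", "bugfix", 5), ("T-3", "feature", None),
    ("T-4", "feature", 16), ("T-5", "admin", 1),
]

# Workload: open tasks per category (a crude bottleneck indicator).
workload = defaultdict(int)
for _, category, hours in tasks:
    if hours is None:
        workload[category] += 1

# Velocity: average hours to finish a task, per category.
velocity = {}
for cat in {c for _, c, _ in tasks}:
    finished = [h for _, c, h in tasks if c == cat and h is not None]
    if finished:
        velocity[cat] = sum(finished) / len(finished)
```

Separating the categories matters: a queue of quick admin tasks is not the same bottleneck as a queue of sixteen-hour features.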
Time and money:
These are the ultimate metrics. In the end, the customer and management just want to know, "When will it be ready, and how much will it cost?" Both have to be derived from the previous indicators, because the velocities and forecasts in those areas determine how long resources will still be working on the remaining work.
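As a closing sketch, with invented numbers: once you have a velocity from the lower-level metrics, the two ultimate answers fall out of a division and a multiplication.

```python
# Hypothetical roll-up from the lower-level velocities.
remaining_features = 8
feature_velocity = 2.0        # features per week, taken from the software metric
team_cost_per_week = 12_000   # assumed fully loaded weekly rate

weeks_to_ready = remaining_features / feature_velocity  # "when will it be ready?"
cost_to_ready = weeks_to_ready * team_cost_per_week     # "how much will it cost?"
```

The forecast is only as good as the velocities feeding it, which is why the earlier indicators have to be tracked first.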