Software quality metrics are useful at every stage of application lifecycle management (ALM): requirements and design, development, testing and release management, and application maintenance. CIOs and senior managers have a surfeit of software quality metrics available at each of these stages, but collecting, analyzing and acting on too many of them becomes time-consuming and impractical. The marginal benefit of each additional metric may not justify the incremental cost of all that collection and analysis.
Here are eight metrics that allow CIOs and senior managers to keep a handle on software quality end-to-end across the application lifecycle. They are organized under the four stages of ALM. Within each stage, the first metric checks for efficiency, asking, "Are we doing things right?" The second checks for effectiveness, asking, "Are we doing the right things?"
Requirements and design
1. Phase efficiency metric: This metric measures how efficiently end users' requirements are covered across the planned software release versions. Are these phases completed when they are planned to be, on time or delayed? Are the features most critical to users, and the riskiest ones, scheduled for the earliest releases? When risky features are attempted first, the chances of completing a software development project on time increase substantially.
2. Customer satisfaction: This metric measures the effectiveness of the application: its functionality, its ease of use, and whether it solves the end users' business needs. An application may not address the problem users are trying to solve; even if it does, it may not be easy to use; and even if both are true, customers may still not use it because of inadequate training. Customer satisfaction is the only way to make sure the investment in the application was worth it.
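As a rough illustration of the phase-efficiency idea above, one could compute the fraction of planned features that shipped in the release they were scheduled for. The feature list and field names below are hypothetical, a minimal sketch rather than a standard formula:

```python
# Hypothetical release plan: which release each feature was planned for
# versus which release it actually shipped in.
features = [
    {"name": "login", "planned_release": "1.0", "actual_release": "1.0"},
    {"name": "search", "planned_release": "1.0", "actual_release": "1.1"},
    {"name": "export", "planned_release": "1.1", "actual_release": "1.1"},
    {"name": "audit log", "planned_release": "1.1", "actual_release": "1.2"},
]

def on_time_ratio(items) -> float:
    """Fraction of features that shipped in the release they were planned for."""
    on_time = sum(1 for f in items if f["planned_release"] == f["actual_release"])
    return on_time / len(items)

print(on_time_ratio(features))  # 2 of 4 features shipped on schedule -> 0.5
```

A dashboard that tracks this ratio per release makes schedule slippage visible early, which is the point of the metric.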
Development
3. Size and complexity metric: This is an efficiency metric that measures the true productivity of the software development units. It qualifies as a software quality metric because it is easy for a development group to tackle only easy, low-risk business problems with proven technologies. Software and hardware technologies change much faster than other, non-computer technologies. This means that even at the risk of occasional failures, CIOs and senior managers need to ensure that software development projects tackle tough business problems with complex software. Failing to do so may hurt the competitiveness of the whole organization if competitors tackle those problems successfully first.
4. Defect density: Software defects, if not identified and removed, do not go anywhere; they wait to be discovered by end users, leading to dissatisfaction and even abandonment. Users of mobile apps are notoriously intolerant of software defects: the second or third time an app crashes, customers will simply delete it. Defect density can be kept at a reasonable level only through best practices in the selection of languages and tools, consistent coding practices, and excellent documentation, both inline in the code and external. During development, team members need to understand each other's code easily, especially in large, complex software projects. Defect density should not be too low, since that may indicate lurking, unidentified bugs, nor too high, since that indicates a lack of best practices.
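Defect density is commonly normalized by code size, for example defects per thousand lines of code (KLOC). The article does not prescribe a formula, so this is one common formulation as a minimal sketch, assuming defect counts and line counts are already available from your tracker and codebase:

```python
def defect_density(defect_count: int, lines_of_code: int) -> float:
    """Defects per 1,000 lines of code (KLOC)."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return defect_count / (lines_of_code / 1000)

# Example: 42 defects reported against a 60,000-line codebase.
print(defect_density(42, 60_000))  # 0.7 defects per KLOC
```

Teams that bill by function points sometimes normalize by function points instead of KLOC; the structure of the calculation is the same.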
Testing and release management
5. Defect removal efficiency: Defects lurk in the software unless they are aggressively identified and removed during the testing and release management phase of ALM. Defect removal efficiency captures both the volume of defects identified and the speed with which they are addressed and removed.
6. Defect removal effectiveness: Removing defects quite often involves redesigning parts of the software, and with improper code check-in processes, defects can creep back into future releases even after they have been fixed. Regression testing of older functionality and previously fixed defects is just as critical in each release as testing of new features.
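The defect removal metrics above are often summarized as a single ratio, defect removal efficiency (DRE): the share of total defects caught before release rather than by end users. A minimal sketch of that conventional calculation:

```python
def defect_removal_efficiency(found_before_release: int,
                              found_after_release: int) -> float:
    """Share of all known defects that were caught before release."""
    total = found_before_release + found_after_release
    if total == 0:
        return 1.0  # no defects found anywhere; treat as fully efficient
    return found_before_release / total

# Example: 95 defects caught during testing, 5 escaped to production.
print(defect_removal_efficiency(95, 5))  # 0.95 -> 95% removed pre-release
```

Tracking DRE release over release also surfaces the regression problem described above: defects that creep back in after being fixed show up as post-release finds and pull the ratio down.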
Application maintenance
7. Fix efficiency: Businesses evolve constantly, and with these changes, or with wider deployment of the software, additions or modifications become necessary. Is the development group responsive enough to schedule these changes quickly and get them out as new releases or software patches?
8. Fix effectiveness: The effectiveness of the maintenance phase of ALM shows in the longevity of a software application. How long was the application used before a complete redesign and redevelopment effort was needed? Business changes or technology changes may make new versions essential. If your competitors are moving rapidly from a brick-and-mortar commerce model to an online one, you may need a whole slew of new software applications. Shifts in computing technology from mainframes to minicomputers, client-server PCs, and now mobile devices have each necessitated reinvestment in applications. But within each technology cycle, how effective was your investment? Did the applications last as long as they could, delivering a good return on investment?
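Fix efficiency, as described above, can be approximated by the average time between a change request being reported and the fix shipping. The request records and field names below are hypothetical; a real version would pull timestamps from your issue tracker:

```python
from datetime import datetime
from statistics import mean

# Hypothetical change requests with reported/resolved timestamps.
requests = [
    {"reported": datetime(2024, 1, 2), "resolved": datetime(2024, 1, 5)},
    {"reported": datetime(2024, 1, 10), "resolved": datetime(2024, 1, 12)},
    {"reported": datetime(2024, 2, 1), "resolved": datetime(2024, 2, 8)},
]

def mean_days_to_resolve(reqs) -> float:
    """Average calendar days between a request being reported and resolved."""
    return mean((r["resolved"] - r["reported"]).days for r in reqs)

print(mean_days_to_resolve(requests))  # (3 + 2 + 7) / 3 = 4 days
```

A rising trend in this average is an early warning that the maintenance team is falling behind the pace of business change.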
CIOs and senior managers have a large number of ALM software quality metrics to choose from. Given the demands on such executives' time and attention, a carefully selected, small set of metrics may be adequate to ensure that software development efforts are efficient and effective. These metrics should span all the phases of the application lifecycle to enable the software development organization to turn out software of the highest quality.
What software quality metrics does your organization track? Which do you find to be most useful? Let us know by sending an email to firstname.lastname@example.org.