Why code quality matters

Poor code quality is a disaster waiting to happen. For example, changing bad code all too easily produces broken code. Kevlin Henney explains the importance of catching problems in code at the source so that they don't grow into large problems that are difficult and costly to repair.

Kevlin Henney, consultant, software architecture, development process

Code quality is a curiously slippery quality. It is often talked about in theory and ignored in practice. It is considered to be important in the long term but optional when a deadline is looming. It has no direct presence in a schedule, but it affects the schedule. There are syntax-centered, runtime robustness, and broader design-related definitions of it. So, what is code quality and why does it matter?

Of code and software
Although obviously connected, there is a subtle difference between software and the code behind it. Software describes the artifact that is delivered to and experienced by the user; code is the detailed and formal description of the software. Of course, errors in code lead to defects in software, but there is more to code quality than bug density per KLOC or the visual appeal of a software application.

The perceived quality of software relates to the need it fulfills, the convenience that it offers in doing so, the defects encountered, the effectiveness and efficiency of the user experience, and so on. The perceived quality of the code can be assessed in similar ways: What need does the code fulfill? What convenience does it offer? Is it itself convenient and effective to use? Is it right? Is it appropriate and efficient in its use of resources?

Inhabiting code
My previous article on the role of architecture in agile development noted that a poor architecture resists change by making change difficult and expensive. This resistance manifests itself in code. Software architecture is not simply a hand-waving, big-picture view; software architecture informs the code and is ultimately expressed in code. Code and architecture do not (and cannot) diverge, but actual code and intended architecture can (and often do).

Just as architecture for buildings characterizes what they are like to live in and work in, we can say the same for software architecture, as noted by Richard Gabriel in Patterns of Software.

Habitability is the characteristic of source code that enables programmers, coders, bug-fixers, and people coming to the code later in its life to understand its construction and intentions and to change it comfortably and confidently. [...] Habitability makes a place livable, like home. And this is what we want in software — that developers feel at home, can place their hands on any item without having to think deeply about where it is.

Developers spend a lot of time "living" and "working" in code, so it makes sense that it should in some sense be habitable. Of course, this does not mean that source code should be padded with cushions and ornamental knick-knacks such as overcommenting and labored logic. But it does mean there should be some common sense of arrangement and a shared vision. Without such order and clarity, a code base can all too easily become brittle, cluttered, curiously inconsistent, and either overly simplistic or gratuitously complicated (or a patchwork of both). This can lead to the loss of both the big picture and the small picture, which in turn leads to a corresponding loss of development velocity (and therefore an increase in development cost).


From urban decay to clean kitchens
Code quality can be considered a reflection of the process and culture that create it. Without the right kind of attitude and support — technical, managerial, social — the code base is likely to decay over time. One approach to postponing the onset of such entropy is to avoid getting into big problems in the first place by continually attending to the smaller ones. If big problems grow from medium-sized problems, which in turn grow from small problems, catching them at source makes sense. This stop-the-line approach to code quality has been popularized by the Pragmatic Programmers as "don't live with broken windows".
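A small sketch of fixing a broken window at source (the password rule and function names here are invented for illustration): the same magic number copied into two places is a minor blemish today and an inconsistency waiting to happen tomorrow, so the fix is to name it once before it spreads.

```python
# Before: a "broken window" -- the same magic number, 8, copied
# into two places. Change one and forget the other, and the two
# functions silently disagree.
def is_valid_password_before(pw):
    return len(pw) >= 8

def strength_hint_before(pw):
    return "too short" if len(pw) < 8 else "ok"

# After: fixed at source -- one named constant, so the rule
# cannot drift between the two call sites.
MIN_PASSWORD_LENGTH = 8

def is_valid_password(pw):
    return len(pw) >= MIN_PASSWORD_LENGTH

def strength_hint(pw):
    return "too short" if len(pw) < MIN_PASSWORD_LENGTH else "ok"
```

The change is trivial in isolation; the point of the stop-the-line approach is that such trivial fixes, made continually, are what keep the medium-sized and large problems from ever forming.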

Refactoring in response to broken windows is a tactical technique with strategic implications. Refactoring is not simply another way of saying "changing code," although some developers devalue the term by using it this way. Martin Fowler offers the following more precise dictionary-style definitions in Refactoring:

Refactoring (noun): a change made to the internal structure of software to make it easier to understand and cheaper to modify without changing its observable behavior.

Refactor (verb): to restructure software by applying a series of refactorings without changing the observable behavior of the software.

The one clarification to this otherwise comprehensive definition is that the "observable behavior" in question is functional rather than operational. Functional behavior concerns interactions and resulting values when the code runs, whereas operational behavior concerns the properties of how the code runs — performance, memory usage, etc. A refactoring may affect operational behavior, but it does not affect functional behavior.
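To make the distinction concrete, here is a deliberately simple sketch (the function names are invented for illustration): a hand-rolled accumulation loop is replaced by Python's built-in sum. The functional behavior — the value returned — is identical for every input; the operational behavior, such as speed or allocation, may well differ.

```python
def total_before(prices):
    """Long-winded summation: accumulates by hand."""
    total = 0
    for price in prices:
        total = total + price
    return total

def total_after(prices):
    """Refactored version: identical functional behavior (the same
    value for any input), expressed with the built-in sum().
    Operational behavior -- speed, memory -- may differ; the
    observable result never does."""
    return sum(prices)

# Both yield the same value for any input:
assert total_before([1.5, 2.5, 3.0]) == total_after([1.5, 2.5, 3.0])
```

By Fowler's definition this qualifies as a refactoring precisely because only the internal structure has changed; a change that altered the returned value would not.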

As an aside, Isabella Beeton's summary of responding to entropy creep with amortized work predates modern agile development by a few years:

A dirty kitchen is a disgrace to all concerned. Good cookery cannot exist without absolute cleanliness. It takes no longer to keep a kitchen clean and orderly than untidy and dirty, for the time that is spent in keeping it in good order is saved when culinary operations are going on and everything is clean and in its place. Personal cleanliness is most necessary, particularly with regard to the hands.

Can we fix it?
The antithesis of not living with broken windows is perhaps best summed up in the somewhat clichéd and frequently misapplied "if it ain't broke, don't fix it" school of thought. In theory that seems like practical advice, but in practice it is normally an exercise in denial and often an excuse for complacency. There are a number of reasons this maxim is misapplied, not least of which is the catchy but poor choice of words. What exactly do we mean by broken? And what is and is not involved in fixing something?

In the metaphor of broken windows, broken is taken to refer to some aspect of environmental quality. However, in the context of "if it ain't broke, don't fix it" the sense of broken is normally narrowed to refer only to functional behavior. In other words, if the code has no errors, then don't fix it. Of course, the implication is that if there are no errors, there is actually nothing to fix. But what is the scope of fix? If the code is known to be problematic and a common source of defects, it is clear that just fixing coding errors one by one is not fixing the root cause, which may in fact be organizational rather than technical. Indeed, with a blinkered "if it ain't broke, don't fix it" attitude, lots of uncoordinated minor local fixes are probably making the quality problem worse, not better.

And what of other changes to code? Code that is muddled — cluttered with duplication, redundant constructs, commented-out code, long-winded logic, battalions of special cases, etc. — is a money pit for change. If such code currently works (in a functional sense), that is likely to be a brittle state of affairs, a fragile equilibrium all too easily broken by even seemingly minor changes. In one sense the code is already broken; the change simply serves to expose the breakage.
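A contrived sketch of such a muddle, and an equivalent that factors it out (the shipping rates and function names are invented for illustration): the first version duplicates the rate table in every branch, so any change to the rates must be made, consistently, in two places.

```python
def shipping_cost_muddled(weight, express):
    """Muddled: the rate table is duplicated across the
    express/standard branches, a battalion of special cases."""
    if express:
        if weight <= 1:
            return 10 + 5
        elif weight <= 5:
            return 20 + 5
        else:
            return 40 + 5
    else:
        if weight <= 1:
            return 10
        elif weight <= 5:
            return 20
        else:
            return 40

def shipping_cost_clear(weight, express):
    """Same functional behavior: one rate table, one surcharge
    rule, with the duplication factored out."""
    if weight <= 1:
        base = 10
    elif weight <= 5:
        base = 20
    else:
        base = 40
    return base + (5 if express else 0)
```

Both versions currently "work," but the muddled one is the fragile equilibrium described above: the next rate change is an invitation to update one branch and forget the other.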

If play-it-safe conservatism (or not-on-my-watch self-preservation, for the more cynically minded) is what we're after, the wording we want here is not "if it ain't broke, don't fix it," but "if it ain't broke, don't touch it," which entails not making any change of any kind — even for new functionality. For functional but otherwise problematic code, the only guaranteed way to preserve the status quo is to preserve the status quo.

Of course, such a constraint may be somewhat limiting, even if it is more self-consistent than the "don't do anything that might break the code... oh, except for adding new features and fixing a few bugs (not that the code was broken, of course)" approach. There is no simple refactoring menu option that will fix just the quality problems across a large base of poor code, switching it from the pejorative to the enriching sense of the word legacy at the push of a button. To get out of that hole you have to first stop digging. Climbing out requires a more complex and long-term mix of technical and social skill.

On the other hand, if you want to keep your options open and steer clear of such challenges, the obvious conclusion and take-home lesson is to avoid the problem in the first place. Sure, you can wait until you're stuck in a traffic jam before you consider alternative routes, but by that time it's too late to avoid the jam — all of your efforts are focused on trying to get out of it. You're better off keeping your eyes on the road and your ears on reports of the route ahead.

About the author: Kevlin Henney is an independent consultant and trainer based in the UK. His work focuses on software architecture, patterns, development process and programming languages. He is a coauthor of A Pattern Language for Distributed Computing and On Patterns and Pattern Languages, two recent volumes in the Pattern-Oriented Software Architecture series. You may contact him at [email protected].
