Andrés Ornelas, Web DevOps lead at Twitter, decided to go a step beyond software testing. He took a peek underneath the covers of Twitter's code in order to manage the risks associated with defects, and ultimately, to simplify testing. He found that by developing better techniques for analyzing its code, the company could also improve code reuse and reduce the cost of adding new features.
Part of Twitter's challenge is that the company, and in particular its engineering team, has grown quickly. As engineering teams raced toward each new goal and individual developers changed teams and functions along the way, they left behind a fair amount of technical debt.
To clean up this mess, organizations can empower developers to refactor the code to be more efficient and better optimized for the current generation of services and applications that use it. But refactoring is complicated and time-consuming, and it takes considerable work before the effort pays off, Ornelas said.
There are a lot of intangible benefits from code refactoring, such as increased productivity for new features and easier management. It is also easier to test. But Ornelas said it is challenging to communicate this to management, who might not see the benefits of spending two months of development to re-create existing features. The effort involved in code refactoring pulls resources from feature work that can create new value and opportunities for the business.
All this makes it hard to prioritize, estimate, and set goals around code refactoring because most of the benefits are intangible. To address this problem, Ornelas developed a tool called Histology to analyze different aspects of Twitter's code base in order to better communicate the characteristics of code risk. Histology looks at several metrics and calculates the risk of modifying each file, making it easier to focus refactoring efforts on the files with the highest risk.
Prioritizing the fixes
The first step in analyzing code is to gather metrics on each file in the code base. The second is to normalize this data and transform it into hazards, which are risks identified by the metrics. The final step is to aggregate the computed risks from each hazard.
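The three steps above can be sketched in a few lines of Python. Note that the file names, metrics, and weights here are purely illustrative assumptions, not Histology's actual implementation:

```python
# Illustrative metrics -> hazards -> risk pipeline.
# File names, metric names, and weights are hypothetical.
raw_metrics = {
    "user_service.py": {"lines": 2400, "teams": 5, "churn": 48},
    "utils.py":        {"lines": 300,  "teams": 1, "churn": 3},
}

weights = {"lines": 0.4, "teams": 0.35, "churn": 0.25}

def normalize(values):
    """Scale a metric column to [0, 1] across the code base."""
    top = max(values) or 1
    return [v / top for v in values]

# Step 2: normalize each metric column into a hazard score per file.
hazards = {}
for metric in weights:
    column = [m[metric] for m in raw_metrics.values()]
    for fname, score in zip(raw_metrics, normalize(column)):
        hazards.setdefault(fname, {})[metric] = score

# Step 3: aggregate the hazards into one weighted risk score per file.
risk = {fname: sum(weights[m] * s for m, s in h.items())
        for fname, h in hazards.items()}

for fname in sorted(risk, key=risk.get, reverse=True):
    print(f"{fname}: {risk[fname]:.2f}")
```

Ranking files by the aggregated score, as the final loop does, is what lets a team point its refactoring effort at the riskiest files first.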
One risk comes from overly large files. Large files should be broken up into multiple smaller ones when possible. Modifying these smaller files is less risky, said Ornelas, because there is less for a developer to keep in their head when working on them.
Other risks relate to the separation of the groups working on the same file. Ornelas said other research has shown that as code is touched by a larger number of groups, the risks can go up. "The more cohesive your organization is with the code base, the better the quality of code," he said.
If a lot of different teams are modifying a single file, it probably means something is wrong. It might be the case that the organization grew, but the code structure stayed the same. At this point, directors and executive teams can make informed decisions about how to reduce the organizational spread.
Another metric is churn, or the number of modifications on a piece of code in a given length of time. As the number of modifications rises, so does the risk. However, this metric is usually ignored when a file is being refactored because the changes are likely to reduce the amount of risk in this file.
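Churn can be computed from version-control history. A minimal sketch, with hypothetical change records standing in for what would in practice come from a command like `git log --name-only`:

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical change log: (commit date, file touched). In practice this
# would be parsed from version-control history.
changes = [
    (datetime(2014, 1, 5),  "timeline.py"),
    (datetime(2014, 1, 9),  "timeline.py"),
    (datetime(2014, 1, 20), "timeline.py"),
    (datetime(2014, 2, 1),  "settings.py"),
    (datetime(2013, 6, 1),  "timeline.py"),  # falls outside the window
]

def churn(changes, end, window_days=90):
    """Count modifications per file within the trailing time window."""
    start = end - timedelta(days=window_days)
    return Counter(f for when, f in changes if start <= when <= end)

counts = churn(changes, end=datetime(2014, 2, 15))
print(counts.most_common())  # timeline.py counted 3 times, settings.py once
```

A file currently under refactoring could simply be excluded from `counts` before ranking, reflecting the exception Ornelas describes.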
Ornelas recognizes that the way these types of metrics relate to risk in different organizations varies. But once you have a tool in place for analyzing your code in this way, you can dial the weight of the different measures until you start to find patterns that correlate to things like defects in production or the time required to build new features that leverage existing code.
Develop an organizational coding style
A good starting point for establishing these standards within an organization is to look at the conventions used in the larger development community. One way of identifying popular conventions is to do an analysis for similar types of code on GitHub. "When the community agrees on something without being too organized, you can usually be sure it is for a reason," Overson explained.
But agreeing to coding style standards and actually following through are different things. Old habits can be hard to break. Some programmers might initially be reluctant to add punctuation where none is required. It can be burdensome to task a manager with yelling at developers for style variations.
In the beginning, it is important to go into your existing code base and do a deep dive into the conventions you want your team to use in writing code. Many of the options for analyzing different styles of syntax don't make sense until you see how they work. When this is done as a team, you can see how different options will affect the code you actually work with.
After your team has agreed on a coding style, there may be instances where variations could prove useful. In these cases, the warnings can be turned off, but not before the developer has thought through how the value of the variation weighs against the cost of deviating from the agreed upon standard.
Automate the enforcement

Once standards are agreed upon, static analysis tools can enforce them automatically by flagging code that exceeds agreed-upon limits, such as:
- The maximum nesting depth within a function (maxdepth)
- The maximum number of statements in a function (maxstatements)
- The maximum length of a line of code (maxlen)
- Cyclomatic complexity, which measures the number of independent paths that could be traversed through the code as a program executes.
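To make the last metric concrete, cyclomatic complexity can be approximated as one plus the number of branch points in the code. A simplified sketch using Python's `ast` module (real tools apply more nuanced counting rules):

```python
import ast

# Node types that add an execution branch -- a simplified rule of thumb,
# not the full set a production analyzer would use.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Estimate cyclomatic complexity: 1 + number of branch points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES)
                   for node in ast.walk(tree))

code = """
def classify(n):
    if n < 0:
        return "negative"
    for i in range(n):
        if i % 2 == 0:
            print(i)
    return "done"
"""
print(cyclomatic_complexity(code))  # 1 base path + if + for + if = 4
```

A linter enforcing a maximum complexity would simply compare this score against the team's agreed threshold and emit a warning when it is exceeded.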
Your organization might be doing unit tests and function tests, but if it does not have an overall view into a variety of metrics relating to the code base, there could be problems. "If you are releasing code to the public, you should be taking the time to inspect what that code looks like," Overson stressed.