Abraham Marin-Perez is an independent Java developer, speaker and Agile advocate. He is speaking at JavaOne 2016 in San Francisco about keeping your continuous integration and continuous delivery, or CI/CD, pipeline as fast as possible. SearchSoftwareQuality caught up with Marin-Perez right before the conference began.
In your experience, is it common for companies to lose control of their CI/CD pipeline? Is there a way to set up the pipeline from the beginning to avoid this?
Abraham Marin-Perez: In a way, yes. They wouldn't call it 'losing control,' but I think that, although there are some really innovative companies that have some very sophisticated CI/CD pipelines, most companies are still at the point of simply trying to get a fully functional automated build. I mean, I have seen places where they don't even have a reproducible way to produce a deployable package -- everything is still a manual compendium of independently built modules that have to be put together on site ... you can imagine the amount of trial and error that this requires. For these companies, simply achieving automation, without worrying about how long the build takes, is already a huge win.
The problem with achieving a fast build is that this is a relatively new practice, and we haven't yet figured out a standardized or normalized process for it. There are some guidelines or practices, but when you look at different CI/CD pipelines in different places, you realize everything is quite customized. But even if such a standard were possible, I don't think it's a general concern right now: The first thing is making sure you have an automated build; then you can worry about making it work fast.
OK, so your pipeline isn't as fast as it should be. Can you briefly walk us through the diagnostic process?
Marin-Perez: The diagnostic process is actually pretty similar to any other performance-related task: The first thing you need to do is establish a way to reliably measure your CI/CD pipeline, so you can talk about actual numbers and not just impressions -- too often, people complain because they feel the build is slow, but they don't actually measure things. Measuring is already a task in itself, since a build will typically take different times, depending on what exactly has been changed, so one has to look at average time, maximum time, etc.
Once the team has some reliable measurements on how long builds take, the next step is establishing a threshold: How long should builds take? Here, again, there are many variables that can influence the decision; maybe you want to establish a maximum build time so as to make sure you can respond to critical bugs [quickly] enough, or maybe you just worry about an average build time so your developers aren't stuck waiting for the build. The decision might be tricky, but once you've made it, you just need to routinely compare your threshold with your actual build time: If you're below it, you're fine; if you're above it, your build has grown too slow and you need to take action.
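The measure-then-compare step Marin-Perez describes can be sketched in a few lines. The build durations, thresholds and helper name below are invented for illustration -- in practice the durations would come from your CI server's records:

```python
from statistics import mean

# Hypothetical build durations in minutes, e.g. pulled from a CI server's history.
build_durations = [8.5, 12.0, 9.2, 15.5, 10.1]

# Illustrative thresholds: one for the average case, one for the worst case.
AVG_THRESHOLD = 12.0   # developers shouldn't be stuck waiting longer than this on average
MAX_THRESHOLD = 20.0   # a critical-bug fix must get through the build within this time

def build_too_slow(durations, avg_limit, max_limit):
    """Return True if the build has grown slower than either threshold."""
    return mean(durations) > avg_limit or max(durations) > max_limit

if build_too_slow(build_durations, AVG_THRESHOLD, MAX_THRESHOLD):
    print("Build has grown too slow -- time to take action.")
else:
    print("Build times are within the agreed thresholds.")
```

Whether you gate on the average, the maximum, or both is exactly the judgment call he mentions: the average protects developer flow, the maximum protects your response time to critical bugs.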
Taking action, on the other hand, is also akin to other performance-related tasks. If you have concluded your build is too slow, then, obviously, you have to change something to make it faster. But where do you begin? This is where you need to perform a more detailed analysis of your pipeline and check where time is being spent in each of the individual steps. This way, you can probably find some bottlenecks -- or at least the segments where more time is being spent. These are the segments that, if modified, will yield the best results, so that is where you need to focus first.
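The per-step breakdown he suggests amounts to ranking pipeline stages by time spent. The stage names and timings below are purely illustrative assumptions, not taken from any particular pipeline:

```python
# Hypothetical per-stage timings (in minutes) for one pipeline run.
stage_times = {
    "compile": 3.0,
    "unit tests": 6.5,
    "integration tests": 14.0,
    "packaging": 1.5,
    "deployment": 2.0,
}

total = sum(stage_times.values())

# Rank stages by time spent: the top entries are the best candidates to optimize.
for stage, minutes in sorted(stage_times.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{stage:20s} {minutes:5.1f} min  ({minutes / total:.0%} of build)")
```

In this made-up run, the integration tests dominate the build, so under Marin-Perez's reasoning they would be the first place to look for savings.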
Finally, once you have decided where you want to act, then you enter a cycle of making some modifications and remeasuring until you're satisfied with the result.
What is so challenging about all of this? What causes most companies to stumble?
Marin-Perez: I think the main problem is that, although the general principle of measure-change-evaluate is well-understood, people haven't applied [the principle] to the realm of CI/CD pipelines yet, which means they don't really know what or how to measure.
What's the testing role here in keeping the CI/CD pipeline humming along? From what I'm hearing, it's changing dramatically ... Any insight you have in this area would be greatly appreciated.
Marin-Perez: This is actually a pretty interesting question. There is one thing that I have noticed in recent years, and it's that roles within teams have become more and more specialized: You have the tester, [who] obviously focuses only on testing; then, you have the developer, who only worries about code; then, the business analyst, the UX [user experience] designer, etc. Sometimes, developers are further split into subroles, so you have back-end and front-end developers, each with their specific areas of expertise. [These] kinds of multidisciplinary teams are quite beneficial, although they have an important drawback: Since each individual is so focused on their particular set of well-defined responsibilities, when a new challenge comes up, it's unclear who is meant to address it ... which often results in no one addressing it.
The role of the tester within this kind of team is changing a lot. In some teams, testers do exploratory manual testing and worry about the whole user experience, while in some other teams, they write all the major, end-to-end automated testing -- without manual intervention. When they write automated tests, they need to keep in sync with developers to make sure their end-to-end tests are balanced with the developers' unit and component tests, since very frequently overlaps appear and tests can be consolidated to reduce redundancy. Similarly, I believe the role of the tester could be expanded to coordinate with the DevOps engineer to look at the build pipeline and understand how testing is affecting the overall build performance, trying to find efficiencies when needed.
I honestly think the role of the tester is quite undervalued, and testers should integrate more and more with other responsibilities in the team. Too often, testers work in relative isolation from the rest of the team, and they simply communicate when they believe they have found a bug. In fact, I think other members of the team should collaborate in the testing effort to make sure the activity is better integrated within the whole.