
CTO advocates automated tests and continuous delivery

Automated testing and continuous delivery have become the driving forces for CTO Andy Piper as Push Technology evolves its middleware platform.

Andy Piper is CTO at London-based Push Technology, which provides a Java-based middleware platform that helps U.K. developers build applications that must push a lot of messages to a lot of users. Most of its clients publish statistical information, either on financial instruments (such as stocks and bonds) or for online gambling. Piper has pushed his development team forward by espousing continuous delivery and automated testing.

When it comes to functional testing, Piper says everything has to be automated. "Manual tests are almost valueless," he said. Manual tests take too much time, and he said he needs tests that are quick, clear and repeatable. Those requirements naturally lead to automated tests. He also pointed out that there's practically zero user interface to a middleware platform, which removes a lot of the need for user experience testing.

Conducting performance tests is one area where Piper sees benefits to manual testing. He pointed to Gil Tene's research on latency at Azul and explained that for performance testing, he's not looking for average behavior; he's analyzing the effect of the outliers. He said that using tools like HdrHistogram and jHiccup and analyzing the results intuitively works better for his team than trying to set up reliable automated performance tests.
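The outlier-focused view Piper describes can be sketched in a few lines of plain Java. This is a simplified stand-in for what HdrHistogram does far more efficiently, and the latency numbers are simulated for illustration, not Push Technology's data: a workload where most requests take around 100 microseconds but occasional pauses take 50 milliseconds produces a mean and a high percentile that tell very different stories.

```java
import java.util.Arrays;

public class TailLatency {
    // Simulated request latencies in microseconds: most near 100 us,
    // with occasional 50 ms pause-style outliers (every 50th sample).
    static long[] simulatedLatencies() {
        long[] latencies = new long[1000];
        for (int i = 0; i < latencies.length; i++) {
            latencies[i] = (i % 50 == 0) ? 50_000 : 100 + (i % 20);
        }
        return latencies;
    }

    // Percentile over a sorted copy; rank rounded up and clamped.
    static long percentile(long[] samples, double p) {
        long[] sorted = samples.clone();
        Arrays.sort(sorted);
        int idx = (int) Math.ceil(p / 100.0 * sorted.length) - 1;
        return sorted[Math.max(0, Math.min(idx, sorted.length - 1))];
    }

    public static void main(String[] args) {
        long[] latencies = simulatedLatencies();
        long sum = 0;
        for (long l : latencies) sum += l;
        // The mean is dragged far above the typical request by the outliers,
        // while the high percentiles expose the outliers themselves.
        System.out.println("mean  = " + (sum / latencies.length) + " us");
        System.out.println("p50   = " + percentile(latencies, 50.0) + " us");
        System.out.println("p99   = " + percentile(latencies, 99.0) + " us");
        System.out.println("p99.9 = " + percentile(latencies, 99.9) + " us");
    }
}
```

Here the median stays near 110 microseconds while the mean is roughly ten times higher and p99 lands on the 50-millisecond outliers, which is why average-centric reporting hides exactly the behavior Piper's team is hunting for.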

Piper said the important aspects of functional testing are maintaining quality and moving quickly. "It's about enabling the developers to make changes more confidently," he said, "so they work more efficiently." Automation is an important part of keeping up with the pace at which his developers are able to make changes and making sure they get the feedback they need as soon as possible. But managing a large battery of automated tests can be challenging. 

Some tests break bad

Most of the tests are very straightforward, according to Piper; they either pass or fail and the results are very accurate. However, some tests have a tendency to fail when they should pass -- or they fail for the wrong reason. He calls these tests Heisentests, after the Heisenberg uncertainty principle. These tests are "a bit of a bugbear" for Piper right now.

The Heisentests are tricky because they can't be trusted. A failed result may be accurate and require a developer to recheck the work and fix something. Or a failed result might mean that some detail is slightly different than expected and everything is actually working as it should. Developers don't appreciate being sent on a wild goose chase, especially when the supposed target is an imaginary flaw in their code.

The Heisentests are a problem that persists at Push Technology for the same reason that technical debt persists at many organizations: There aren't enough staff hours to fix the misfiring tests and meet project deadlines. However, the problem has reached a point where it must be addressed, and Piper is starting by having his team sort out the good tests from the bad. He said he has some testers working to sort the tests using JUnit categories. This way his developers will know which tests to question right away, and the team will know which tests to overhaul when they have time to pay down the technical debt.
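In JUnit 4, that sorting is typically done with the `@Category` annotation and the `Categories` runner. The self-contained sketch below shows the underlying mechanism with a hypothetical `@Flaky` marker annotation and reflection (the test names are invented, and a real suite would use JUnit's own annotations rather than hand-rolled partitioning):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class TestPartition {
    // Marker annotation, analogous to a JUnit 4 category class.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Flaky {}

    // Hypothetical test methods; only the annotation matters here.
    public void testConnect() {}
    public void testPublish() {}
    @Flaky public void testReconnectUnderLoad() {}

    // Partition test methods by whether they carry the @Flaky marker,
    // so trusted and untrusted tests can be run (or triaged) separately.
    static List<String> methodsTagged(Class<?> cls, boolean flaky) {
        List<String> out = new ArrayList<>();
        for (Method m : cls.getDeclaredMethods()) {
            if (m.getName().startsWith("test")
                    && m.isAnnotationPresent(Flaky.class) == flaky) {
                out.add(m.getName());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println("trusted: " + methodsTagged(TestPartition.class, false));
        System.out.println("flaky:   " + methodsTagged(TestPartition.class, true));
    }
}
```

With the partition in place, a failure in the trusted set is worth a developer's immediate attention, while a failure in the flaky set is a candidate for the technical-debt queue rather than a wild goose chase.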

Continuously delivering value

The search for efficiency has led Push Technology to adopt continuous delivery practices. Piper said the team uses Maven for builds and Jenkins for automation, a popular combination. Right now, every change his team commits is automatically built into a new shippable version of the platform as soon as it passes the battery of automated tests.
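A Maven-plus-Jenkins setup of this kind is commonly expressed as a declarative pipeline. The following is a minimal, generic sketch of that pattern, not Push Technology's actual configuration; the stage names and Maven goals are illustrative:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'mvn -B clean compile' }
        }
        stage('Test') {
            // Gate every commit on the full automated test battery.
            steps { sh 'mvn -B test' }
        }
        stage('Package') {
            // Each change that passes becomes a shippable artifact.
            steps { sh 'mvn -B package' }
        }
    }
}
```

The key continuous-delivery property is visible in the stage order: nothing is packaged as shippable until the automated tests have passed, which is what lets a passing commit become a releasable version without manual intervention.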

Piper deliberately chose continuous delivery over continuous deployment because "enterprise clients want to peg everything to particular releases." It's important for enterprise developers to update middleware at their own pace and to be able to rely on the platform to remain stable.

Push Technology is working on a cloud release that will likely be aimed at midsize businesses. That version "will probably be more of a continuous deployment model," Piper said. He said that one of the challenges of moving to continuous deployment will be making sure all the code that goes into production is as hardened as it should be. "I love all my developers to death," he said, "but I'd still feel like I was being irresponsible if I don't keep a close eye on them."


Join the conversation



It sounds like they are finding out that test automation is easy, but good test automation is hard. For some reason, test automation is always thought of differently than other software development. It shouldn't be. It would benefit from many of the same approaches applied to development, including creating more but smaller tests that don't break bad, unit tests for the tests to detect potential breakage, etc. One area where we've seen vast improvements is in moving from automation of the test to automation of the test design, with model-based test automation frameworks.
Thanks for the feedback, Mcorum. I know model-based test automation has been around for a while, but I haven't had much direct exposure to it yet. Having the computer automatically design effective tests sounds too good to be true. How does it work? What am I missing?
What you are missing here is that tests of any sort have to be thought through and designed properly to show what you want to find out. In order to have robust automated tests, you have to think very carefully about what could go wrong and try to provoke some of this, or be able to spot when something unusual is happening. If the test environment/data is not completely under control, you have to decide which of the many checks in the test are really necessary to show error behaviour, and which allow ranges of possible values because they are not fundamental to whether the software is behaving properly or not. Sometimes too much checking of unimportant values just muddies the view. Only a person can decide what is fundamental or not.
Thanks HMG, that's pretty much what I was thinking. But the literature on model-based test automation (which I have admittedly not looked deeply into) seems to suggest that test generation can be automated. From the bits and pieces I have read, it sounds like maybe there's a way to build all the necessary person decisions about what's fundamental into a model, such that a large portion of the individual tests are automatically generated. Possibly even automatically designed. If there's a reliable way to calculate (in pure math that computers excel at) what tests will provide the best value, I'd be all for it. But I'm sort of reserved about it, because what sounds too good to be true usually is.
Model-based testing is interesting, but let's be clear: you cannot auto-generate tests. You might auto-generate a single scenario with asserts that would effectively be checking, but it would only check whatever the generator was programmed to generate.

Automation may be one of the more misunderstood tools in the dev team's toolbox today. There are so many kinds, so many layers they can apply at, and they can be misused very easily.
To me, the above reads as Piper's views and wish list; not necessarily everything said will be achieved.
Automating completely is really challenging, and there may not be ROI in going down that route.