Just an FYI: That cool funky CI/CD tool you're installing now could be next year's legacy tech

Don’t let the lure of DevOps culture pull you into making rash choices on tooling

Sponsored Demand for services delivered as software – and for that software to be of ever-higher quality and produced on ever-shorter lead times – has driven the growth of the DevOps movement. In the trenches of IT, the practices of that movement are continuous integration (CI) and continuous delivery (CD) – practices that translate directly into tooling.

While CI and CD tools promise to reduce lead times and improve software quality, some tools in this ecosystem are starting to look a little long in the tooth. And the way they have been built means they can introduce into your DevOps-driven digital business the very thing you’re trying to leave behind – legacy.

What does that legacy look like in DevOps? It is an infrastructure that is inflexible, contains silos, is difficult to use and maintain, and that could well pose security issues. This new legacy will delay your build process and frustrate your digital dreams for years to come.

Culture first

A problem with the move to DevOps is that some organisations are so eager to embrace the movement that they don’t properly consider how to implement it. This matters, because implementation has ramifications for the types of tools selected. It’s at this point that the devs should kick the tyres and make sure the tools selected are going to help lay down the CI and CD infrastructures. Ops, meanwhile, must clearly understand the kinds of outcomes and outputs they want from these tools.

Once a consensus has been reached, a proper selection process can begin. Now, before you say: “DevOps is a culture, not a tool”, let’s be clear that without the tools to manage and automate the process from build, through testing, to deployment, DevOps will bring no useful returns to the business. Don’t get me wrong – culture is important. The culture must be there before the tools, so your team understands the important role it plays in the business and is fully committed to DevOps. You don’t become as good as Amazon simply by using the same tools.

To the core

At its heart, DevOps depends upon moving away from the siloed ways of old to working with a pipeline approach. The siloed method batched work into infrequent release schedules – quarterly, for example. Yes, users got new features and bug fixes, but only over a longer time period; new features might creep into scope during that time; and you played catch-up on security. The DevOps pipeline is a stream of updates that means faster delivery, greater focus, and – hopefully – improved security, as issues can be resolved in a more timely way.

Microservices, Docker, and container orchestration engines like Kubernetes are the cornerstones of this world, and this is where the choices you make on tooling may mean you stumble into legacy. Quite simply, not all DevOps tools have been designed to support microservices and containers out of the box. Which tools? Those that are older – among the first to the field – and/or that rely on a number of third-party plug-ins for additional functionality and to support “new” programming developments. Third-party plug-ins are fine for adding bells and whistles to an application, but relying on them for a critical part of your pipeline is unwise. They add dependencies – and with those dependencies come questions such as who supports them when they break, and who is responsible for patches and upgrades. Some tools can have as many as 25 different plug-ins just for Docker and another 16 for Kubernetes, with each plug-in requiring further plug-ins of its own. This creates tens of further dependencies, and a level of complexity that makes for a brittle pipeline.
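
To make that compounding concrete, here is a toy sketch – the plug-in names and the dependency graph are entirely hypothetical, invented purely for illustration – showing how a transitive walk over plug-in requirements turns two direct choices into a much larger maintenance surface:

```python
# Toy illustration: how transitive plug-in dependencies compound.
# The plug-in names and graph below are hypothetical.

from collections import deque

# Hypothetical dependency graph: plug-in -> plug-ins it requires.
PLUGINS = {
    "docker-build": ["docker-commons", "credentials"],
    "kubernetes-deploy": ["kubernetes-client", "credentials"],
    "docker-commons": ["credentials", "variant"],
    "kubernetes-client": ["jackson-api", "variant"],
    "credentials": ["structs"],
    "variant": [],
    "structs": [],
    "jackson-api": ["structs"],
}

def transitive_deps(roots: list[str]) -> set[str]:
    """Breadth-first walk of everything the chosen plug-ins drag in."""
    seen, queue = set(), deque(roots)
    while queue:
        plugin = queue.popleft()
        if plugin in seen:
            continue
        seen.add(plugin)
        queue.extend(PLUGINS.get(plugin, []))
    return seen

chosen = ["docker-build", "kubernetes-deploy"]
everything = transitive_deps(chosen)
print(f"You chose {len(chosen)} plug-ins; you now maintain {len(everything)}.")
# -> You chose 2 plug-ins; you now maintain 8.
```

Two chosen plug-ins become eight components to patch, upgrade, and support – and real plug-in ecosystems are far larger than this toy graph.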

Testing also becomes an issue with heritage DevOps tools. In the age before containers, application testing was straightforward because there was only one service to test, but containers can mean any number of services to test – or roll back. The issue extends across entire cloud services, which will themselves have been extended using lots of plug-ins. Also in the mix are hundreds of supporting shell scripts.

In this world you will find your team spending more of its precious time supporting and maintaining the pipeline than deploying your application. This is one of the big issues with older tools – particularly those that employ a shared library structure, where, say, your test suite relies on one version of Java and your performance test suite needs another. What you have is a pipeline that won’t work because the components can’t co-exist, and you have to reconcile the versions by hand.
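
One way around the shared-library trap is to isolate each stage in its own container, so conflicting toolchains never have to co-exist on one machine. Here is a minimal sketch, assuming Docker is installed and a Gradle project; the image tags, stage names, and commands are illustrative, not prescriptive:

```python
# Minimal sketch: running each pipeline stage in its own container so that
# conflicting toolchain versions (here, two Java releases) can co-exist.
# Assumes Docker is installed; images, stage names and commands are illustrative.

import os
import subprocess

STAGES = [
    # (stage name, container image, command run inside the container)
    ("unit-tests",        "eclipse-temurin:11-jdk", ["./gradlew", "test"]),
    ("performance-tests", "eclipse-temurin:17-jdk", ["./gradlew", "perfTest"]),
]

def run_stage(name: str, image: str, command: list[str]) -> None:
    """Run one stage in a throwaway container with the checkout mounted in."""
    print(f"--- {name} ({image}) ---")
    subprocess.run(
        ["docker", "run", "--rm",
         "-v", f"{os.getcwd()}:/workspace", "-w", "/workspace",
         image, *command],
        check=True,  # abort the pipeline loudly if any stage fails
    )

for stage in STAGES:
    run_stage(*stage)
```

Each stage pins its own Java, so upgrading the performance suite’s JDK never breaks the unit tests – the kind of separation that plug-in-era shared libraries struggle to give you.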

I hinted earlier at the burden of maintaining hundreds of shell scripts, but there is another, really big script-related issue that slows down the dev and test process: when a script fails, the cause won’t always be obvious. That’s annoying but not critical with a small application; if you’re working on a large application that employs multiple pipelines, however, there can be hundreds of scripts – each of which must be written, maintained, and debugged.
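
If you are stuck with a pile of scripts, at least make their failures attributable. A small hypothetical sketch – the script names are invented for illustration – of a runner that reports exactly which script failed, and why, rather than leaving the build simply “red”:

```python
# Hypothetical sketch: a thin wrapper that runs pipeline shell scripts and
# records which script failed, with what exit code and output, so a broken
# build points at a culprit instead of a mystery.

import subprocess
import sys

def run_script(path: str) -> None:
    """Run one shell script, capturing output for the failure report."""
    result = subprocess.run(["bash", path], capture_output=True, text=True)
    if result.returncode != 0:
        print(f"FAILED: {path} (exit {result.returncode})", file=sys.stderr)
        print(result.stderr, file=sys.stderr)  # surface the actual error
        sys.exit(result.returncode)
    print(f"ok: {path}")

# Illustrative pipeline; real pipelines may chain hundreds of these.
for script in ["build.sh", "unit_tests.sh", "package.sh", "deploy.sh"]:
    run_script(script)
```

It doesn’t remove the maintenance burden – hundreds of scripts are still hundreds of scripts – but it shortens the hunt when one breaks.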

No free lunch

Tools that are free to download and use in DevOps are very tempting. They are certainly easier to obtain than their paid counterparts, and once you’ve become a user, going back can be difficult – or, frankly, unappealing if you like the tool. But while your choice may look free, the costs are merely hidden. They manifest themselves in the need for your team to support the plug-ins and scripts, and to keep on top of security patches and upgrades.

Upgrades are a key issue. Free tools may not be updated with the speed you would expect, or need, for enterprise-class DevOps – one toolkit plug-in took almost eight months to be fixed. Then there’s the issue of plug-ins that are simply abandoned by their community owners. How long can you afford to wait for an update to a core tool in your enterprise DevOps toolchain? Indefinitely?

Free-to-download tools can also lack key functionality found in paid offerings, in areas like reporting, security, scalability, and compliance. If you work in a regulated sector that requires all of these, or if you value any of these capabilities, paid tools that don’t rely on plug-ins are the better option. The lack of first-line support can also push up costs. Even good tools demand more than a simple knowledge of their features – they require training in how they are used in a broader, interlinked, team-based development environment. That requires thought about processes, and it can take up to a year for teams to acquire the necessary confidence and skill. Those costs aren’t covered by “free”.

The art of DevOps

Software delivery has become the art of the possible. You want software quickly – no time or cost overruns, fewer bugs? Then DevOps is the answer. But not everything in the DevOps ecosystem delivers on that possibility. Indeed, some heritage tools can re-create the brittle, siloed development infrastructures of the past. The lesson is simple: don’t let the lure of DevOps culture pull you into making rash tool choices. Weigh your options, or you may find you’ve taken a backwards step.

Sponsored by CircleCI.
