
So you’re a modern software factory, embracing DevOps. You know that the first thing you need is a Continuous Delivery process. After all, you can’t claim an Agile DevOps practice without Continuous Delivery. You set up a CI/CD pipeline so that your developers’ builds are pushed automatically into SIT (System Integration Testing). Automated tests run and the build gets the green light (or not, as the case may be).

You had to install agents on each server within the SIT environment, but that wasn’t too onerous – there were only 10 machines. And you have monitoring set up to make sure the agents are always running. So there’s no problem, right?

Then one day, your boss comes to your desk. “I’ve been reading up about DevOps,” she says, “and we should be able to push to UAT if the code passes the automated tests in SIT, right? That should save us a load of time.”

You smile and say, “of course”. But now you have a problem. There are several UAT environments and a deployment could go to any of them. That’s another 20 machines on which to install agents. Half are virtual and half are physical, but you get it done, monitoring and all. Of course, the monitoring alerts you every time a virtual server shuts down, so you stop monitoring those servers. But half the time the agent doesn’t start properly when the virtual server is brought back up – and you only find that out when an automated deployment fails.

Still, it’s not too bad. You have 30 deployment agents now, 20 of which are being monitored, and only occasionally does an automated deployment into a virtualised UAT environment fail because its agent isn’t running. All in all, you can live with it.

Then you get called to another meeting.

The UAT team has decided to move everything to the cloud. It’s more agile to simply spin up a server to cope with the increased demand from all the new builds that need testing. Of course, each virtualised cloud-based server is going to need an agent installed, but you can cope with that, right?


You get the agents installed on each of the virtualised servers, writing PowerShell scripts to load the agent at startup. Of course, deployments into these environments cannot take place until the agent is up and running, which slows the process down a bit, but it’s still fairly agile.
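That startup shim is roughly this shape. Here is a minimal sketch for a Linux host (the story’s PowerShell equivalent would follow the same logic); the agent binary path, pidfile location, and start flags are hypothetical placeholders, not any real vendor’s tool:

```shell
#!/bin/sh
# Boot-time shim: start the deployment agent if it isn't already running.
# AGENT_BIN and PIDFILE are invented placeholders for illustration only.
AGENT_BIN="${AGENT_BIN:-/opt/deploy-agent/bin/agent}"
PIDFILE="${PIDFILE:-/var/run/deploy-agent.pid}"

agent_running() {
  # The agent counts as running if its pidfile names a live process.
  [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null
}

start_agent() {
  if agent_running; then
    echo "agent already running (pid $(cat "$PIDFILE"))"
  else
    echo "starting agent: $AGENT_BIN"
    # "$AGENT_BIN" --daemon --pidfile "$PIDFILE"   # real start, commented out in this sketch
  fi
}

start_agent
```

And this shim is exactly where the fragility creeps in: if it fails or races the network at boot, the agent is silently down until a deployment fails.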

Then your deployment tool gets an upgrade. And that makes a change to all the agents necessary. How many do you have again? Where are they installed?

Still, you’re on top of things. You have a decent DevOps practice. All you need now is the final push to go from Continuous Delivery to Continuous Deployment. Once that’s done, you’ll have reached the top of the DevOps maturity model with automated deployments all the way into Production.

So you talk to Production Support, explaining that you want to install a piece of software on their mission-critical servers – software that makes changes to any piece of production software on the machine. Since it’s a black box, you’ve no idea what protocol it communicates over or whether it’s secure. Ten minutes later, you leave the meeting with your ears still ringing.

You’ll never implement Continuous Deployment since you have no way of automating deployments into Production. But hey, you’ve still got an Agile solution, right?

Then the decision is made to outsource the IT infrastructure. Now you’re no longer running Windows Server 2012 and Red Hat Linux; you’re on Windows Server 2016 and Oracle Linux. Is the agent compatible with Oracle Linux? You make a quick call to the vendor, who says it should work (Oracle Linux is just Red Hat, right?) but it’s not certified. So if it fails for any reason, you’re out of luck.

A decision is also made to install IBM DataPower for load balancing. Configuration changes need to be delivered to DataPower as part of the deployment process. Of course, there’s no such thing as an “agent” for a web appliance like DataPower, so those changes will have to be applied manually and co-ordinated with the automated software delivery process. With spreadsheets and email, you can probably cobble together something that isn’t too bad.

Then a new Chief Software Architect joins. He’s young and ambitious and likes the sound of microservices and containers. He starts a project that breaks the monolithic code base up into a bunch of micro and mini services hosted in various containers. The developers love the idea (it looks great on their CVs!). Now your deployments potentially need to target lots of different Docker containers. Can you even install an agent in a container? Would that be practical?

At this point, you take a long, hard look at your new infrastructure, with its load balancers and containers and its mix of physical, virtual, and cloud-based servers, and decide that those deployment agents of yours were a really bad idea.

Agents are the Enemy of Agile

If this nightmare sounds familiar to you, you’re not alone.

Agents are the enemy of Agile. They restrict where deployments can be performed and on which platforms. They need monitoring to ensure they are operational. They need to be updated. They need to be installed on every target – even cloud-based servers. Some cloud providers have their own deployment solutions that you need to integrate with, rendering your agent either useless or terminally compromised. And – worst of all – it’s a struggle to convince Production Support to allow them into the Production environment. That means you are stuck with a manual deployment process in the very environment where an automated, repeatable, controlled, and audited process matters most – production – and you will never achieve Continuous Deployment, which is the goal of DevOps in the first place.

Application Release Automation (ARA) solutions that use agents also tend to charge for them; it is not unusual to see tools licensed per endpoint. When you’re deploying to physical servers, that’s acceptable. But sites are now moving to containers and cloud-based infrastructure, spinning up potentially hundreds of servers to meet short-term testing spikes. A per-agent licensing model cannot keep up with that sort of agile testing process.

That’s why modern ARA solutions such as DeployHub don’t use agents. Operating over open protocols such as SSH or FTPS brings a number of advantages:

  • The ports are already open. There are generally no firewall changes to make in order to communicate with the deployment target.
  • You can deploy into production, since the same ports already open for administering the production servers are used to deploy the changes. Continuous Deployment becomes possible without installing “black box” software on the production servers.
  • You can target any operating system that supports these open protocols. Want to deploy to Mainframe, iSeries, Linux, Windows, OpenVMS, and Tandem from a single ARA solution? Then pick a tool that operates without agents.
  • Want to make a configuration change to a web appliance over a RESTful or SOAP-based API? No agent, no problem – deliver the change over the API and record it as a successful component delivery.
  • You can deploy to appliances such as IBM DataPower and to platforms such as Salesforce.
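As a concrete illustration, an agentless deployment step can be as simple as pushing an artifact over the SSH port that is already open for administration. Here is a minimal sketch; the host, user, paths, and service name are invented placeholders, and the dry-run flag prints the commands instead of executing them:

```shell
#!/bin/sh
# Agentless deployment sketch: copy an artifact over SSH, then restart
# the service. No agent on the target -- just the admin port that is
# already open. All names below are illustrative placeholders.
DRY_RUN="${DRY_RUN:-1}"   # set to 0 to actually run scp/ssh

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

deploy_component() {
  artifact="$1"; host="$2"; dest="$3"
  run scp "$artifact" "deploy@$host:$dest"
  run ssh "deploy@$host" "sudo systemctl restart myapp"
}

deploy_component build/myapp.war uat-01.example.com /opt/myapp/myapp.war
```

Because the transport is plain SSH, the same step works unchanged whether the target is physical, virtual, or a freshly spun-up cloud instance – there is no agent to install, monitor, or license.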


Deployment agents are incompatible with an Agile development and testing approach. By compromising the ability to deliver Continuous Deployment, they’re also the enemy of DevOps. In short, if you want to achieve the full potential of Agile; if you want to implement a mature DevOps solution with Continuous Deployment into Production; if you want to reduce your costs and downtime – lose your agents.