Back in 2013, we were introduced to the concept of an immutable server. An immutable server is, as the term immutable suggests, static. Its configuration is fixed and cannot (or at least should not) be changed. If changes are required, a new server with the new configuration replaces the running server. The reason this is desirable, particularly in cloud and highly automated on-premises environments, is that it simplifies configuration and improves the reliability of the automation systems driving deployments.
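To make that contrast concrete, here is a minimal sketch of "replace, don't change" in plain Python. The FakeInfra client and its methods are hypothetical stand-ins for whatever provisioning tooling you actually use, not a real product API:

```python
# Minimal sketch of the immutable-server pattern: never edit a running
# server; provision a replacement from the new configuration instead.
# FakeInfra is an in-memory stand-in so the example runs on its own.

class FakeInfra:
    def __init__(self):
        self._next = 0
        self.pools = {"web": []}        # pool name -> servers taking traffic

    def provision(self, config):
        self._next += 1
        return f"server-{self._next}"   # a freshly built server, per config

    def add_to_pool(self, pool, server_id):
        self.pools[pool].append(server_id)

    def remove_from_pool(self, pool, server_id):
        self.pools[pool].remove(server_id)

    def destroy(self, server_id):
        pass                            # dispose of the old server entirely


def immutable_update(infra, old_server_id, new_config, pool):
    """Apply a change by replacement: build new, put it in service, retire old."""
    new_server_id = infra.provision(new_config)
    infra.add_to_pool(pool, new_server_id)
    infra.remove_from_pool(pool, old_server_id)
    infra.destroy(old_server_id)        # the old server is never modified
    return new_server_id


if __name__ == "__main__":
    infra = FakeInfra()
    v1 = infra.provision({"app": "store-front", "tls": "1.2"})
    infra.add_to_pool("web", v1)
    v2 = immutable_update(infra, v1, {"app": "store-front", "tls": "1.3"}, "web")
    print(infra.pools)                  # v1 is gone; v2 serves in its place
```

The point of the pattern is that no one ever logs into the running server to edit it; a change means a new server.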
This concept plays out well in cloud and containerized environments even for network and application services, but not so well in traditional, shared infrastructure architectures.
That’s because shared infrastructure, by definition, hosts multiple running services. Based on F5 iHealth data, multiple can mean an average of 123 individual configurations (virtual servers). It is neither practical nor advisable to stop and redeploy the other 122 virtual servers just to make a single change to one of them.
But that does not render the concept impractical or undesirable. The key to adopting immutable infrastructure along with your automation and infrastructure as code (IaC) systems is to move to a per-app architecture.
Why would you undertake such a significant shift in corporate network architecture? Let me quote myself, because I can’t think of a better way to phrase it today:
Because, entropy.
This law also applies to systems to which firmware or system-level updates must be applied. To which hot fixes and patches are deployed. On which emergency tweaks are made to configurations that should, in a perfect world, be changed only through a strictly followed change management process. The problem immutable (disposable) infrastructure is trying to solve is that the more change you introduce into a system, the more crufty and unstable it seems to grow. Disorder. Chaos. Entropy.
It’s not just about configuration changes made in service of an app, or about deploying emergency virtual patches for some recently discovered vulnerability. Those are good reasons to change an application service configuration, but they aren’t the only ones. Hot fixes, patches, and version dependencies are also good reasons you might need to change one of the 123 configurations running on shared infrastructure.
By shifting to a per-app architecture, you eliminate the potential for a change to one or two or even ten of those instances to disrupt the other hundred running on shared infrastructure. Giving each app its own data path essentially sets the stage for an immutable infrastructure approach that will better support the move toward an automated, infrastructure as code-based deployment practice.
This means a wholly software-based application service pipeline – with application services deployed in what is very much a “micro-application-service” architecture, akin to how microservices are deployed within containers.
Julian Dunn of Chef put it well in his blog post, Immutable Infrastructure: Practical or Not?
So if you apply that to the application services that are most tightly coupled (affine) to a given application, you wind up with a two-tiered network architecture: core, common shared services (like network DDoS protection and access via traditional port-based firewalls) that feed into a per-application “stack” that is treated as immutable and deployed and managed using infrastructure as code concepts.
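As a rough sketch of that split in Python (the class and field names below are purely illustrative assumptions, not an actual F5 or IaC schema), the idea is a shared tier that changes rarely and a per-app stack that is versioned and replaced rather than edited:

```python
# Illustrative model of the two-tiered layout: shared core services plus a
# per-app stack treated as a single immutable, versioned unit.
from dataclasses import dataclass, asdict
import hashlib
import json


@dataclass(frozen=True)
class SharedTier:
    """Core services shared by every application (changed rarely)."""
    network_ddos_protection: bool = True
    port_firewall_rules: tuple = ("allow tcp/443", "allow tcp/80")


@dataclass(frozen=True)
class PerAppStack:
    """Application-affine services, deployed and retired as one unit."""
    app: str
    version: str                 # bump this to deploy a replacement stack
    load_balancing: str = "round-robin"
    waf_policy: str = "baseline"
    tls_profile: str = "tls1.2+"

    def fingerprint(self) -> str:
        """Content hash: any change produces a new stack, never an edit."""
        blob = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()[:12]


if __name__ == "__main__":
    shared = SharedTier()
    store_front_v1 = PerAppStack(app="store-front", version="1.0.0")
    # A WAF policy change is a *new* stack definition, not an in-place edit.
    store_front_v2 = PerAppStack(app="store-front", version="1.0.1",
                                 waf_policy="baseline+cve-hotfix")
    print("shared tier:", shared)
    print("store-front v1:", store_front_v1.fingerprint())
    print("store-front v2:", store_front_v2.fingerprint())
```

Because the stack definition is frozen, applying something like a WAF hot fix produces a new definition with a new fingerprint; the running stack gets retired, not patched.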
But you really can’t do immutable without a per-app infrastructure, because you need to decouple the relevant application services from their shared platforms first. If you can use the same platform – just in a software form factor – the process becomes even easier, because you’ve already got much of the knowledge and toolsets you need in place to go full steam ahead toward a per-app, immutable model.
Even if you aren’t considering true immutable infrastructure, the ability to leverage it when it makes the most sense (a new infrastructure version, hot fixes, patches, and so on) will make life easier for both you and the DevOps owners of the app the infrastructure is supporting.
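To close with one hedged example of what that might look like in practice, the sketch below rolls out a hot fix by standing up a patched copy of an app’s stack, verifying it, and only then cutting traffic over. The Automation class and its methods are made-up stand-ins so the example runs on its own; substitute your own tooling.

```python
# Hypothetical sketch of delivering a hot fix the immutable way: deploy a
# patched copy of the per-app stack, verify it, cut over, retire the old one.

class Automation:
    def __init__(self):
        self.live = {}                      # app -> stack currently taking traffic

    def deploy(self, app, stack):
        return {"app": app, **stack}        # stand up a candidate stack

    def health_check(self, candidate):
        return True                         # e.g. synthetic transactions against the candidate

    def cutover(self, app, candidate):
        self.live[app] = candidate          # move traffic to the candidate

    def retire(self, stack):
        pass                                # dispose of the stack; never patch it


def roll_out_patch(automation, app, current, patched):
    candidate = automation.deploy(app, patched)
    if not automation.health_check(candidate):
        automation.retire(candidate)        # failed? production is untouched
        return current
    automation.cutover(app, candidate)
    automation.retire(current)              # the old stack is replaced, not edited
    return candidate


if __name__ == "__main__":
    auto = Automation()
    v1 = {"version": "1.0.0", "waf_policy": "baseline"}
    v2 = {"version": "1.0.1", "waf_policy": "baseline+cve-hotfix"}
    print(roll_out_patch(auto, "store-front", v1, v2))
```

If the candidate fails its checks, production never changes; the failed stack is simply discarded and you try again.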