This componentization of IT mirrors the componentization of the applications it is tasked with securing and delivering. It's estimated that 80 to 90% of modern applications are composed of third-party components, most of which are open source. The benefits of doing so include speed, responsiveness to change (agility), and a reduction in the cost to create the software. After all, if someone else already wrote the code for a wheel, why reinvent it?
There are no estimates as to just how componentized IT may be today, but the answer to how componentized it will be in the future is clear: very.
We don't build our own monitoring systems anymore. We adopt one, like Prometheus. We don't develop our own search engines; we integrate with Elasticsearch or Lucene. We don't have to design and develop deployment and infrastructure controllers; we have Helm and Terraform. We're no longer asked about integrating with systems; we are asked about our support for ecosystems.
We build systems out of a software stack rather than developing each component ourselves.
This system-level thinking is pervasive in development and it's beginning to have a profound impact on the way all software—commercial and custom—is developed. It is also having a significant impact on the way we architect the network.
A few years ago I noted that microservices were breaking up the network. This remains a break-up in progress, for reasons that are closely tied to the mindset of DevOps. DevOps practitioners are more likely to think in terms of componentized systems, particularly when influenced by cloud. As they continue to encroach on traditional NetOps and operations turf, they bring their way of thinking with them. That means stacks instead of solutions.
This perspective leads naturally to the adoption of individual application services that better fit the mode of operation and thinking in which DevOps operates today. Single-purpose, functionally focused application services are used to compose a data path rather than construct one.
That means load balancing is load balancing. Ingress control is ingress control. And an API gateway is an API gateway. With a variety of application services, operational artisans compose (assemble) a data path that stretches from code (the app) to the customer (the client).
We can see this in the extraordinary adoption rates of targeted services such as API gateways, ingress control, and bot defense reported in this year's State of Application Services report.
This shift has not gone unnoticed. As digital transformation continues to force businesses to redefine themselves and decompose into services represented by APIs and applications (digital capabilities), it is also dramatically changing the way we design, develop, and deliver application services.
IP-based routing has always been the way data paths are architected. Route this traffic here, and this type of traffic there, and if there's something in the payload that matches X then route the traffic over there. It's very network specific and thus tightly couples the data path to the network on which it's deployed.
That makes it difficult to replicate in other environments, like a public cloud. While you can likely reuse policies, you won't be able to take advantage of the configuration binding the data path to the network.
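To make the coupling concrete, here is a minimal sketch of what a traditionally architected, network-bound data path decision looks like. All addresses, pool names, and the payload marker are hypothetical, invented for illustration; they are not drawn from any real configuration.

```python
# Hypothetical sketch of IP-based data path routing.
# Every value below is an illustrative assumption, not a real config.

LEGACY_ROUTES = [
    # (destination IP prefix, pool of backend addresses)
    ("10.1.2.", ["10.1.2.10", "10.1.2.11"]),  # app tier on this subnet
    ("10.1.3.", ["10.1.3.20"]),               # reporting tier
]

def route_by_ip(dest_ip: str, payload: bytes) -> list:
    """Pick a backend pool by matching the destination IP prefix.

    If the payload contains a special marker, divert to a dedicated pool.
    Every decision depends on addresses that exist in only one network.
    """
    if b"X-Priority: high" in payload:  # payload match -> special pool
        return ["10.1.9.5"]             # hard-coded, non-portable address
    for prefix, pool in LEGACY_ROUTES:
        if dest_ip.startswith(prefix):
            return pool
    return []  # no matching route

# Moving this policy to a public cloud means rewriting every address above.
```

Because the rules are expressed in terms of one network's address plan, nothing here survives a move to an environment with different subnets.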
Both containers and cloud are forcing data paths to move up the stack and be assembled at the application layer from application services. That's much more portable across environments, because you're operating on metadata, like host names or tags and labels, that is not bound to the network.
Ultimately that means we need to shift away from configurations to policies that can assemble data paths without being bound to IP addresses and environments.
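The same data path can be sketched as a metadata-driven policy instead. The host names, label, and logical service names below are hypothetical assumptions; the point is that nothing in the policy mentions an IP address, so the same rules can be applied unchanged wherever the services run.

```python
# Hypothetical sketch of a metadata-driven data path policy.
# Host names, labels, and service names are illustrative assumptions.

POLICY = [
    # (metadata to match, logical service to route to)
    ({"host": "api.example.com"}, "api-gateway"),
    ({"host": "www.example.com"}, "web-frontend"),
    ({"label": "tier=reporting"}, "reporting-svc"),
]

def route_by_metadata(request_meta: dict):
    """Pick a logical service by matching request metadata, not addresses.

    The policy says nothing about IPs or subnets, so it is portable:
    a data center, container cluster, or public cloud each resolves the
    returned service name to local addresses on its own.
    """
    for match, service in POLICY:
        if all(request_meta.get(k) == v for k, v in match.items()):
            return service
    return None  # no matching policy entry
```

This is the shape of the shift the paragraph above describes: the environment-specific binding (name to address) is pushed down into each environment, while the policy itself stays environment-neutral.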
There is no doubt that we're moving from solutions to stacks, from manual processes to pipelines. As we expand our digital capabilities across business and operations, the need for composition and control over the data path will continue to move up the stack and rely more heavily on the app services that direct it.