Consumer Things. Business Things. Manufacturing Things.
Phones. Tablets. Phablets. Laptops. Desktops.
Compute intense. Network intense. Storage intense.
Home. Work. Restaurant. Car. Park. Hotel.
The landscape is shifting on both sides of the business – apps and clients. The term “user” no longer means just a human being. It now includes systems and things as well, driven automatically to connect, share, and interact with applications across the data center.
Consider, for example, the emerging microservices architectural trend, which breaks monolithic applications into their composite parts. Each part is its own service and presents an API (interface) through which other parts (services) and ‘users’ can communicate.
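To make that concrete, here is a minimal sketch in Python of what one such composite part might look like: a single service that owns its own data and presents a small HTTP API through which other services (and ‘users’) communicate. The ‘inventory’ service, its SKUs, and the port are illustrative assumptions, not taken from any particular product.

```python
# Minimal sketch of one "composite part" exposed as its own service:
# a hypothetical inventory service answering over a small HTTP API,
# which other services (or 'users') call instead of an in-process function.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

# Hypothetical in-memory data standing in for the service's own backing store.
INVENTORY = {"sku-123": 42, "sku-456": 7}

class InventoryAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /inventory/sku-123 -> {"sku": "sku-123", "on_hand": 42}
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "inventory" and parts[1] in INVENTORY:
            body = json.dumps({"sku": parts[1], "on_hand": INVENTORY[parts[1]]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Each service runs, scales, and is versioned on its own.
    HTTPServer(("0.0.0.0", 8080), InventoryAPI).serve_forever()
```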
Not one of the 36% dabbling in microservices (Typesafe, 2015)? Sticking with well-understood app architectures won’t insulate you from the impact of the increasing diversity of ‘users’, especially if you’re diving into the Internet of Things. Our data says some of you are, with 22% of all respondents believing IoT will be of strategic importance over the next 2-5 years and 15% getting a head start with plans to purchase technology to support it in the next 12 months.
That means “things” will have to be viewed like “users”, having their own unique set of needs and requirements with respect to security and performance, not to mention availability.
That means that network and application services tasked with delivering an increasingly diverse set of applications to a growing set of clients in even more locations need to be able to differentiate between a human user and a thing user. To optimize performance and ensure security, it is imperative that the services responsible for both be able to apply the right policy at the right time, given the variables in play at that moment.
That means they need to manage traffic (data and communications, in app terms) within the context of the entire transaction: the user, the app, and the purpose for which such communication is being attempted.
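To put the human-versus-thing distinction into concrete terms, here’s a hedged sketch of how a delivery service might classify a client and pick a policy for it. The header check, the rate limits, and the policy fields are assumptions made up for the illustration, not production logic from any vendor.

```python
# Sketch: tell a human user from a "thing" user and select a policy.
# The classification rule and policy values are illustrative assumptions.

def classify_client(headers: dict) -> str:
    ua = headers.get("User-Agent", "").lower()
    # Browsers identify themselves; most sensors and machine clients do not,
    # or they carry a library or firmware signature instead.
    if any(token in ua for token in ("mozilla", "chrome", "safari")):
        return "human"
    return "thing"

def select_policy(client_type: str) -> dict:
    # Illustrative policies: a thing gets a tighter rate limit and no
    # interactive login flow; a human gets the opposite.
    if client_type == "thing":
        return {"rate_limit_per_min": 60, "auth": "mutual-tls", "compress": False}
    return {"rate_limit_per_min": 600, "auth": "login-page", "compress": True}

# Example: a sensor posting telemetry vs. a person on a laptop.
print(select_policy(classify_client({"User-Agent": "sensor-fw/2.1"})))
print(select_policy(classify_client({"User-Agent": "Mozilla/5.0 (Windows NT 10.0)"})))
```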
You can think of context much in the same way you might have been taught (if you’re old enough, and no, you don’t have to admit it if you’d rather not) about the five “Ws” you need to ask when you’re doing basic information gathering: who, what, where, when, and why. By interrogating traffic and extracting an answer to each of these questions, you can piece together enough context to make an appropriate decision regarding how to treat the exchange. Deny it. Allow it. Scan it. Scrub it. Optimize it. Route it. These are the types of things app services do “in the network”, and they do them better and with greater effect when done within the context of the exchange.
Those five questions are what make up context. It’s not necessarily the case that you need to collect all five (this isn’t Pokémon, after all) to be able to make a decision regarding the proper course of action to take. But you do need to have visibility (access) to all five, in case you do. That’s why visibility into the entire network stack – from layers 2 through 7 – is so important to app services. Each one may need to evaluate a request or response within the context in which it was made, and only visibility into the full stack ensures you can reach in and grab that information when it’s needed.
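As an illustration only, here’s what piecing the five Ws together from different layers of the stack and mapping them to one of those actions might look like; the field names, the layer annotations, and the decision rules are assumptions for the sketch rather than a prescribed design.

```python
# Sketch: build the five Ws from different layers of the stack, then map the
# context to an action (deny, allow, scan, scrub, optimize, route).
# All field names and rules here are made up for the illustration.
from datetime import datetime, timezone

def build_context(conn: dict) -> dict:
    return {
        "who":   conn.get("client_cert_cn") or conn.get("user_agent"),  # upper-layer identity hints (TLS cert, HTTP header)
        "what":  conn.get("http_path"),                                 # layer 7: the resource being asked for
        "where": conn.get("src_ip"),                                    # layer 3: network location
        "when":  datetime.now(timezone.utc).isoformat(),                # time of the exchange
        "why":   conn.get("http_method"),                               # layer 7: intent (read vs. write)
    }

def decide(ctx: dict) -> str:
    # Not every decision needs all five Ws, but they are there if it does.
    if ctx["who"] is None:
        return "deny"        # no idea who is asking
    if ctx["why"] in ("POST", "PUT"):
        return "scan"        # writes get a closer look
    return "optimize"        # reads get cached/compressed and passed on

ctx = build_context({"src_ip": "203.0.113.9", "user_agent": "sensor-fw/2.1",
                     "http_path": "/telemetry", "http_method": "POST"})
print(decide(ctx))  # -> "scan"
```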
That’s one of the things an intelligent proxy brings to the table: the visibility necessary to ensure that network, security, and infrastructure architects (and engineers) can implement policies that require context to deliver the security, speed, and reliability every user – no matter whether human, sensor, or software – ultimately needs.