It’s an application world. One of the consequences, intended or not, is a change in how we measure success. Today’s measurements are in downloads and installs instead of foot traffic; in microseconds and uptime percentages instead of cost per square foot. That means performance is king, and the Praetorian Guard is the infrastructure put in place to ensure performance is preserved.
Ensuring performance implies having a (near) real-time view of it. After all, if you don’t know it’s broken, you can’t fix it. To know it’s broken, you need to be monitoring and measuring the application experience of the customers and employees with whom you engage and do business. And yet research indicates this is not necessarily the case. According to research from CopperEgg, more than half (54%) of organizations monitor only a relatively small portion of their apps: 25 percent or less, to be precise.
To be sure, we need to be more vigilant about monitoring and measuring performance and availability. The two are intimately related: a key component of availability is performance. Poorly performing apps are abandoned, cursed, and deleted with about as much care as a used candy wrapper. We could cite many studies to prove that point, but for the benefit of the 99% who’ve already seen the infographics and read the reports, let’s not. Suffice it to say that performance is critical, and whether we include it in “uptime” or not is a matter of operational policy, not a reflection of reality.
Every microsecond of delay potentially costs the business money, whether in lost productivity or, eventually, lost profit. Time is money in this game of applications, and it’s up to IT as a whole to collaboratively design and implement architectures that support measuring and monitoring application performance and availability. That means understanding what we’re actually measuring and how the numbers impact performance and availability, so that through analysis of the data we can take the appropriate corrective action to meet, and hopefully exceed, user expectations of the application experience.
Don’t Ping Me, Bro
That means moving past simplistic monitoring and measurement techniques. Using a ping to determine uptime of an application, for example, provides no value in terms of measuring performance and very little with respect to availability. As we move even further beyond virtualization into containerization, monitoring of shared systems will continue to degrade in value and force monitoring and measurement up the stack, toward the applications upon which the business is now reliant. Moves toward microservices, too, will have an impact on how and what we measure.
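To make the gap concrete, here’s a minimal sketch in Python of the difference between pinging a host and actually exercising the application. The endpoint URL and latency budget are hypothetical stand-ins, and the third-party `requests` library is assumed; substitute your own app’s health-check path and an SLA-derived threshold.

```python
import time
import requests  # third-party HTTP client: pip install requests

# Hypothetical endpoint and budget -- not from any particular product.
HEALTH_URL = "https://app.example.com/health"
LATENCY_BUDGET_S = 0.5

def check_app() -> dict:
    """Application-layer check: did the app answer, correctly, and fast enough?

    A ping (ICMP echo) only proves the host's network stack is up. This
    check exercises the full request path -- TLS, web server, app code --
    and records how long the round trip actually took.
    """
    start = time.perf_counter()
    try:
        resp = requests.get(HEALTH_URL, timeout=5)
        elapsed = time.perf_counter() - start
        return {
            "reachable": True,
            "healthy": resp.status_code == 200,            # answered correctly
            "within_budget": elapsed <= LATENCY_BUDGET_S,  # and fast enough
            "latency_s": round(elapsed, 3),
        }
    except requests.RequestException:
        # The host may still answer pings while the app itself is down.
        return {"reachable": False, "healthy": False,
                "within_budget": False, "latency_s": None}

if __name__ == "__main__":
    print(check_app())
```

A ping-based monitor would report “up” in every case where the host responds; the check above can distinguish down, broken, and merely slow, which is the distinction the business actually cares about.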
That means re-evaluating both what and how apps are monitored and measured, and how that data can be fed back into systems to enable adjustment when necessary. Individual system performance and availability are important, but when the “app” is distributed and composed of multiple services, it’s necessary to start measuring the “app” based on all its parts, as the sketch below illustrates.
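As a rough illustration of why the composite view matters, consider a request path through several services. The service names and numbers below are hypothetical; the point is the arithmetic. Treating each hop as serially dependent, availabilities multiply and mean latencies add, so four individually “healthy” services can still miss an app-level target:

```python
from dataclasses import dataclass

@dataclass
class ServiceMetrics:
    name: str
    availability: float      # fraction of successful requests, 0.0-1.0
    mean_latency_ms: float   # mean latency of this hop

# Hypothetical services on one request path; plug in your own numbers.
REQUEST_PATH = [
    ServiceMetrics("edge-proxy",  0.9999, 5.0),
    ServiceMetrics("auth",        0.9995, 20.0),
    ServiceMetrics("catalog-api", 0.9990, 45.0),
    ServiceMetrics("database",    0.9999, 12.0),
]

def app_view(path: list[ServiceMetrics]) -> tuple[float, float]:
    """Roll per-service numbers up into an 'app' view.

    For serially dependent services, availabilities multiply (every hop
    must succeed) and mean latencies add (every hop contributes delay).
    """
    availability = 1.0
    latency_ms = 0.0
    for svc in path:
        availability *= svc.availability
        latency_ms += svc.mean_latency_ms
    return availability, latency_ms

if __name__ == "__main__":
    avail, latency = app_view(REQUEST_PATH)
    print(f"composite availability: {avail:.4%}")   # ~99.83%
    print(f"composite mean latency: {latency:.0f} ms")
```

Each service in that path clears “three nines” on its own, yet the app a user actually experiences does not; measuring only the parts hides that.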
Applications and architectures have evolved. It’s (perhaps past) time to evolve the strategies in place for monitoring and measuring their performance and availability, too.
You can read more about what to measure (and why) in “Measuring and Monitoring: Apps and Stacks.”