The State of Application Strategy: What’s Really Going on with Multicloud?

Written by: Lori MacVittie

One of the things Cindy Borovick and I get to do every year is survey the market at large about the State of Application Strategy (SOAS). Our focus is, of course, on the potential impact—or disruption—to application delivery.

Because changes in applications—how they’re constructed, how they communicate, what data they exchange, where they’re deployed, and how they’re operated—have profound impacts on application delivery.

In our research this year, we continued to see enterprises settling on a hybrid, multicloud estate. That is, their heterogeneous applications (traditional and modern) are distributed across a heterogeneous infrastructure (core, cloud, and edge).

What’s also interesting is that organizations continue to repatriate workloads from the public cloud to their traditional data centers. That trend has continued to loom large in conversations across the industry, culminating this year in an almost frantic rush to understand why.

So, when Cindy and I were deciding what focus we should take in our fourth and final “mini” survey this year under the SOAS banner, we immediately tagged multicloud as the central theme.

We were not disappointed with the results. While we won’t be putting together a formal report for this one—hey, we’re working on SOAS 2025 right now, too—we wanted to make sure we shared some of the insights we gleaned from this latest survey. So, without further ado, let’s dig in.

Repatriation is Real

We have seen over several years in our core research that repatriation is real. But until this year, the rest of the industry seemed content to ignore it or claim it was largely due to failed “lift and shift” efforts. While we grant failed efforts contribute to repatriation, we wanted to understand how much it contributed. So, we dug in deeper to ask organizations why they repatriated workloads.

It turns out that security and cost top the list of reasons for repatriation of applications from the public cloud to on-premises.

Driving this may be a lack of expertise: 68% of respondents agreed it was difficult to find staff who could execute on a multicloud security strategy.

Now, I specifically call out the movement of applications from public cloud to on-premises because organizations are also moving applications between public clouds—and for different reasons.

When applications migrate between public clouds, it is often architecture that drives the decision, with cost coming in a close second. Security is rarely the reason, with fewer than one in five citing it as a reason to switch public cloud providers. Even performance is more likely than security to drive a migration.

Taken together, what repatriation means is migration of workloads post-deployment. It’s not a temporary change of address; it’s a permanent shift from one location to another, for reasons that only surface after the fact.

All this points to a much bigger problem: the inability to identify the best location for an application pre-deployment. A lack of visibility into how an application will perform, what it will cost, and how it can be secured all contributes to this trial-and-error approach to application deployment.

That said, none of this changes the reality of a hybrid, multicloud IT estate. Organizations are still relying on core, cloud, and edge for their applications, and their plans for AI applications and models only reinforce that reality. Nearly all respondents—97%—agree that “one size fits all” does not exist for application deployment.

The question is, what is the right mix then?

Less than one-third (30%) of organizations view their “ideal state” as 100% of applications in the public cloud, and a mere 6% see 100% on-premises as ideal. The majority see some mix of both types of locations as ideal, with more than one-third (36%) targeting 80% cloud and 20% on-premises, and another 15% looking at an even split of 50% cloud and 50% on-premises.

Choosing the Right Location

If, as we posit, the default state of an organization is applications in all three types of environments—core, cloud, and edge—then the next question is how to match applications with locations. The better we get at doing that pre-deployment, the less repatriation and migration will occur. As neither effort is free, being able to identify the right location before deployment would certainly lower the total cost of ownership of an application over its lifetime.

So, we asked multiple questions to come up with a “profile” for each location. One of the factors we wanted to understand was the primary reason for choosing each location. We offered six criteria:

  1. Scalability
  2. Speed to deploy
  3. Application performance
  4. Ease of operations
  5. Compliance/regulatory
  6. Proximity to data

It turns out that each location has a unique profile with very different criteria topping the list of reasons to choose it.

Public cloud, for example, is chosen for speed to deploy, while ease of operations and proximity to data drive decisions toward on-premises. Interestingly, proximity to data also drives decisions toward edge computing, at least when paired with performance and scalability.

Ultimately, there is no single factor driving decisions toward one location over another, but rather a combination of factors that results in a choice.
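
To make that combination concrete, consider a minimal sketch of how a weighted match between an application’s needs and each location’s strengths might work. Everything below is a hypothetical illustration: the location profiles, the weights, and the example application are assumptions made for the sake of the sketch, not numbers from the survey.

    # Hypothetical sketch: scoring deployment locations against an
    # application's weighted needs. Profiles and weights are illustrative
    # assumptions, not SOAS survey data.

    CRITERIA = ["scalability", "speed_to_deploy", "performance",
                "ease_of_operations", "compliance", "proximity_to_data"]

    # How well each location satisfies each criterion (0.0 to 1.0).
    LOCATION_PROFILES = {
        "public_cloud": {"scalability": 0.9, "speed_to_deploy": 0.9,
                         "performance": 0.6, "ease_of_operations": 0.7,
                         "compliance": 0.5, "proximity_to_data": 0.4},
        "on_premises":  {"scalability": 0.5, "speed_to_deploy": 0.4,
                         "performance": 0.7, "ease_of_operations": 0.8,
                         "compliance": 0.9, "proximity_to_data": 0.9},
        "edge":         {"scalability": 0.7, "speed_to_deploy": 0.6,
                         "performance": 0.9, "ease_of_operations": 0.5,
                         "compliance": 0.6, "proximity_to_data": 0.9},
    }

    def best_location(app_weights: dict) -> str:
        """Return the location whose profile best matches the application's
        weighted criteria (higher weight = more important to this app)."""
        def score(location: str) -> float:
            profile = LOCATION_PROFILES[location]
            return sum(app_weights.get(c, 0.0) * profile[c] for c in CRITERIA)
        return max(LOCATION_PROFILES, key=score)

    # A latency-sensitive IoT analytics app that must stay close to its
    # data scores highest at the edge with these hypothetical numbers.
    iot_app = {"performance": 0.9, "proximity_to_data": 0.9, "scalability": 0.5}
    print(best_location(iot_app))  # -> "edge"

The point isn’t the specific numbers; it’s that no single criterion decides the outcome, which is exactly what respondents told us.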

While public cloud may indeed view on-premises as a “competitor,” per recent repatriation discussions in the industry, the reality is that each of the three locations serves a purpose for an increasingly distinct set of applications.

What remains now is to understand application profiles and match them to the location best suited to serve them. As an industry, we’re fairly good at identifying that applications serving IoT devices are likely a good fit for edge computing. Similarly, we understand which applications are better served by SaaS than by a packaged alternative.

But when it comes to other applications—AI chatbots, copilots, assistants, mobile apps, web apps, enterprise apps—the answers are often less clear, with little consensus.

Observability is the Answer

I remain convinced that a significant contributor to this lack of consensus is incomplete observability. Organizations are unable to build a profile for an application because they lack the metrics that matter to that profile. A more complete picture—inclusive of costs—would ultimately enable AI models to predict the best location for an application simply by profiling it.

But we continue to hear, through our research, that lack of visibility remains a significant challenge—particularly for organizations operating in multiple locations. This is largely thanks to delivery and security sprawl, with disparate tools and services employed per location. With no common layer to generate the metrics that matter, organizations must become data science experts simply to unravel the simplest of statistics with respect to cost, performance, and security.
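
What such a common layer might look like is easier to see in code than in the abstract. The sketch below is purely illustrative: it normalizes telemetry from two hypothetical sources into one shared record shape so cost, performance, and security can be compared side by side. Every field name, payload shape, and adapter here is an assumption, not a reference to any real product’s API.

    # Hypothetical sketch of a "common layer" for metrics: per-location
    # telemetry is normalized into one schema so cost, performance, and
    # security signals can be compared across core, cloud, and edge.

    from dataclasses import dataclass

    @dataclass
    class AppMetrics:
        app: str
        location: str          # "core" | "cloud" | "edge"
        monthly_cost_usd: float
        p95_latency_ms: float
        security_events: int   # e.g., attacks blocked during the period

    def from_cloud_billing(raw: dict) -> AppMetrics:
        # Adapter for a hypothetical cloud provider's export format.
        return AppMetrics(app=raw["service"], location="cloud",
                          monthly_cost_usd=raw["cost"],
                          p95_latency_ms=raw["latency_p95"],
                          security_events=raw["waf_blocks"])

    def from_datacenter_cmdb(raw: dict) -> AppMetrics:
        # Adapter for a hypothetical on-premises monitoring feed.
        return AppMetrics(app=raw["app_name"], location="core",
                          monthly_cost_usd=raw["amortized_cost"],
                          p95_latency_ms=raw["p95_ms"],
                          security_events=raw["ids_alerts"])

    # Once every environment maps into AppMetrics, questions like "where
    # is this app cheapest?" become one-liners instead of a data science
    # project, e.g.:
    rows = [
        from_cloud_billing({"service": "checkout", "cost": 1200.0,
                            "latency_p95": 180.0, "waf_blocks": 42}),
        from_datacenter_cmdb({"app_name": "checkout", "amortized_cost": 900.0,
                              "p95_ms": 95.0, "ids_alerts": 3}),
    ]
    cheapest = min(rows, key=lambda m: m.monthly_cost_usd)
    print(cheapest.location)  # -> "core" with these made-up payloads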

Unsurprisingly, cost efficiency and visibility were the top two benefits respondents cited for a cloud-agnostic approach to application delivery. For application security, the same two benefits top the list, with visibility first. That matters, because the top pain point for organizations operating across multiple locations was mitigating zero-day threats, cited by 51% of respondents, with troubleshooting distributed application issues a close second at 50%.

As organizations grapple with the complexities of a hybrid, multicloud IT estate, the path to an optimized application strategy hinges on visibility. Complete observability isn’t just a buzzword—it’s the foundation for informed, pre-deployment decisions that minimize costly repatriation and migration efforts. Without unified metrics across environments, enterprises remain mired in a trial-and-error approach to application deployment, exacerbated by the security, performance, and cost challenges unique to each location.

The solution lies in a cloud-agnostic, unified platform that consolidates application delivery and security data across all environments. With a common layer for observability, organizations can shift from reactive troubleshooting to proactive optimization, driving efficiency and mitigating critical threats with confidence.

The road to a stable, scalable, and secure hybrid IT landscape is paved with visibility. By investing in tools that unify observability across core, cloud, and edge, organizations can finally bring order to the complexity of multicloud, ensuring that each application finds its ideal home—wherever that may be.