F5 2025 Technology Outlook

Written by

Lori MacVittie, Distinguished Engineer
Laurent Quérel, Distinguished Engineer
Oscar Spencer, Principal Engineer
Ken Arora, Distinguished Engineer
Kunal Anand, Chief Innovation Officer
James Hendergart, Sr. Dir. Technology Research

While generative AI has certainly had one of the biggest impacts on the enterprise in 2024, it is not the only trend or technology making waves. A maelstrom of costs and complexity is driving repatriation of workloads from the public cloud, most notably storage and data. No doubt this movement is also driven by the need to get the enterprise data house in order to take advantage of the promises of AI.

This leaves enterprises with a hybrid IT estate spread across public cloud, on-premises, and edge computing. While we anticipate significant shifts in workloads from the public cloud to on-premises, we do not believe enterprises will go “all in” on any location. They will remain as they have been, staunchly hybrid.

This leaves the enterprise with significant challenges in standardizing security, delivery, and operations across disparate environments. The confusing array of APIs and tools remains an ongoing threat to the stability and scale of digital business.

This is the context in which F5 explores technologies and how they might impact the enterprise and, subsequently, the impact on application delivery and security. The consequences inform our plans, our strategy, and further exploration of emerging technology. With that perspective, a select group of F5 technology experts offer their insights on the five key technologies we believe will have the biggest impact on the enterprise and, thus, application delivery in 2025.

2025 Technology #1: WebAssembly

This reality is driving the first of our technologies to watch in 2025: WebAssembly (Wasm). Wasm offers a path to portability across the hybrid multicloud estate, delivering the ability to deploy and run applications anywhere a Wasm runtime can operate. But Wasm is more than just a manifestation of the promise of portable code. It offers performance and security benefits while opening new possibilities for enriching the functionality of browser-based applications.

Oscar Spencer, Principal Engineer, explains:

WebAssembly in the browser isn’t expected to undergo drastic changes throughout 2025. The most significant update is the continued support for WebAssembly Garbage Collection (GC), which has already been integrated into Chrome. This will benefit languages like Dart and Kotlin that rely heavily on GC and are looking to expand their presence in browser environments. There is also potential for improvements in Python's usage within browsers, although it’s still early to predict the full impact.

The bigger developments, however, are happening outside of the browser with the release of WASI (WebAssembly System Interface) Preview 3. This update introduces async and streams, solving a major issue with streaming data in various contexts, such as proxies. WASI Preview 3 provides efficient methods for handling data movement in and out of Wasm modules and enables fine-tuned control over data handling, like modifying headers without processing entire request bodies. Additionally, the introduction of async will enhance composability between languages, allowing for seamless interactions between async and sync code, especially beneficial for Wasm-native languages. As WASI standards stabilize, we can expect a significant increase in Wasm adoption, providing developers with robust tooling and a reliable platform for building on these advancements.
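The header-versus-body distinction Spencer describes can be sketched in a few lines. The Python below is a hypothetical illustration of the pattern, not WASI's actual API: a filter rewrites headers up front while passing body chunks through without ever buffering the full payload.

```python
def stream_with_header_rewrite(headers, body_chunks):
    """Rewrite headers eagerly; stream the body through lazily."""
    # Headers are small and available immediately, so we can inspect
    # and modify them without touching the request body at all.
    out_headers = {k.lower(): v for k, v in headers.items()}
    out_headers["x-proxy"] = "wasm-filter"  # hypothetical added header

    # The body is yielded chunk by chunk as it arrives; the full
    # payload is never held in memory, which is the point of streams.
    def body():
        for chunk in body_chunks:
            yield chunk

    return out_headers, body()
```

The same shape applies whether the chunks come from a socket, a file, or a WASI stream handle: the filter's memory footprint stays constant regardless of body size.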

Assuming Wasm can solve some of the portability issues inherent in previous technologies, it would shift the portability problems 95% of organizations struggle with today to other critical layers of the IT tech stack, such as operations.

Racing to meet that challenge is generative AI and the increasingly real future that is AIOps. This once-fantastical view of operations—changes and policies driven by AI-based analysis informed by full-stack observability—is closer to reality every day thanks to the incredible evolutionary speed of generative AI.

2025 Technology #2: Agentic AI

In the space of less than a year, autonomous agents have emerged to take over tasks once handled by discrete AI functions. Dubbed agentic AI, this capability stands poised not only to reshape operations but to displace entire enterprise software markets. One need only look to the use of AI to automate workflows that have been dominated by SaaS for nearly two decades to see how disruptive this capability will be.

Laurent Quérel, Distinguished Engineer, explains:

Autonomous coding agents are poised to revolutionize software development by automating key tasks such as code generation, testing, and optimization. These agents will significantly streamline the development process, reducing manual effort and speeding up project timelines. Meanwhile, the emergence of Large Multimodal Agents (LMAs) will extend AI capabilities beyond text-based search to more complex interactions. These agents will interact with web pages and extract information from various formats, including text, images, and videos, enhancing the ways we access and process online content.

As AI agents reshape the internet, we will see the development of agent-specific browsing infrastructure, designed to facilitate secure and efficient interactions with websites. This shift could disrupt industries like e-commerce by automating complex web tasks, leading to more personalized and interactive online experiences. However, as these agents become more integrated into daily life, new security protocols and regulations will be essential to manage concerns related to AI authentication, data privacy, and potential misuse. By 2028, it is expected that a significant portion of enterprise software will incorporate AI agents, transforming work processes and enabling real-time decision-making through faster token generation in iterative workflows. This evolution will also lead to the creation of new tools and platforms for agent-driven web development, marking a significant milestone in the digital landscape.
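The iterative, tool-driven workflows described above can be sketched as a minimal observe-plan-act loop. Everything here is a hypothetical stand-in: `plan_step` takes the place of an LLM call, and `tools` is a toy tool table, but the loop structure is the part that generalizes.

```python
def run_agent(plan_step, tools, goal, max_iters=5):
    """Minimal agent loop: plan, act, observe, repeat until finished."""
    history = []
    for _ in range(max_iters):
        # The planner (an LLM in a real system) picks the next action
        # based on the goal and everything observed so far.
        action, arg = plan_step(goal, history)
        if action == "finish":
            return arg
        # Execute the chosen tool and feed the observation back in.
        observation = tools[action](arg)
        history.append((action, arg, observation))
    return None  # gave up within the iteration budget
```

The `max_iters` cap matters in practice: each loop iteration is another round of token generation, which is why faster inference directly improves agent responsiveness.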

But the truth is that to fully exploit the advantages of AI, one needs data—and a lot of it. That is no trivial challenge, given that nearly half (47%) of organizations admit to having no data strategy for AI in place. The amount of data held by an organization—structured, unstructured, and real-time metrics—is mind-boggling. Simply cataloging that data requires a significant investment.

2025 Technology #3: Data classification

Add in security concerns due to dramatically increasing attack surfaces, new regulations around data privacy and compliance, and the introduction of new data sources and threat vectors, and you have a perfect storm for the rise of robust, real-time data classification technologies. To wit, generative AI models are expected to surpass traditional rule-based systems in detecting and classifying enterprise data.

James Hendergart, Sr. Dir. Technology Research, explains:

Data classification surged in importance in 2024 due to several converging trends. The explosion of data, devices, and applications, along with ongoing digital transformation, dramatically increased the attack surface for cyber threats. This rise in vulnerabilities, coupled with persistent data breaches, underscored the critical need for robust data protection. At the same time, expanding regulations aimed at protecting data privacy and ensuring compliance further pushed organizations to prioritize data classification because classification is the starting point for privacy. Additionally, the rise of generative AI introduced new data sources and attack vectors, adding complexity to data security challenges.

Roughly 80% of enterprise data is unstructured. Looking ahead, generative AI models will become the preferred method for detecting and classifying unstructured enterprise data, offering accuracy rates above 95%. These models will become more efficient over time, requiring less computational power and enabling faster inference times. Solutions like Data Security Posture Management (DSPM), Data Loss Prevention (DLP), and Data Access Governance will increasingly rely on sensitive data detection and classification as a foundation for delivering a range of security services. As network and data delivery services converge, platform consolidation will drive vendors to enhance their offerings, aiming to capture market share by providing comprehensive, cost-effective, and easy-to-use platforms that meet evolving enterprise needs.
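For contrast with the generative approaches Hendergart predicts, the rule-based baseline they are expected to surpass can be sketched as a handful of regular expressions. These two patterns are illustrative only; production DSPM/DLP rulesets are far richer, and their brittleness on unstructured text is exactly the gap generative classifiers aim to close.

```python
import re

# Toy detection rules for two common sensitive-data types.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(text):
    """Return the set of sensitive-data labels found in a text blob."""
    return {label for label, rx in PATTERNS.items() if rx.search(text)}
```

A rule like the SSN pattern misses reformatted or paraphrased disclosures ("my social is one two three..."), which is where model-based classification earns its accuracy advantage.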

The shared desire across organizations to harness generative AI for everything from productivity to workflow automation to content creation is leading to the introduction of a new application architectural pattern as organizations begin to deploy AI capabilities. This pattern expands the traditional three tiers of focus—client, server, and data—to incorporate a new AI tier, where inferencing is deployed.

2025 Technology #4: AI gateways

This new tier is helping to drive the definition of AI gateways, the fourth of our technologies to watch. AI gateways are not just API gateways or web gateways. While their base capabilities resemble those of API gateways, the particular architectural demands of bi-directional, unstructured traffic and a growing user base of ‘good’ bots require new capabilities.

Ken Arora, Distinguished Engineer, explains:

AI gateways are emerging as the natural evolution of API gateways, specifically tailored to address the needs of AI applications. Similar to how Cloud Access Security Brokers (CASBs) specialize in securing enterprise SaaS apps, AI gateways will focus on unique challenges like hallucinations, bias, and jailbreaking, which often result in undesired data disclosures. As AI applications gain more autonomy, gateways will also need to provide robust visibility, governance, and supply chain security, ensuring the integrity of the training datasets and third-party models, which are now potential attack vectors. Additionally, as AI apps grow, issues like distributed denial-of-service (DDoS) attacks and cost management become critical, given the high operational expense of AI applications compared to traditional ones. Moreover, increased data sharing with AI apps for tasks like summarization and pattern analysis will require more sophisticated data leakage protection.

In the future, AI gateways will need to support both reverse and forward proxies, with forward proxies playing a critical role in the short term as AI consumption outpaces AI production. Middle proxies will also be essential in managing interactions between components within AI applications, such as between vector databases and large language models (LLMs). The changing nature of threats will also require a shift in how we approach security. With many clients becoming automated agents acting on behalf of humans, the current bot protection models will evolve to discriminate between legitimate and malicious bots. AI gateways will need to incorporate advanced policies like delegated authentication, behavior analysis, and least privilege enforcement, borrowing from zero trust principles. This will include risk-aware policies and enhanced visibility, ensuring that AI-driven security breaches are contained effectively while maintaining robust governance.
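Several of the policies Arora describes—cost management, jailbreak screening, least privilege—can be sketched as a single admission check applied before a prompt is forwarded to the model. The blocklist phrase and per-client token budget below are toy assumptions for illustration, not any real gateway's policy engine.

```python
# Toy heuristic: a real gateway would use classifiers, not substrings.
BLOCKLIST = ("ignore previous instructions",)

def admit(client, prompt, budgets, est_tokens):
    """Decide whether to forward a prompt; returns (allowed, reason)."""
    # Screen for obvious prompt-injection attempts first.
    if any(phrase in prompt.lower() for phrase in BLOCKLIST):
        return False, "policy: possible jailbreak"
    # Enforce a per-client token budget to contain inference cost.
    if budgets.get(client, 0) < est_tokens:
        return False, "policy: token budget exhausted"
    budgets[client] -= est_tokens
    return True, "ok"
```

Keeping the checks at the gateway, rather than in each application, is what makes the policies uniform across every model and client behind it.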

Most pressing is the ability not only to address traditional security concerns around data (exfiltration, leakage) but also ethical issues such as hallucinations and bias. No one is surprised to see the latter ranked as significant risks in nearly every survey on the subject.

2025 Technology #5: Small Language Models

Given the issues with hallucinations and bias, it would be unthinkable to ignore the growing use of retrieval-augmented generation (RAG) and Small Language Models (SLMs). RAG has rapidly become a foundational architecture pattern for generative AI, particularly due to its ability to enhance the specificity and accuracy of information produced by large language models. By combining the strengths of retrieval systems with generative models, RAG provides a solution to one of the key challenges in AI: hallucinations, or the generation of incorrect or misleading information.
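The retrieve-then-generate pattern can be sketched with a naive word-overlap retriever. Real RAG systems rank with vector embeddings rather than word overlap, but the shape of the pattern—fetch relevant context, then ground the prompt in it—is the same.

```python
def retrieve(query, corpus, k=1):
    """Rank documents by naive word overlap with the query (toy scorer)."""
    q = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def rag_prompt(query, corpus):
    """Ground the model's prompt in retrieved context to curb hallucination."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

The model never sees the whole corpus; it answers against the retrieved slice, which is what ties its output to current, organization-specific data.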

Organizations not already integrating RAG into their AI strategies are missing out on significant improvements in data accuracy and relevancy, especially for tasks requiring real-time information retrieval and contextual responses. But as the use cases for generative AI broaden, organizations are discovering that RAG alone cannot solve some problems.

Lori MacVittie, Distinguished Engineer, explains:

The growing limitations of LLMs, particularly their lack of precision when dealing with domain-specific or organization-specific knowledge, are accelerating the adoption of small language models. While LLMs are incredibly powerful in general knowledge applications, they often falter when tasked with delivering accurate, nuanced information in specialized fields. This gap is where SLMs shine, as they are tailored to specific knowledge areas, enabling them to deliver more reliable and focused outputs. Additionally, SLMs require significantly fewer resources in terms of power and computing cycles, making them a more cost-effective solution for businesses that do not need the vast capabilities of an LLM for every use case.

Currently, SLMs tend to be industry-specific, often trained on sectors such as healthcare or law. Although these models are limited to narrower domains, they are much more feasible to train and deploy than LLMs, both in terms of cost and complexity. As more organizations seek solutions that better align with their specialized data needs, SLMs are expected to replace LLMs in situations where retrieval-augmented generation methods alone cannot fully mitigate hallucinations. Over time, we anticipate that SLMs will increasingly dominate use cases where high accuracy and efficiency are paramount, offering organizations a more precise and resource-efficient alternative to LLMs.

Looking Ahead: Beyond transformers

The need for more efficient AI models that can handle the growing complexity of modern applications without requiring enormous computational resources is fast becoming a pressing one. Transformer models, while powerful, have limitations in scalability, memory usage, and performance, especially as the size of AI models increases. As a result, there is a strong push to develop architectures that maintain high accuracy while reducing computational overhead. Additionally, the demand for democratizing AI—making it accessible across various devices and use cases—further accelerates the adoption of innovations like 1-bit large language models, which are designed to optimize precision while minimizing hardware requirements.

These needs are driving the evolution of AI to go beyond transformers.

Kunal Anand, Chief Innovation Officer, explains:

A new paradigm is emerging: converging novel neural network architectures with revolutionary optimization techniques that promise to democratize AI deployment across various applications and devices.

The AI community is already witnessing early signs of post-transformer innovations in neural network design. These new architectures aim to address the fundamental limitations of current transformer models while maintaining or improving their remarkable capabilities in understanding and generating content. Among the most promising developments is the emergence of highly optimized models, particularly 1-bit large language models. These innovations represent a fundamental shift in how we approach model efficiency, offering dramatic reductions in memory requirements and computational overhead while maintaining model performance despite reduced precision.
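The core idea behind 1-bit quantization can be sketched in pure Python: each floating-point weight is reduced to its sign, and a single per-tensor scale preserves the overall magnitude. This is a drastic simplification of real 1-bit schemes, but it shows where the memory savings come from—one bit per weight plus one shared scale, instead of 16 or 32 bits per weight.

```python
def binarize(weights):
    """1-bit quantization sketch: keep signs plus one scale per tensor."""
    # One full-precision scale stands in for all the dropped magnitudes.
    scale = sum(abs(w) for w in weights) / len(weights)
    # Each weight collapses to +/-1, i.e., a single bit of information.
    signs = [1 if w >= 0 else -1 for w in weights]
    return signs, scale

def dequantize(signs, scale):
    """Reconstruct approximate weights for use at inference time."""
    return [s * scale for s in signs]
```

The reconstruction is lossy—every weight comes back with the same magnitude—which is why these schemes depend on training or fine-tuning the model with the quantization in the loop rather than quantizing after the fact.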

The impact of these developments will cascade through the AI ecosystem in multiple waves. The primary effects will be immediately apparent in the reduced resource requirements for AI deployment. Models that once demanded substantial computational resources and memory will operate efficiently with significantly lower overhead. This optimization will trigger a shift in computing architecture, with GPUs potentially becoming specialized for training and fine-tuning tasks while CPUs handle inference workloads with newfound capability.

These changes will catalyze a second wave of effects centered on democratization and sustainability. As resource requirements decrease, AI deployment will become accessible to various applications and devices. Infrastructure costs will drop substantially, enabling edge computing capabilities that were previously impractical. Simultaneously, the reduced computational intensity will yield environmental benefits through lower energy consumption and a smaller carbon footprint, making AI operations more sustainable.

These developments will enable unprecedented capabilities in edge devices, improvements in real-time processing, and cost-effective AI integration across industries. The computing landscape will evolve toward hybrid solutions that combine different processing architectures optimized for specific workloads, creating a more efficient and versatile AI infrastructure.

The implications of these developments extend far beyond technical improvements. They suggest a future where AI deployment becomes more versatile and environmentally conscious while maintaining performance. As we move toward 2025, these changes will likely accelerate the integration of AI into everyday applications, creating new opportunities for innovation and efficiency across industries.

The Only Constant Now Really is Change

The past year has certainly been one of significant change, evolution, and surprises in technology. It is reasonable to believe that next year will bring more of the same. The full potential of generative AI has not yet been explored, after all, and that means it is likely that additional, disruptive use cases for this exciting technology will emerge.

If organizations are not already experimenting with generative AI, they should be. The use of services is certainly providing a good starting point for basic use cases like chatbots and recommendation engines, but the potential for generative AI goes far beyond conversations and generating new cat videos.

Expect more change, and expect new ways to harness AI to keep emerging as it continues to reshape the very foundations of technology.