Applications are the engines of digital productivity, the workhorses that turn raw processing power into helpful tools that make everyday life easier and that have transformed work, travel, entertainment, and industry. These software programs range from apps that order food delivery to systems that guide satellites in space, power assembly lines, and provide the word processing used to write this blog.
Applications have evolved greatly over time, developing from basic programs designed for specific computational tasks to dynamic modern apps powered by AI and driven by inputs across distributed environments. By looking at how applications have advanced in recent years, we gain insights into the ways computing has adapted to changing user needs, business requirements, and technological innovations to shape our digital world. But as apps have grown more complex, connected, personalized, accessible, and intelligent, the application attack surface has increased, placing apps at greater risk of malicious threats and exploitation. Put simply, the expanded attack surface presents a larger area of opportunity for cybercriminals, who have kept pace with their own innovations for breaching the latest and greatest cyber defenses.
As applications have evolved from monolithic apps of physical data centers to today’s highly distributed AI apps, a new generation of ADCs is needed that converges high-performance app security and app delivery capabilities into a single platform.
This increasingly complex and expanding threat landscape is further complicated by the lack of a platform that can provide consistent app and API delivery and security. Application delivery controllers (ADCs) have played a central role in the stack to enable app delivery functions, but ADCs now need to evolve to meet the changing needs of today's environments. Together, these factors are driving the demand for new, more comprehensive strategies that combine app security and application delivery services to provide more complete protection and simpler, more unified management across diverse IT ecosystems.
Let’s explore how we arrived at this point, starting with the earliest apps from the 1940s.
The earliest computers back in the 1940s and 1950s didn’t really employ applications as we understand them today. They were largely programmed using machine language and punch cards. The first applications—what we might think of as Applications 1.0—grew up beside the centralized mainframe computers of the 1960s and 1970s, which operated monolithic applications designed to handle tasks such as payroll, inventory, and basic data processing. These early applications were built on a single codebase: very large, unitary blocks of code that operated within isolated data centers or corporate offices. Think of a remote castle on a hill protected by high walls and a moat, where workers went to gain access to centralized computing resources using computer languages like FORTRAN and COBOL.
An advance over the monolithic application model arrived with three-tiered applications, which divided computing resources into three distinct functional layers, each with separate but interconnected applications. The presentation layer, or the user interface, was governed by desktop applications such as worker productivity software (hello, WordStar), while the business logic layer included an application server that processed user requests and performed computations. The data layer, or database, stored and managed data.
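The separation of concerns described above can be sketched in a few lines of code. This is a minimal, illustrative sketch (the class and method names are our own, not from any particular system): each tier talks only to the tier directly beneath it.

```python
# Minimal sketch of the three-tier pattern: presentation -> business logic -> data.
# All names here are illustrative assumptions, not a real product's API.

class DataLayer:
    """Data tier: stores and manages records."""
    def __init__(self):
        self._records = {}

    def save(self, key, value):
        self._records[key] = value

    def load(self, key):
        return self._records.get(key)


class BusinessLogicLayer:
    """Application tier: processes requests and performs computations."""
    def __init__(self, data):
        self._data = data

    def record_payroll(self, employee, hours, rate):
        # Computation happens here, never in the presentation tier.
        self._data.save(employee, hours * rate)

    def get_pay(self, employee):
        return self._data.load(employee)


class PresentationLayer:
    """Presentation tier: formats results for the user interface."""
    def __init__(self, logic):
        self._logic = logic

    def show_pay(self, employee):
        return f"{employee}: ${self._logic.get_pay(employee):.2f}"


app = BusinessLogicLayer(DataLayer())
ui = PresentationLayer(app)
app.record_payroll("ada", hours=40, rate=25.0)
print(ui.show_pay("ada"))  # ada: $1000.00
```

The payoff of this layering is that any one tier can be swapped out (a new UI, a different database) without rewriting the others—the modularity and flexibility discussed next.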
While three-tiered applications provided more modularity and flexibility than monolithic apps, this model remained reliant on physical data center infrastructure and on-premises resources. Monolithic and three-tiered applications remain the norm in some industries, including healthcare, banking, and government, where legacy systems meet strict regulatory requirements and mission-critical applications cannot afford downtime.
The rise of e-commerce in the 1990s ushered in the dot-com era. Application service providers (ASPs) offered cloud-like web hosting services over the World Wide Web where businesses could launch HTML web applications without having to stand up their own infrastructure. Web apps were available globally, anywhere a user had either dial-up or broadband access. These applications split functionality across tiers: consumers used web browser software on their home computers to visit web applications hosted on ASP servers, while database servers at the ASP handled purchasing and the storage of customer data. Web apps are still dominant in online retail and have been instrumental in driving e-commerce business operations on the web.
The early 2010s marked the era of cloud disruption, with many prognosticators predicting the death of the on-prem data center and the ascendance of cloud-based computing. Amid the talk of transferring all computing to the cloud, many apps were modernized to live in cloud environments or developed in the cloud itself: cloud-native applications.
These cloud-native apps were optimized for cloud scale and performance, built from the ground up on microservices architectures, which introduced another paradigm shift in app evolution. Microservices-based applications were no longer stand-alone, unified blocks of code but were assembled from small, independently deployable microservices that each performed a single function and communicated with other microservices via APIs. Modular microservices were orchestrated into applications within portable containers, which are virtualized computing instances that are self-sufficient and able to operate consistently across different environments. Microservices-based applications offered many advantages over monolithic or three-tiered apps, including faster app development, improved scalability, increased flexibility, and services reusability.
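To make the microservices idea concrete, here is a minimal sketch of two services communicating over an API. The service names, endpoints, and prices are illustrative assumptions: one small HTTP service does a single job (looking up prices), and a second consumer composes it into a larger function.

```python
# Illustrative sketch: one single-purpose microservice plus a consumer
# that calls it over HTTP. Names, routes, and prices are made up.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PricingService(BaseHTTPRequestHandler):
    """Independently deployable service with one responsibility: pricing."""
    PRICES = {"hotel": 120.0, "flight": 310.0}

    def do_GET(self):
        item = self.path.strip("/")
        body = json.dumps({"item": item, "price": self.PRICES.get(item)})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, *args):
        pass  # keep the demo quiet

# Run the pricing microservice in a background thread on an ephemeral port.
server = HTTPServer(("127.0.0.1", 0), PricingService)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# A second service (a booking function here) consumes it via its API.
def quote_trip(items):
    total = 0.0
    for item in items:
        with urllib.request.urlopen(f"http://127.0.0.1:{port}/{item}") as resp:
            total += json.loads(resp.read())["price"]
    return total

print(quote_trip(["hotel", "flight"]))  # 430.0
server.shutdown()
```

Note that the consumer knows nothing about how pricing is implemented; it depends only on the API contract. That decoupling is what enables independent deployment—and, as the next paragraph notes, every such exposed API is also a new surface to defend.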
However, these component microservices were pulled from public cloud libraries and other remote platforms and relayed via APIs or other communication mechanisms. These dispersed elements were difficult to manage, and exposing APIs increased the app’s attack surface, opening the door to network intrusions during data transmission and to potential vulnerabilities in third-party libraries.
Today, applications are undergoing another paradigm shift. The predicted death of the data center was greatly exaggerated, and many of today’s organizational portfolios maintain a mix of traditional and modern apps that run in hybrid environments and pull from on-prem, edge, and cloud resources. This heightens the need for a consistent platform that consolidates delivery and security of apps and APIs wherever they reside.
In addition, apps are ever more complex, relying not just on cloud-based microservices but on real-time data flows and new forms of input. Think of a typical travel website application. In addition to tempting photos and content about your chosen destination are real-time, API-accessed engines for booking hotel rooms, flights, and car rentals; links to live weather forecasts; and interactive mapping functionality. These modern applications are highly interconnected, multi-sourced, and dynamic, relying on data from hybrid clouds and distributed environments to provide a robust customer experience.
The complexity—and attack surface—only increases when AI is added to these already highly integrated applications. AI-powered services, which leverage data from interconnected networks and devices, may include recommendation engines or chatbots; a travel website may suggest personalized destinations and activities based on real-time user profiles and predictive analytics, or offer to monitor dynamic pricing algorithms.
AI applications handle massive amounts of data and employ complex traffic patterns, making large and frequent requests to enterprise data stores and AI models housed in AI factories. AI workloads are usually deployed on distributed infrastructure, including public clouds, edge devices, and hybrid systems, exposing applications to data security and privacy risks, adversarial attacks on AI models, and infrastructure vulnerabilities. These challenges are driving the need to deliver comprehensive security for API-rich applications unified with high-performance traffic management and app delivery services that support modern AI apps and workloads no matter where they are deployed.
The history of applications is in many ways the history of computing itself, evolving from solving basic math computations in the 1940s to today’s complex and dynamic apps powered by AI and driven by inputs from across a galaxy of remote systems. Application history is also reflected in the increasing accessibility and ubiquity of computing hardware. Applications have shifted from use on room-size mainframe computers that only experts could navigate, to desktop computers and laptops with graphical interfaces for personal use and entertainment, and on to mobile devices, driven by web and mobile apps that are accessible anywhere and powered both by touch screens and voice commands.
However, these increasingly intelligent and dynamic applications rely on networks and connections that are susceptible to a wide range of security risks, requiring ever-evolving security mitigations and ADCs optimized for distributed environments. What’s needed is a convergence of critical app delivery and security functions to better manage this new landscape of traditional apps, modern apps, and AI-powered apps, all operating across a distributed environment. We believe ADCs are the appropriate technology to manage this. But to fulfill this role, ADCs will need to evolve to help reduce the operational complexity and risk associated with today’s disjointed digital ecosystems.
To learn more, read our previous blog post in the series, “Reinventing the ADC to Meet the Demands of an Evolving Application Infrastructure.” Also, stay tuned for our next blog to learn how application security has evolved in line with changes in application architectures and why this demands a new era of ADCs.