White Paper

How the F5 Application Services Fabric Can Benefit Your Business

Updated December 15, 2014

Applications are at the heart of everything we do, keeping us connected, informed, and entertained. They are infused into nearly all aspects of our daily lives—monitoring our sleep, streaming our favorite music, connecting us with our friends, or delivering breaking news. This application-driven lifestyle has paved the way not only for greater innovation but also for greater demands. People expect fast, reliable, anytime access to services and information from the device of their choice.

In today’s marketplace, businesses that can deliver the always-on, always-fast, always-available-on-any-device experience people expect stand to reap great rewards, from greater customer satisfaction to decreased time to market. However, achieving these performance levels introduces new considerations and complexities, and applications increasingly require additional assistance from the network.

Functions such as network and application security, encryption, acceleration, and load balancing exist to make our applications better: faster, more efficient, more reliable, and more secure. Collectively, these features are known as application services. In most cases, these services are supplied by Application Delivery Controllers (ADCs), typically specialized physical or virtual appliances acting as full application proxies. With the demand for more applications and the subsequent need for application services, it’s not surprising that the market for application delivery controllers is projected to be worth more than $2.1 billion by 2017.1

A foundation of success

F5 has dominated the ADC market,2 with tens of thousands of customers successfully deploying the F5® BIG-IP® platform to make their applications faster, more secure, and highly available. BIG-IP products have frequently been deployed in active-passive highly available (HA) pairs. This model has remained broadly unchanged for more than ten years and has served many IT infrastructure components well: database servers, firewalls, routers, and so on. However, the architecture has its limitations.

HA pairs offer static islands of capacity with no workload migration between the separate pairs. Throughput is limited to the capacity of a single (admittedly now very large) device, and 50 percent of purchased capacity must be held in reserve to deal with device failure. This, coupled with growing cloud technology adoption, requires a new model for application delivery: one that integrates with the evolving architectures the cloud is driving.
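To make the capacity cost concrete, here is a rough sketch of the usable fraction of purchased capacity under each model (the node counts and single-failure assumption are illustrative):

```python
def usable_fraction(nodes: int, reserved_for_failure: int = 1) -> float:
    """Fraction of purchased capacity still usable while tolerating
    the failure of `reserved_for_failure` nodes."""
    return (nodes - reserved_for_failure) / nodes

# Active-passive HA pair: one of the two devices sits idle.
print(usable_fraction(2))   # 0.5  -> 50% of capacity held in reserve

# All-active fabric of four nodes tolerating a single failure.
print(usable_fraction(4))   # 0.75 -> only 25% held in reserve
```

The larger the all-active fabric, the smaller the fraction of capacity that must sit idle waiting for a failure.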

Moving towards the cloud

The way that organizations deploy and scale their applications is evolving due to the cloud. More and more organizations are moving toward a cloud or cloud-like infrastructure, where server and network virtualization are combined with automation and orchestration. This enables a business to deliver new applications both faster and more cost-effectively, driving greater utilization of the infrastructure and reducing the time and overhead in provisioning new services. These infrastructures might be true public clouds, wholly private clouds, or a private region of a public infrastructure (a virtual private cloud). Many organizations will use multiple cloud infrastructures to deliver their applications, choosing the most appropriate location for app hosting.

The business benefits of greater efficiency, increased scalability, and faster time to value have been widely discussed and are borne out by the near-ubiquitous use of cloud services, in one form or another, by today’s enterprises.3 As organizations and service providers realize the benefits of a virtualized, automation-ready compute infrastructure, the need to virtualize and automate the underlying networking has become clear, leading to the development of software-defined networking and network overlays such as VXLAN and NVGRE. The streamlining and automation of server and network deployment have also contributed to the adoption of DevOps methodologies, as the rapid creation, migration, or destruction of production-analogous environments allows software to be developed, tested, and deployed faster and more efficiently than ever before.

A new application delivery environment

Although the platform on which applications run may be changing, the applications themselves still need the additional security, availability, and performance that application services offer. The delivery mechanism and architecture of application services must change to meet the new application infrastructure designs. Application services will require the same kind of elastic, virtualized infrastructure as the applications themselves.

These services must be ready to integrate into the same self-service orchestration frameworks used for the rest of the infrastructure. If the true benefits of automation, efficiency, and improved software deployment are to be achieved, all the components in the application delivery stack must align with the cloud model. Automating the deployment of servers, storage, and networking while leaving load balancing, application firewalling, or identity services to manual configuration will reduce the return on investment of any cloud infrastructure. Application services need to be just as software-defined as any other component in the stack.

The challenge here is clear: applications still require services from the network, but the way that these services are created and delivered must adapt to match the new application delivery environment.

Realizing the software-defined data center

This need has driven the F5 High-Performance Services Fabric (HPSF) vision. HPSF brings the ability to deploy software-defined application services into an all-active, multi-tenant services fabric. This fabric creates a powerful connector between the orchestration of cloud application deployments and software-defined networking (SDN). As a result, application services such as web application firewalls or application acceleration can be deployed programmatically into an architectural layer rather than manually configured on a single pair of devices. With the development of the F5 HPSF, organizations can move another step toward the software-defined data center and achieve its promised cost and operational benefits.

The requirements of a high-performance services fabric can be broken down into four key characteristics:

  • Multi-tenant
  • Scalable and highly available
  • Automation ready
  • Ubiquitous

Multi-tenant

To generate the efficiency and utilization benefits that a consolidated pool of devices can bring, the F5 High-Performance Services Fabric is designed to be shared by multiple business units (or customers) through multi-tenant capabilities. Multi-tenancy allows multiple groups or customers to scale onto a common infrastructure by creating perimeters around application workloads while also segregating control-plane access. These safeguards are key to driving adoption of a shared platform and realizing its return on investment.

Depending on the level of isolation, multi-tenant solutions can create strict failure boundaries, where software or configuration faults in one tenant cannot affect the other tenants of the infrastructure. Multi-tenant solutions also generally offer some form of resource allocation and control, so that no single tenant can consume more than its share of resources. Security and traffic isolation are vital if tenants connected to different security environments are to be collocated in the same fabric; robust traffic separation is a nonnegotiable requirement.
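As a purely illustrative sketch (in Python; the field names and limits are hypothetical, not an F5 configuration schema), a per-tenant definition in such a fabric needs to capture both resource ceilings and an isolation boundary:

```python
from dataclasses import dataclass

@dataclass
class TenantProfile:
    """Hypothetical per-tenant definition for a shared services fabric."""
    name: str
    cpu_share_pct: int         # ceiling on compute the tenant may consume
    throughput_mbps: int       # ceiling on traffic the tenant may drive
    traffic_isolation_id: int  # e.g., a route-domain or VLAN-like boundary
    admin_role: str            # control-plane access scoped to this tenant

tenants = [
    TenantProfile("finance-apps", cpu_share_pct=30, throughput_mbps=2000,
                  traffic_isolation_id=10, admin_role="finance-operators"),
    TenantProfile("public-web", cpu_share_pct=20, throughput_mbps=5000,
                  traffic_isolation_id=20, admin_role="web-operators"),
]

# Enforcing these ceilings keeps one tenant's faults or traffic bursts from
# degrading the others, which is the point of the failure boundaries above.
assert sum(t.cpu_share_pct for t in tenants) <= 100
```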

Scalable and highly available

Consolidating multiple tenants or applications requires greater levels of scalability and availability than a static active-passive model can offer: the impact of running out of resources or of an overall platform failure is dramatically increased. A high-performance services fabric must be able to grow in capacity to maintain performance and must offer a robust, fault-tolerant platform for services. Scaling models that involve major disruption or a service outage cannot be tolerated; the acceptable maintenance window for this type of platform is effectively zero.

Organizations need to be able to select the scaling mechanism that best suits their business or application. Traditionally, organizations facing a performance bottleneck had to perform a “rip and replace” upgrade to higher-capacity appliances. Now they can pick the scaling model that best suits their needs, whether scaling up through hardware or software licenses, or scaling out by adding nodes and migrating workloads onto the new capacity. These options allow smoother capital expenditure, far more flexibility, and reduced risk. Essentially, less initial capacity needs to be purchased, because the scaling process is far less disruptive.
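The scale-out trigger described above can be sketched in a few lines; the utilization figures, the 20 percent headroom threshold, and the provisioning step are illustrative assumptions:

```python
def needs_scale_out(node_utilization: list[float], headroom: float = 0.2) -> bool:
    """Return True when average utilization leaves less than the desired
    headroom, i.e. it is time to add a node and rebalance workloads."""
    avg = sum(node_utilization) / len(node_utilization)
    return avg > (1.0 - headroom)

fabric = [0.82, 0.78, 0.85]   # current per-node utilization (illustrative)
if needs_scale_out(fabric):
    # The actual provisioning step would be whatever mechanism the
    # orchestration layer uses to license and join new capacity.
    print("Add a node and migrate workloads onto the new capacity")
```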

Automation ready

A scalable, available, and multi-tenant platform provides organizations with a powerful tool for delivering services that enhance the security, performance, and availability of their applications. However, truly benefiting from this architectural layer requires that the services be easy to consume. Application service creation must be integrated with the same tools and systems that drive automation through the rest of the data center. With the rise of software-defined networking toolsets, virtualized servers, and DevOps-driven software deployment, there is no place for an infrastructure layer that is not part of the automation ecosystem. The bottleneck it would create in data center workflow would offset any value created by the services.

Integration with orchestration tools streamlines the creation and destruction of complete application environments, with server, networking, and storage components created and configured as part of a single workflow. Adding application services such as security, access, or acceleration enables the creation of a “full stack” within the application environment. The result is the ability to deploy, test, and manage applications dramatically faster, because all the required components can be deployed rapidly and in a repeatable configuration.
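To show how application services can slot into such a workflow, the sketch below adds a load-balancing pool and virtual server through REST calls as one step in a larger pipeline. The endpoint paths, payloads, addresses, and credentials follow the general shape of an iControl REST-style interface but are illustrative assumptions rather than a documented API:

```python
import requests

BIG_IP = "https://fabric.example.com"  # illustrative management address
AUTH = ("admin", "secret")             # use proper credential handling in practice

def deploy_app_services(app: str, members: list[str]) -> None:
    """One step in an orchestration workflow: create the load-balancing
    objects for an application alongside its servers, network, and storage."""
    session = requests.Session()
    session.auth = AUTH
    session.verify = False  # self-signed certificates are common on management interfaces

    # Pool of application servers (payload shape is illustrative).
    session.post(f"{BIG_IP}/mgmt/tm/ltm/pool", json={
        "name": f"{app}-pool",
        "members": [{"name": m} for m in members],
    }).raise_for_status()

    # Virtual server fronting the pool (VIP address is illustrative).
    session.post(f"{BIG_IP}/mgmt/tm/ltm/virtual", json={
        "name": f"{app}-vs",
        "destination": "10.0.0.100:443",
        "pool": f"{app}-pool",
    }).raise_for_status()

deploy_app_services("storefront", ["10.0.1.10:8080", "10.0.1.11:8080"])
```

Calling this function from the same pipeline that provisions the servers and network keeps the whole stack repeatable and removes the manual configuration step.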

Ubiquitous

The final requirement of the high-performance services fabric is ubiquity: services should be available wherever applications require them. However valuable application services are, their absence from a particular virtualization platform, public cloud, or network overlay must not become a barrier to innovation or agility within an organization. The high-performance services fabric must be available across multiple hypervisors and public IaaS offerings. Where the performance of specialized hardware is required (such as for SSL offload or network processing), the hardware components should support the network overlay technology used within the hybrid data center (such as NVGRE or VXLAN).

High-performance services fabric: A keystone architectural layer in a software-defined data center

Whether looking to reduce operational or capital expenditures, optimize for efficiency, or enable faster software and infrastructure deployments, organizations can undoubtedly gain business advantages by adopting the F5 High-Performance Services Fabric model. By moving from a legacy design of isolated HA pairs to an all-active multi-tenant fabric, application services can be delivered more efficiently and flexibly.

As a result, organizations can purchase less application delivery capacity, grow it more linearly, and deliver the right services to their applications wherever they are hosted. The operational effort required to create new services is dramatically reduced through integration into orchestration ecosystems that enable the adoption of DevOps methodologies, improved time to value, and reduced costs.

1 IDC, Worldwide Application Delivery Controller 2013–2017 Forecast.

2 Gartner, Inc., Market Share: Enterprise Network Equipment by Market Segment, Worldwide, 2Q14, Christian Canales, Erica Gadjuli, Joe Skorupa, and Naresh Singh, October 2014.

3 RightScale, Cloud Computing Trends: 2014 State of the Cloud Survey, April 2014 (www.rightscale.com/blog/cloud-industry-insights/cloud-computing-trends-2014-state-cloud-survey).