F5 and NVIDIA collaborate to create accelerated infrastructure solutions, enabling organizations to effectively and securely deliver AI applications at cloud-scale.

F5 BIG-IP Next for Kubernetes deployed on NVIDIA BlueField-3 DPUs

BIG-IP Next for Kubernetes delivers high-performance traffic management and security for large-scale AI infrastructure, unlocking greater efficiency, control, and performance for AI applications.

BIG-IP Next for Kubernetes runs natively on NVIDIA BlueField-3 DPUs. This provides enterprises and service providers with a single control point to maximize AI infrastructure usage and accelerate AI traffic for data ingestion, model training, inference, and retrieval-augmented generation (RAG).

Maximize Efficiency and Lower Costs

Maximize AI infrastructure investment and achieve lower TCO with high-performance traffic management and load balancing for large-scale AI deployments.

Multi-Tenancy Support for AI Cloud Providers

Enable secure multi-tenancy and network isolation for AI applications, allowing multiple tenants and workloads to efficiently share a single AI infrastructure—even down to the server level.

DPU-Driven Zero Trust Security

Integrate critical security features and zero trust architecture, including edge firewall, DDoS mitigation, API protection, intrusion prevention, encryption, and certificate management, while offloading, accelerating, and isolating these functions on the DPU.

Deploying GPUs at Scale

Maximize Your Investment with F5 and NVIDIA

Performance, efficiency, and security are central to the success of organizations deploying large-scale GPU clusters in their AI factories. BIG-IP Next for Kubernetes leverages NVIDIA BlueField-3 DPU platforms, releasing valuable CPU cycles for revenue-generating applications. BIG-IP Next for Kubernetes deployed on NVIDIA BlueField-3 DPUs (B3220 and B3240 versions) optimizes data movement and improves GPU utilization while reducing energy consumption.
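To make the "releasing valuable CPU cycles" claim concrete, here is a back-of-envelope sketch. All inputs (cluster size, cores per server, share of CPU time spent on networking and security) are illustrative assumptions, not F5 or NVIDIA figures:

```python
# Illustrative estimate: host CPU cores a cluster could reclaim if networking
# and security processing is offloaded from CPUs to DPUs.
# Every input value below is an assumption for the example.

def cores_reclaimed(servers: int, cores_per_server: int,
                    net_cpu_fraction: float) -> float:
    """Cores freed cluster-wide when a `net_cpu_fraction` share of each
    server's CPU, previously spent on packet/security processing, moves
    to the DPU."""
    return servers * cores_per_server * net_cpu_fraction

# Hypothetical 100-node GPU cluster, 64 cores per node, with 20% of CPU
# time spent on networking and security before offload.
freed = cores_reclaimed(servers=100, cores_per_server=64, net_cpu_fraction=0.20)
print(f"~{freed:.0f} CPU cores freed for revenue-generating workloads")
# → ~1280 CPU cores freed for revenue-generating workloads
```

Under these assumptions the cluster recovers the equivalent of 20 entire servers' worth of general-purpose compute; the real fraction depends on workload and traffic profile.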

Kunal Anand talks F5 and NVIDIA

F5 Chief Technology and AI Officer, Kunal Anand, answers questions and discusses F5 and NVIDIA's collaboration and the announcement of BIG-IP Next for Kubernetes deployed on NVIDIA BlueField-3 DPUs.

A Single Point of Control

Maximize infrastructure potential with BIG-IP Next deployed on NVIDIA BlueField-3 DPUs

AI applications demand accelerated networking capabilities. BIG-IP Next for Kubernetes optimizes traffic flows to AI clusters, resulting in more efficient use of GPU resources by interfacing directly with front-end networks. For multi-billion-parameter AI models, BIG-IP Next for Kubernetes deployed on BlueField-3 DPUs reduces latency and provides high-performance load balancing for data ingest and incoming queries.
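As a generic illustration of the load-balancing principle described above (not BIG-IP Next internals), a minimal least-connections selector for spreading incoming inference queries across GPU-backed endpoints might look like this; the endpoint names are hypothetical:

```python
# Minimal least-connections load balancer sketch. The policy and endpoint
# names are illustrative assumptions, not the product's implementation.

class LeastConnectionsBalancer:
    def __init__(self, endpoints):
        # Track active connection counts per backend endpoint.
        self.active = {ep: 0 for ep in endpoints}

    def pick(self) -> str:
        """Route the next query to the backend with the fewest active connections."""
        ep = min(self.active, key=self.active.get)
        self.active[ep] += 1
        return ep

    def release(self, ep: str) -> None:
        """Mark a query on the given backend as finished."""
        self.active[ep] -= 1

lb = LeastConnectionsBalancer(["gpu-node-a", "gpu-node-b"])
first = lb.pick()    # both idle: the tie resolves to the first endpoint
second = lb.pick()   # the other node now has fewer active connections
print(first, second)
# → gpu-node-a gpu-node-b
```

A production data plane makes this decision per-packet or per-request in hardware-accelerated paths on the DPU rather than in application code, which is what keeps the GPUs fed without burning host CPU.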

Multi-tenancy architecture future-proofs AI factories for ever-increasing AI workloads

BIG-IP Next for Kubernetes deployed on NVIDIA BlueField-3 DPUs enables organizations to securely support more users on shared computing clusters while also scaling AI training and inference workloads. Connect AI models with data in disparate locations while significantly enhancing visibility into app performance, by utilizing advanced Kubernetes capabilities for AI workload automation and centralized policy controls.

Secure and streamline your AI deployments

The rapid growth of APIs for AI models introduces significant security challenges. BIG-IP Next for Kubernetes deployed on NVIDIA BlueField-3 DPUs automates the discovery and protection of endpoints, securing AI apps against evolving threats. By leveraging zero trust architecture and shifting network and security processing from CPUs to DPUs, BIG-IP Next for Kubernetes delivers fine-grained protection and ensures robust data encryption. This approach not only enhances cyber defenses but also optimizes AI data management, resulting in more secure, scalable, and efficient infrastructure for service providers and enterprises.

Get an integrated view of networking, traffic management, and security

BIG-IP Next for Kubernetes deployed on NVIDIA BlueField-3 DPUs meets the growing demands of AI workloads and is purpose-built for Kubernetes environments. By enhancing the efficiency of north-south traffic flows, it gives organizations an integrated view of networking, traffic management, and security for AI use cases like inferencing and RAG.

Resources

Schedule a Meeting with F5

Deploying cloud-native apps at scale? Find out how F5 and NVIDIA can enable you to achieve greater efficiency, performance, and security for AI and other modern apps.

Upon submission, an F5 business development representative will be in contact to schedule.