BLOG

Safeguarding AI systems: Why security must catch up to innovation

F5 Newsroom Staff Thumbnail
F5 Newsroom Staff
Published August 04, 2025

Artificial intelligence is potentially life-changing—and already has been in profound ways. From accelerating breakthroughs in medicine and education to reshaping work and everyday life, AI is transforming how we live and operate. But alongside these advances, AI presents powerful opportunities for cybercriminals.

Today, AI systems are actively targeted by adversaries who exploit vulnerabilities through data poisoning, manipulated outputs, unauthorized model theft via distillation, and exposed private data. These aren’t speculative risks; they’re real, rapidly evolving, and potentially devastating financially. Models are also being used to propagate massive improvements in email attacks and SMS / voice fraud, and deepfakes are increasingly difficult to detect, with several generating multi-million dollars in losses.

According to the 2025 Stanford AI Index Report, the number of AI-related security incidents surged by 56.4% in 2024, reaching 233 reported cases. These weren’t mere glitches or technical hiccups. They involved serious compromises, from privacy violations and misinformation amplification to algorithm manipulation and breakdowns that put sensitive decisions at risk.

But as always, on of our favorite stats is dwell time, or the time between breach and detection. IBM’s Q1 2025 report revealed that AI-specific compromises take an average of 290 days to detect and contain—far longer than the 207-day average for traditional data breaches. That’s nearly 10 months of exposure, leaving these AI-augmented attackers ample time to cause serious harm.

Why most enterprises aren’t ready

To continue reaping the benefits of AI, organizations must treat its security with the same urgency they bring to networks, databases, and applications. But the current imbalance between adoption and protection suggests a different story.

F5’s 2025 State of AI Application Report underscores this point. Only 2% of organizations surveyed were considered highly secure and ready to scale AI safely. Meanwhile, 77% faced serious challenges related to AI security and governance.

The report also revealed that only a fraction of moderately prepared companies have deployed foundational safeguards. Just 18% had implemented AI firewalls, and only 24% practiced continuous data labeling, a key method for detecting adversarial behavior. Compounding the issue is the growing use of Shadow AI: unauthorized or unsanctioned AI tools that create dangerous visibility gaps in enterprise environments.

In the race to deploy AI for competitive gain, many organizations are inadvertently expanding their attack surface.

What makes AI vulnerable

AI’s unique characteristics expose it to novel forms of attack. Some of the most pressing vulnerabilities include:

  • Data poisoning: Attackers subtly inject corrupt or misleading data into training sets, compromising the behavior of AI models. In 2024, University of Texas researchers demonstrated how malicious content embedded in referenced documents could influence model outputs—persisting even after the documents were removed.
  • Model inversion and extraction: These attacks allow adversaries to reconstruct sensitive training data or replicate proprietary models. Real-world cases include the recovery of patient images from diagnostic systems and the reconstruction of private voice recordings and internal text from language models.
  • Evasion attacks: By making minute, often imperceptible changes to input data, attackers can trick AI models into producing incorrect outputs. One example: researchers fooled an autonomous vehicle’s vision system into misclassifying a stop sign as a speed limit sign by adding innocuous-looking stickers.
  • Prompt injection: Large language models (LLMs) are susceptible to carefully crafted input that manipulates their behavior. In one case, a ChatGPT-powered chatbot used by Chevrolet dealerships was tricked into agreeing to sell a car for $1—an outcome that exposed both reputational and legal risks.

These threats are not theoretical. They are active, and they are already undermining AI’s safety and reliability across industries.

Building a strong AI defense strategy

To meet these challenges, organizations must adopt a well-rounded defense strategy that addresses both general cybersecurity and AI-specific risks. The following five steps can help enterprises secure their AI systems:

  1. Strengthen data governance
    Build a clear inventory of AI assets—including models, APIs, and training datasets—and enforce tight access control policies. Data is foundational to AI, and its integrity must be protected at every level.
  2. Test continuously
    Move beyond traditional code reviews and implement adversarial testing and red teaming. These methods help uncover weaknesses such as model inversion and prompt injection before attackers exploit them.
  3. Embrace privacy-first design
    Incorporate encryption, data minimization, and differential privacy techniques. These approaches limit the risk of sensitive data exposure, even if a breach occurs.
  4. Adopt zero trust architecture
    Apply a “never trust, always verify” philosophy across all AI systems. Grant the minimum access necessary to every component and user and rigorously verify all activity.
  5. Monitor AI behavior in real time
    Implement tools and systems that watch for anomalies in model behavior or input patterns. Monitor for things like excessive API calls, suspicious prompts, or abnormal outputs—all of which could signal active threats.

F5’s approach to AI security

As more organizations embrace hybrid and multicloud environments, F5 through our  Application Delivery and Security Platform (ADSP) is delivering AI-native security capabilities designed to protect modern infrastructure.

Part of this platform, F5 AI Gateway provides defense against prompt injection and data leakage by intelligently inspecting and routing LLM requests. Advanced API security solutions—available via F5 Distributed Cloud API Security and NGINX App Protect—safeguard APIs from misuse, exfiltration, and abuse.

Also a part of F5 ADSP, F5 Distributed Cloud Bot Defense uses machine learning to detect and block automated threats like credential stuffing with minimal false positives. And F5 BIG-IP Advanced WAF solutions secure applications and APIs while offloading security tasks from GPUs, improving performance in AI-intensive workloads.

In addition, F5’s AI Reference Architecture offers a blueprint for secure, reliable AI infrastructure across hybrid and multicloud environments. F5 also collaborates with leading AI innovators, including Intel, Red Hat, MinIO, and Google Cloud Platform, among many others, to help customers scale securely and efficiently.

Final thoughts: Secure AI, secure future

AI is transforming every industry it touches—but its potential comes with unprecedented risks. As threats grow more sophisticated, security leaders must move with urgency and foresight, embracing proactive tools, smarter architecture, and policy-driven protection.

AI security must be integrated into the very fabric of enterprise strategy. With the right combination of regulation, technology, and culture—anchored by proven frameworks like the U.S. National Institute of Standard and Technology’s AI Risk Management Framework and supported by platforms such as F5 ADSP—organizations can harness the full promise of AI while defending against its darker edge.

The AI frontier has arrived. The time to secure it is now.

Come to the panel discussion at Black Hat

If you’re planning to be in Las Vegas this week for Black Hat USA 2025, please join F5 Field CISO Chuck Herrin and other experts for a panel discussion during AI Summit as they discuss how to ramp up digital defenses in the AI age

Also, be sure to visit our webpage to learn more about F5’s enterprise AI delivery and security solutions.