How does SecOps feel about AI? Part 2: Data protection

F5 Research | September 15, 2025

AI generates a lot of feelings. Some believe it is the next in a long line of fads, soon to join the ranks of NFTs and 3D TVs. Others are building bunkers in preparation for malevolent AGI overlords that become self-aware. Amidst all the hyperbole, there is one reality that can be stated with certainty: AI is connected to a lot of data.

There’s a lot of hype around AI that gets talking heads excited, scared, or skeptical, but at F5, we’re interested in how everyday practitioners are feeling about it. To understand the reality of current challenges and concerns, we conducted a comprehensive sentiment analysis of the Internet’s largest community of security professionals, Reddit’s r/cybersecurity. Shawn Wormke’s Part 1 blog, “How does SecOps feel about AI?”, summarized the study’s overall findings. All quotes come directly from security practitioner comments posted between July 2024 and June 2025. Here, we take a deeper dive into the top AI-related concern of the year: data security.
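
For readers curious what this kind of analysis can look like in practice, below is a minimal, hypothetical sketch of scoring AI-related r/cybersecurity comments for sentiment. It is not F5’s methodology; the PRAW credentials, keyword pattern, and default sentiment model are illustrative assumptions.

```python
# Hypothetical sketch: score sentiment of AI-related comments from r/cybersecurity.
# Credentials, keyword pattern, and model choice are placeholders, not F5's methodology.
import re

import praw
from transformers import pipeline

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # placeholder credentials
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="ai-sentiment-sketch/0.1",
)

AI_PATTERN = re.compile(r"\b(ai|llm|genai|chatgpt|copilot)\b", re.IGNORECASE)
sentiment = pipeline("sentiment-analysis")  # default English sentiment model

results = []
for comment in reddit.subreddit("cybersecurity").comments(limit=500):
    if AI_PATTERN.search(comment.body):
        score = sentiment(comment.body[:512])[0]  # truncate long comments
        results.append((comment.id, score["label"], score["score"]))

negative = sum(1 for _, label, _ in results if label == "NEGATIVE")
print(f"AI-related comments sampled: {len(results)}; negative: {negative}")
```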

Data security entered 2025 as the top AI-related concern, and January’s DeepSeek attack only accelerated that trend.

Concerns surfaced as sensitive disclosures, shadow AI, and compliance

Many envision an AI threat landscape of bad guys leveraging AI to execute intricate social engineering attacks and unleash hordes of intelligent bots. Those threats are legitimate, but security professionals paint a picture that is significantly more naïve, just as detrimental, and far more widespread. In fact, SecOps concerns around internal AI misuse surfaced 2.3x more frequently than concerns about malicious abuse.

“In a world of high-powered AI, maybe others have AI anxiety but my only concern is the clients and coworkers using AI to do incredibly stupid things, not adversaries using it to do incredibly smart things.”
-Comment on Reddit’s r/cybersecurity

This cuts to the heart of the first issue: sensitive disclosures. As one practitioner framed it succinctly, “Let’s be real, everyone’s using LLMs at work and dropping all kinds of sensitive info into prompts.” As models gain larger context windows and more file types become usable with retrieval-augmented generation (RAG), employees have learned that the quickest path to an informed output is giving the LLM all the information it might need. This stands in direct contradiction to the principle of least privilege, an essential pillar of zero trust. Simply put, “there is always tension between security and capability.”
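
A common first-line mitigation for this kind of oversharing is to screen prompts for obviously sensitive content before they ever reach an external model. The sketch below is a deliberately simple illustration of that idea; the regex patterns and the check_prompt helper are hypothetical examples, not a description of any particular product.

```python
# Illustrative pre-submission prompt check. The patterns below are simplistic
# placeholders; production data loss prevention engines are far more thorough.
import re

SENSITIVE_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns detected in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

prompt = "Summarize this runbook. Our AWS key is AKIAABCDEFGHIJKLMNOP."
findings = check_prompt(prompt)
if findings:
    print(f"Blocked: prompt appears to contain {', '.join(findings)}")
else:
    print("Prompt forwarded to the model")
```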

Traditional strategies for policy enforcement are not working

The logical step most organizations take to secure AI is an acceptable use policy (AUP). There exists a wide range of strategies, but the consensus is that traditional deterrents and restriction methods are insufficient.

“Banning use of any tech seems to backfire, and just lead to use outside your control…just saying ‘you cannot use this’ will not stop people from using it…By the time you’ve discovered the violation and your org’s HR is in a position to dismiss them, they will have already used your org’s data in whatever AI they chose.”
-Comment on Reddit’s r/cybersecurity

As one user describes, traditional tools like web application firewalls (WAFs) and DNS filtering merely delay the inevitable: “By blocking them you’re essentially forcing your data into these free services. It will always be a game of whack a mole dealing with blacklisting.” This introduces one of the most discussed challenges of the past year: shadow AI. New models are released daily, and wrappers of those models are vibe-coded hourly. Users will always find ways to circumvent policies they see as roadblocks to getting their jobs done.

These top two concerns of shadow AI and sensitive data disclosures combine to create a worst-case environment for security teams: rampant exposure with zero visibility. Users might turn to mainstream LLMs to cut down on reading time, possibly uploading confidential documents in the process. With a shadow AI detection solution in place, the SecOps team could at least observe those interactions and choose among multiple options for risk mitigation: pay closer attention to that individual’s future interactions, or gate critical resources from them until behaviors change. Without such a solution, traditional countermeasures like firewalls and DNS blocking merely relocate users to obscure wrappers of the same foundation models, erasing visibility into the form, fashion, and location of risky behaviors.
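
As a concrete, if limited, starting point for visibility, some teams simply mine the proxy or DNS logs they already collect for traffic to known AI endpoints. The sketch below assumes a simple “user,timestamp,domain” CSV log and a hand-maintained domain list; both are illustrative placeholders rather than a real shadow AI discovery capability.

```python
# Rough sketch: flag potential shadow AI usage from existing proxy logs.
# The log format ("user,timestamp,domain" per line) and the domain list
# are assumptions for illustration only.
import csv
from collections import Counter

KNOWN_AI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
}

def find_ai_traffic(log_path: str) -> Counter:
    """Count requests to known AI services per user from a simple proxy log."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for user, _timestamp, domain in csv.reader(f):
            if domain.strip().lower() in KNOWN_AI_DOMAINS:
                hits[user] += 1
    return hits

if __name__ == "__main__":
    for user, count in find_ai_traffic("proxy.log").most_common(10):
        print(f"{user}: {count} requests to known AI services")
```

Of course, an approach like this only surfaces the services on the list, which is exactly the whack-a-mole limitation practitioners describe: new wrappers of the same models will not appear until someone adds them.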

Compliance is where all exposures culminate

With mounting compliance standards like the EU AI Act and the General Data Protection Regulation (GDPR) layered atop existing industry-specific regulations, organizations without proper AI data governance risk punitive fines, legal liability, and erosion of public trust.

“Tools that lack strong audit logs or don’t let you restrict user data sharing are also red flags. Some just blanket-ban anything that doesn’t meet SOC2 or GDPR compliance. For many, it comes down to risk tolerance: if it can’t guarantee control over sensitive info, it’s out.”
-Comment on Reddit’s r/cybersecurity

Security professionals have experienced their share of technologies for which enthusiasm and the desire for competitive parity outpaced security considerations. Cloud computing followed a similar path to where AI is today: rapid adoption and anticipation of exciting new possibilities, followed by widespread misconfigurations, excessive access, and failures of the shared responsibility model. Sound familiar? The primary difference is that the cloud had a significantly smaller pool of parties capable of contributing to the overall risk. The new frontier of AI security expands the immediate focus beyond cloud architects and engineers as the primary suspects to anyone with access to sensitive data, including the models themselves.

“There is still a pretty clear gap between the AI companies investing in security and compliance, and the ones that are hoping customers are so excited about the tech they skip those steps.”
-Comment on Reddit’s r/cybersecurity

Practitioners understand the assignment

There has never been a technology that did not introduce some level of risk, and never one wherein the world collectively said, “Too risky, let’s all stop immediately.” Security practitioners understand they have an important and challenging road ahead of them.

“Cybersecurity is about risk, not binary statements of secure/insecure. Gen AI presents a lot of risk…does this mean that GenAI isn’t secure to the point it shouldn’t be touched with a ten foot pole? I don’t think so. It’s all about weighing up the benefits and risks to the business in the context of the businesses’ goals and risk appetite.”
-Comment on Reddit’s r/cybersecurity

Ensuring that AI interactions with data have effective guardrails and continuous observability is a challenging endeavor, but a necessary one if AI adoption continues at its current pace.

F5 is already taking significant action to address these challenges, and we will continue to rely on SecOps voices to steer our priorities. Learn more here.


About the Author

Mark Toler, Product Marketing Manager
