Looking Back
How times have changed! Looking back at our 5 Cybersecurity Predictions for 2023 there was not a single mention of AI. And yet, here we are wrapping up 2023 (pun very much intended) and it seems almost impossible to read a headline in which AI is not somehow involved. While we may have missed this tsunami-sized sea change (and fear not, dear reader, we aim to rectify that this time around), we did score pretty well in the rest of our predictions if we do say so ourselves. Let’s look at a few of our predictions from last year to see how we fared.
Shadow APIs Will Lead to Unforeseen Breaches: Over the course of 2023, API-related security breaches rose by more than a third compared with the previous year. More than 120 million records were disclosed across 22 breaches, with the majority caused by broken authentication or authorization.1 However, 27% were the result of security misconfiguration, an (almost) fourfold increase compared with 2022. Shadow APIs are often deployed by developers who fail to follow the standards and policies already defined by their organization. This certainly falls into the category of ‘security misconfiguration’, so we think we’re safe to chalk this prediction up as a win.
Multi-Factor Authentication Will Become Ineffective: Last year we also claimed that multi-factor authentication (MFA) would become all but useless. This is one prediction we genuinely take no pleasure in being right about. In our recent 2023 Identity Threat Report, we note that with the ubiquitous deployment of MFA it was bound to become the focus of attackers. From real-time phishing proxies to SIM swapping, there is an almost limitless number of ways attackers can trick an individual into giving up their 6-digit code. It’s hard not to keep advocating for the use of MFA, but it is now crucial for application owners and businesses to understand the ease with which it can be bypassed and to look to new cryptographically based authentication schemes to combat fraud.
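To see why those 6-digit codes are so easily relayed, it helps to remember what they are: a short-lived value derived from a shared secret and the current time (RFC 6238 TOTP). A real-time phishing proxy does not need to break any cryptography; it only needs to pass the code along before the time window expires. A minimal sketch using only the Python standard library:

```python
import base64
import hashlib
import hmac
import struct


def totp(secret_b32: str, at: int, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32)
    # The moving factor is just the Unix time divided into 30-second windows.
    counter = struct.pack(">Q", at // step)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: pick 4 bytes at an offset from the digest.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# Secret from the RFC 6238 test vectors (ASCII "12345678901234567890", base32-encoded).
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # prints 287082
```

The code is deterministic for anyone holding the secret and a clock, and it is valid for the whole 30-second window, which is exactly the property a phishing proxy exploits by forwarding the victim's code to the real site in real time.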
Troubles with Troubleshooting: Those accusing us of playing it safe by predicting cloud misconfiguration incidents may have some solid basis for their argument. Nevertheless, we claim another small victory here. The reason we were keen to highlight cloud misconfiguration, despite it being an easy win, is the impact these events have. Too often cloud misconfiguration results not in one system being accessed, or a few gigabytes of data stolen, but in an entire organization being exposed and terabytes of data being leaked. In one of 2023’s later examples, researchers discovered that Microsoft had accidentally exposed 38 terabytes of data, private keys, and 30,000 private Teams messages through a simple misconfiguration of Azure storage.2
Open-Source Software Libraries Will Become the Primary Target: Compared with 2022, the past twelve months might lead you to think that software supply chain attacks had fallen out of favor with threat groups, at least if mainstream news headlines are anything to go by. Sadly, this is not the case. Early in 2023, researchers discovered what is believed to be the first attack to specifically target open-source software (OSS) used by the banking and finance sector. Since then, additional OSS attacks focused on the same industry have been spotted.3 In addition, a software supply chain report recently claimed that a staggering 1 in 8 open-source downloads carries a known risk.4 Even more striking, 245,000 malicious packages were discovered in 2023, twice the number found in all previous years combined.
Ransomware Will Expand on the Geopolitical Stage: Finally, when it comes to the presence (presents5) of ransomware on the political stage, we won’t award ourselves too many points. This prediction was somewhat inevitable as the impact of ransomware attacks continues to be felt around the globe. The threat is so high that a recent UK parliamentary committee claims the country is totally unprepared and at risk of a catastrophic ransomware attack. Across the pond, U.S. President Biden’s strategy, moving ransomware beyond the scope of law enforcement alone and making it a matter of national security, appears to be having some impact. At the start of 2023, the FBI helped take down Hive, a ransomware group that had extorted more than $100 million from its victims.6 The question remains as to whether enough is being done internationally to have a measurable and long-lasting impact on ransomware attacks.
So, buoyed by our success, we’re taking another stab at predicting the murky future of cybersecurity and this time we’re doubling our recommendations! With input from security operations engineers and threat intelligence analysts across F5, here is our take on what we’re likely to see in the coming months.
Looking Forward
AI Will Advance Attacker Capabilities
On the surface this doesn’t sound like much of a prediction, since security people everywhere have been predicting the use of large language models (LLMs) to write phishing emails since ChatGPT was first released to the public. Indeed, the more perspicacious among us realized that this is just the start, and that there will be myriad ways in which generative AI acts as a force multiplier for threats. Still, an unspecified threat is an uncontrolled threat, so our prognosticators have identified a handful of specific ways that LLMs can be brought to bear by attackers.
Prediction #1: Generative AI Will Converse with Phishing Victims
In April 2023, Bruce Schneier pointed out that the real bottleneck in phishing isn’t the initial click of the malicious link but the cash out, which often takes far more interaction with the victim than we might assume.7 Tafara Muwandi, head of F5’s Security Intelligence Center and magisterial source on all topics fraud-related, takes Bruce’s observation one step further and bets on LLMs to take over the back-and-forth between phisher and victim:
“Organized crime gangs will benefit from no longer needing to employ individuals whose entire job was to translate messages from the victim and act as a ‘support center’. Generative AI will be used to translate messages from the non-native language the attackers use and respond with authentic-sounding replies, coaching the victim along the social engineering path.”
By incorporating publicly available personal information to create incredibly lifelike scams, organized cybercrime groups will take the phishing-as-a-service we already know and magnify it both in scale and efficiency.
Prediction #2: Organized Crime Will Use Generative AI with Fake Accounts
In a related, though subtly different prediction, Muwandi also believes that organized cybercrime will create entirely fake online personas. Generative AI will be used to create fake accounts containing posts and images that are indiscernible from real human content. All of the attack strategies that fake accounts engender, including fraud, credential stuffing, disinformation, and marketplace manipulation, will see an enormous boost in productivity when it costs zero effort to match human realism.
Prediction #3: Nation-States Will Use Generative AI for Disinformation
Generative AI tools have the potential to significantly change the way malicious information operations are conducted. The combination of fake content creation, automated text generation for disinformation, targeted misinformation campaigns, and circumvention of content moderation constitutes a leap forward for malicious influence.
Remi Cohen, cyber threat intelligence manager from F5’s Office of the CISO, has this to say:
“We have already observed genAI-created content being used on a small scale in current conflicts around the world. Reports indicate AI-generated images have been spread by state and non-state actors to garner support for their side. At a larger scale, I expect to see this used by different actors ahead of major world events, which in 2024 include the U.S. presidential election and the Olympics in Paris.”
Concerns such as these led to Adobe, Microsoft, the BBC, and others creating the C2PA standard, a technique to cryptographically watermark the origin of digital media.8 Time will tell whether this will have any measurable impact with the general public.
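C2PA itself defines a full manifest format signed with X.509 certificates, but the underlying idea, cryptographically binding a provenance claim to a hash of the media, can be shown with a toy sketch. To be clear, this is not the C2PA format: the HMAC key below is a stand-in for a real public-key signing certificate, and the manifest layout is invented for illustration.

```python
import hashlib
import hmac
import json

# Stand-in secret; real provenance schemes use public-key signatures so that
# anyone can verify without holding the signing key.
SIGNING_KEY = b"publisher-signing-key"


def attach_provenance(media: bytes, claim: dict) -> dict:
    """Bind a provenance claim to the media by hashing and signing both."""
    manifest = {"claim": claim, "media_sha256": hashlib.sha256(media).hexdigest()}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify(media: bytes, manifest: dict) -> bool:
    """Check the signature and that the media still matches the recorded hash."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(manifest["signature"], expected)
        and unsigned["media_sha256"] == hashlib.sha256(media).hexdigest()
    )
```

Editing a single byte of the media, or of the claim, invalidates the manifest, which is the property that makes provenance labels on AI-generated content tamper-evident; whether the public checks them is a different question.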
Prediction #4: Advances in Generative AI Will Let Hacktivism Grow
Hacktivist activity related to major world events is expected to grow as computing power continues to become more affordable and, crucially, easier to use. With AI tools and the power of their smartphones and laptops, more unsophisticated actors are likely to join the fight in cyberspace as hacktivists.
Samantha Borer, cyber threat intelligence analyst in F5’s Office of the CISO, had these insights to share:
“Over the past couple of years, the world observed a resurgence in the volume of hacktivist activity, starting with threat actors expressing support for both sides of the Russian invasion of Ukraine. Only a small amount of hacktivist activity was initially seen in more recent conflicts, but as violence increases on the physical battlefield, so too have hacktivists moved to progressively more destructive attacks. Intelligence reports describe distributed denial-of-service attacks, data leaks, website defacements, and a clear focus on attempting to disrupt critical infrastructure.”
With world events like the Olympics, elections, and ongoing wars taking place in 2024, hacktivists are likely to use these opportunities to gain notoriety for their group and sympathy for the causes they support. Attendees, sponsors, and other loosely affiliated organizations are likely to become targets, if not victims, of these geopolitically motivated hacktivists. This targeting is likely to extend beyond individuals to the companies and organizations that support different causes.
Prediction #5: Web Attacks Will Use Real-Time Input from Generative AI
The ability of generative AI to create digital content, be it a phishing email or fake profile, has been well understood for some time. Its use in attacks can therefore be considered passive. However, David Warburton, Director of F5 Labs, points out that with their impressive ability to create code, LLMs can, and will, be used to direct the sequence of procedures during live attacks, allowing attackers to react to defenses as they encounter them.
“By leveraging APIs from open genAI systems such as ChatGPT, or by building their own LLMs, attackers will be able to incorporate the knowledge and ideas of an AI system during a live attack on a website or network. Should an attack on a website be blocked by security controls, an AI system can be used to evaluate the response and suggest alternative ways to attack.”
Look for LLMs to diversify attack chains to our detriment soon.
AI Will Introduce New Vulnerabilities
This is a concept that others have already explored in general, but it bears considering the various subtleties of how AI will introduce new vulnerabilities.
Prediction #6: LLLMs (Leaky Large Language Models)
For Malcolm Heath, Senior Threat Researcher at F5 Labs, it is specifically the enormous potential for opaque automation that complicates the task of security, privacy, and governance/compliance teams to perform their roles.
Fresh research has shown disturbingly simple ways in which LLMs can be tricked into revealing their training data, which often includes proprietary and personal data.9 Heath predicts that the rush to create proprietary LLMs will result in many more examples of training data being exposed, if not through novel attacks, then by rushed and misconfigured security controls:
“I expect to see some spectacular failures of GenAI-driven tools—such as massive leaks of PII, novel techniques to gain unauthorized access, and denial of service attacks.”
As with cloud breaches, the impact of LLM leaks has the potential to be enormous because of the sheer quantity of data involved.
Prediction #7: Generative Vulnerabilities
Many developers, seasoned and newbie alike, increasingly look to generative AI to write code or check for bugs. But without the correct safeguards in place, many foresee LLMs creating a deluge of vulnerable code which is difficult to secure. Whilst OSS poses a risk, its benefit lies in its inherent fix-once approach—should a vulnerability be discovered in an OSS library, it can be fixed once and then used by everyone who uses that library. With GenAI code generation, every developer will end up with a unique and bespoke piece of code. Per Jim Downey, Cybersecurity Evangelist:
“Code assistants write code so quickly that developers may not have time to review. Depending on when the LLM was built, it may not even be aware of the latest vulnerabilities, making it impossible for the model to construct code that avoids these vulnerabilities or avoids importing libraries with vulnerabilities.”
In the age of generative AI, organizations that prioritize speed over security will inevitably introduce new vulnerabilities.
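To illustrate the kind of flaw a code assistant can cheerfully reproduce at scale, consider the classic SQL injection pattern below. This is a hypothetical sketch using Python's sqlite3 module, not output from any particular assistant; the point is that string-built queries look perfectly functional until someone feeds them hostile input, while the parameterized version costs nothing extra.

```python
import sqlite3

# In-memory database with a couple of example rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")


def find_user_unsafe(name: str) -> list:
    # The pattern a code assistant may emit: SQL built by string interpolation.
    # Input like "' OR '1'='1" rewrites the WHERE clause and returns every row.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()


def find_user_safe(name: str) -> list:
    # Parameterized query: the driver treats the input as data, never as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()


payload = "' OR '1'='1"
print(len(find_user_unsafe(payload)))  # 2: injection dumps the whole table
print(len(find_user_safe(payload)))    # 0: no user has that literal name
```

Both functions pass a happy-path test with ordinary names, which is exactly why vulnerable generated code slips through review when developers are moving faster than they can read.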
Architectural Complexity Will Complicate Security
Several of us foresee greater challenges in security due to various shifts in system and security architectures.
Prediction #8: Attacks on the Edge
For Shahn Backer, senior solutions architect, it is the rise of edge computing that will drive a dramatic expansion in attack surface. He notes that physical tampering, management challenges, and software and API vulnerabilities are all risks that are exacerbated in an edge context, which is why Shahn predicts that edge compute will emerge as a leading attack surface.
“Seventy-five percent of enterprise data will be generated and processed outside the traditional confines of data centers or the cloud. This paradigm redefines organizational boundaries, since workloads at the edge may harbor sensitive information and privileges.”
Just as with MFA, attackers will focus on areas where their time has the biggest impact. If the shift to edge computing is handled as carelessly as cloud computing can be, expect to see a similar number of high-profile incidents over the coming year.
Prediction #9: Attackers Will Improve Their Ability to Live Off the Land
There is another risk of growing architectural complexity: more opportunities for attackers to use our tools against us. Kieron Shepard, security solutions architect, foresees that the growing complexity of IT environments, particularly in cloud and hybrid architectures, will make it more challenging to monitor and detect living-off-the-land (LOTL) attacks.
“Attackers are increasingly turning to LOTL techniques that use legitimate management software already present on victim systems to achieve their malicious objectives. To make things worse, LOTL attacks can be incorporated into supply chain attacks to compromise critical infrastructure and disrupt operations.”
Unless we improve our visibility in our own networks, we can expect to see attackers use our own tools against us with increasing frequency.
Prediction #10: Cybersecurity Poverty Line Will Become Poverty Matrix
For his part, senior threat researcher Sander Vinberg is concerned about the effect that trends in security architecture will have on the security poverty line, a concept advanced more than a decade ago by the esteemed Wendy Nather. The security poverty line is defined as the level of knowledge, authority, and most of all budget necessary to accomplish the bare minimum of security controls, and Vinberg sees the cost and complexity of current security offerings forcing organizations to choose between entire families of controls.
Today it seems that organizations need security orchestration, automation, and response (SOAR), security information and event management (SIEM), vulnerability management tools, and threat intelligence services, as well as programs like configuration management, incident response, penetration testing, and governance, risk, and compliance. Vinberg explains:
“The key issue here is that many enterprise organizations choose to consume these controls as managed services, such that the expertise is guaranteed but so is the cost. The heightened cost of entry into each of these niches means that they will increasingly become all-or-nothing, and more organizations will eventually need to choose between them.”
In other words, the idea of a simple poverty line no longer captures the tradeoff that exists today between focused capability in one niche and covering all of the bases. Instead of a poverty line we will have a poverty matrix of n dimensions, where n is the number of niches, and even well-resourced enterprises will struggle to put it all together.
Conclusion
As we peer into the future of cybersecurity, these predictions underscore the need for continuous adaptation and innovation in defending against evolving cyber threats. Whether it's addressing the socioeconomic disparities in cybersecurity resilience, fortifying edge computing environments, or preparing for seemingly endless AI-driven assaults on our lives, the cybersecurity landscape of 2024 demands a proactive and collaborative approach to safeguard our digital future.