Recently, I read an article about one of the latest ransomware attacks, this time against healthcare provider Lehigh Valley Health Network (LVHN). Troubling as it always is to see cybercriminals take an organization’s systems and data hostage, this cyberattack was especially concerning. As news outlet The Register describes the incident:
“[cybergang] BlackCat (also known as ALPHV) broke into one of the Lehigh Valley Health Network (LVHN) physician's networks in the USA, stole images of patients undergoing radiation oncology treatment along with other sensitive health records… LVHN refused to pay the ransom, and earlier this month BlackCat started leaking patient info, including images of at least two breast cancer patients, naked from the waist up.”
While every extortion attempt can create a renewed sense of purpose in better securing networks and sensitive data, the fact that ransomware and other forms of cyberattack can cause lasting harm and psychological trauma to individuals should be a major wake-up call for any organization not already embarking on a zero trust security journey. Any organization in a critical infrastructure sector, or any organization entrusted with sensitive personal information, should be integrating the core tenets of a zero trust model.
It probably goes without saying that if we’re not already thinking a few steps ahead of where we are today with app security, we will quickly fall behind in the race against cybercriminals and their emerging threat tactics. So it’s never too soon to ask ourselves: what does the future hold for zero trust? How will it evolve to address not only a changing threat landscape but also the continually changing nature of our digital world and of apps themselves?
To help answer these questions, I sat down with two security experts here at F5: Distinguished Engineer Ken Arora and Senior Director of Product Development Mudit Tyagi.
This LVHN breach really got me thinking about how organizations are implementing zero trust principles. It’s almost like a time of reckoning, where a year of free credit monitoring isn’t going to cut it anymore for consumers impacted by a breach. Would you say the stakes are much higher today for businesses?
Mudit: It’s evident that the stakes are higher, especially with state-sponsored hackers who have the funding, technical resources, and time to research and launch very targeted attacks against corporations and other nations, not to mention attacks aimed at upsetting general social stability. We remember the Aurora incident from 2010, when intellectual property was stolen from Google and other US corporations. A decade later, in 2020, the SolarWinds supply chain attack was used as a mechanism to get inside many sensitive networks. The potential for ‘high stakes’ impact has led the White House to issue an executive order requiring all government agencies to follow the principles of zero trust.
Ken: I agree that the scope of exposure will only increase with time. And that’s not only because more data is going to be available online, but also because whole new classes of compromise will be exposed as we move toward adopting AI at scale. I’ll note that societal governance around managing the risks of any new technology always takes time. For example, it took until a few years ago for companies, especially in regulated sectors, to face real legal liability; we’ve seen this with Experian and T-Mobile to the collective tune of over $1B. More recently, we see individuals such as the CSO of Uber being held personally accountable in cases where significant negligence was present, much as malpractice is handled in healthcare. I think these developments make it even more incumbent on security professionals to follow best practices and on the legal system to come up with safe harbor guidelines.
Given today’s landscape, what do you think is on the horizon with zero trust?
Mudit: Historically, Google introduced BeyondCorp after the Aurora attacks as one of the early incarnations of zero trust, although the idea is even older. Today, offerings such as zero trust network access (ZTNA) and secure access service edge (SASE) have emerged, applying zero trust principles to user and device access. However, more effort must go toward applying zero trust principles inside the network itself, where workloads interact with each other to enable business processes and user experiences. We call this workload-to-workload zero trust. Simply put, the access-related approaches try to stop a breach from happening, whereas workload-level zero trust focuses on minimizing the impact of a breach already in progress.
The industry struggles to implement controls that follow the zero trust principle of “assume breach.” To deal with post-breach scenarios, workloads require “identities” just as users and devices do, and those workloads should be continuously monitored to prevent malware that’s already inside the network from doing harm. This is especially critical given today’s mobile workforce and distributed infrastructure, where apps are spread across private clouds, public clouds, and legacy systems.
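To make the workload-identity point concrete, here is a minimal sketch of deny-by-default, workload-to-workload authorization. The SPIFFE-style identity strings, service names, and policy entries are illustrative assumptions, not a prescribed configuration or any particular vendor’s API.

```python
# Minimal sketch: a receiving workload refuses to implicitly trust callers just
# because they are "inside" the network. Identities, services, and the policy
# table below are illustrative placeholders.
from dataclasses import dataclass

# Which caller identities may invoke which services, and with which methods.
ALLOWED_CALLERS = {
    "billing-api": {
        "spiffe://example.org/ns/prod/sa/checkout": {"GET", "POST"},
        "spiffe://example.org/ns/prod/sa/reporting": {"GET"},
    },
}

@dataclass
class WorkloadRequest:
    caller_id: str       # identity presented by the calling workload (e.g., from its mTLS cert)
    target_service: str  # service being invoked
    method: str          # operation requested

def authorize(req: WorkloadRequest) -> bool:
    """Deny by default; allow only caller/target/method tuples listed in policy."""
    per_caller = ALLOWED_CALLERS.get(req.target_service, {})
    return req.method in per_caller.get(req.caller_id, set())

# A known workload using an allowed method is admitted...
print(authorize(WorkloadRequest("spiffe://example.org/ns/prod/sa/checkout", "billing-api", "POST")))  # True
# ...while an unknown workload is rejected, even though it is already on the network.
print(authorize(WorkloadRequest("spiffe://example.org/ns/prod/sa/unknown", "billing-api", "POST")))   # False
```

In practice, the identities and policy would come from a workload identity system and be re-evaluated continuously rather than hard-coded; the deny-by-default shape is the point.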
Since zero trust requires an “assume breach” mentality, it will require security measures far beyond just access control.
Ken: Exactly. We must also include threat intelligence, security information and event management, continuous diagnostics and mitigation, and so on. The changing nature of apps and ever-increasing AI-driven traffic require that we not only have a holistic approach to zero trust but that we apply these principles to new areas of the infrastructure, like the workloads themselves.
As Mudit pointed out, the principles at the core of zero trust have been known to and internalized by security professionals for a long time (and I mean security broadly, not just cybersecurity). The most recent incarnation, ZTNA, is fundamentally about applying the principles of “always verify” and “continuously monitor and assess” to the identity of users and devices that want to access a network.
I think the next logical steps in the evolution of zero trust applied to cybersecurity will be: 1) up-leveling from network access to application access, meaning using application-layer abstractions and protections, not network-layer ones; and 2) extending the application of zero trust principles beyond users and devices to workloads.
Why is the workload so critical? And if we go that deep, do we then also apply zero trust principles to development practices themselves (think code repositories and the like)?
Ken: The short answer is: workloads are critical because they’re the new insider threat.
Consider that the majority of code in cloud-native applications is developed outside the control of the application owner’s organization, in an open source environment. The software development practices used to produce that code are generally loosely governed and poorly documented. The testing performed on that code after it is imported is mostly functional and performance-related, with short shrift given to security assessment. As a result, most application code is “implicitly trusted,” which is exactly the behavior zero trust is meant to discourage. And that assumes well-intentioned code suppliers. If one further considers that active adversaries are attempting to use open source code as a backdoor, this emerges as a first-class threat surface. By the way, many of the most recent headline-grabbing attacks (Log4j, SolarWinds, and 3CX) are examples of supply chain attacks, which are out of scope for both ZTNA and user identity-based security solutions.
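As a small illustration of not implicitly trusting imported code, the sketch below admits a third-party artifact into a build only if it matches a pinned digest. The dependency names and payloads are placeholders invented for the example, not real packages or a specific build system’s interface.

```python
# Minimal sketch: verify each third-party artifact against a pinned SHA-256
# digest before admitting it into the build. Names and payloads are placeholders.
import hashlib

trusted_payload = b"example dependency contents"  # stand-in for a previously vetted artifact
PINNED_DIGESTS = {
    # artifact name -> digest recorded when the artifact was originally vetted
    "example-dep-1.0.0.tgz": hashlib.sha256(trusted_payload).hexdigest(),
}

def verify_dependency(name: str, payload: bytes) -> bool:
    """Deny by default: unknown names or mismatched digests are rejected."""
    expected = PINNED_DIGESTS.get(name)
    return expected is not None and hashlib.sha256(payload).hexdigest() == expected

print(verify_dependency("example-dep-1.0.0.tgz", trusted_payload))       # True: matches pin
print(verify_dependency("example-dep-1.0.0.tgz", b"tampered contents"))  # False: digest mismatch
print(verify_dependency("unvetted-dep-0.0.1.tgz", b"anything"))          # False: never vetted
```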
Let’s focus on that last point for a moment—how open source code can be used as a backdoor. This goes back to your earlier point about zero trust requiring measures like threat intelligence, security information and event management, continuous diagnostics, and so on.
Mudit: Yes. It’s important to consider how we would think about security if we begin with “assume breach.” The network-centric controls play some role; they can look for communications to command-and-control servers. However, the goal is to stop malware that has already made it in from successfully doing harm. When the malware does its work, it must reveal itself. If we watch all workloads and understand their characteristics, we have a much better shot at detecting the nefarious activity associated with the malware.
Researchers at MITRE have done the security world a big favor by creating a taxonomy with which we can understand most attacks as a combination of Tactics, Techniques, and Procedures (TTPs), described in the MITRE ATT&CK framework. They also describe, in the MITRE D3FEND framework, a set of countermeasures that help detect those TTPs. An important piece of that framework is process analysis, which requires applying zero trust principles at the workload level.
It is very important that the code constituting the workload is scanned for vulnerabilities by static application security testing (SAST) and dynamic application security testing (DAST) tools during the development cycle. One can also prepare for workload-level zero trust by building a baseline of low-level workload characteristics, such as system calls, during the development cycle. With this baseline, it becomes easier to detect and analyze anomalies once the workload runs in production. The D3FEND framework also suggests a whole set of countermeasures focused on hardening, which should happen during the development cycle.
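Here is a minimal sketch of the baselining idea Mudit describes: record the set of system calls a workload makes during development and test runs, then flag anything outside that baseline in production. The syscall names and the notion of feeding in traces (for example, parsed from strace or an eBPF tracer) are illustrative assumptions.

```python
# Minimal sketch: baseline a workload's system calls during development, then
# flag production syscalls that never appeared in the baseline.
from collections import Counter
from typing import Iterable, Set

def build_baseline(dev_traces: Iterable[Iterable[str]]) -> Set[str]:
    """Union of syscalls observed across development/test executions."""
    baseline: Set[str] = set()
    for trace in dev_traces:
        baseline.update(trace)
    return baseline

def detect_anomalies(prod_trace: Iterable[str], baseline: Set[str]) -> Counter:
    """Count syscalls observed in production that fall outside the baseline."""
    return Counter(call for call in prod_trace if call not in baseline)

# Baseline gathered during the development cycle.
baseline = build_baseline([
    ["openat", "read", "write", "close", "futex"],
    ["openat", "read", "mmap", "close"],
])

# In production, 'execve' and 'connect' fall outside the baseline and warrant
# analysis (e.g., malware spawning a shell or phoning home).
print(detect_anomalies(["read", "write", "execve", "connect", "connect"], baseline))
# Counter({'connect': 2, 'execve': 1})
```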
Let’s touch on the role of AI for security generally and zero trust specifically. What security challenges and opportunities will AI create? Also, ChatGPT has been everywhere in the news. I’ve seen projections that 90% of internet traffic will be AI-generated by 2026. Will this change security? If so, how?
Mudit: Automated attacks are already very difficult to defend against. The attacker can keep morphing and changing the attack, which creates a lot of noise for the SOC. Defenders already use automation to fight these automated attacks. Even as we learn the signatures of the automated attacks, false positives are a big issue. With AI, it will be easier to mimic normal users to a much greater degree than the automated attacks of today. This will make detection of anomalies harder and will require very sophisticated contextual analysis to minimize false positives.
Ken: I think AI will continue to have a huge impact on security. To start with, it clearly has a large impact on how the bad guys execute their attacks and on how defenders detect and remediate those attacks. In brief, the cat-and-mouse game will persist, with AI acting as an exponentially faster and more agile proxy for what humans could do manually.
When it comes to ChatGPT specifically, I think the zero trust mantra of not blindly trusting will apply here. Just as the premise of ZTNA is “don’t blindly trust users and devices,” and that premise should be extended to workloads, I think AI content generation will move us to not blindly trust content. In many ways, this comes full circle to how society works: how much trust we place in a specific piece of content depends on where it comes from. For example, I might trust a statement about the integrity of a check that comes from my bank more than a similar statement from a random person on the street. In short, data attribution matters. And as AI generates more content, we’ll see this idea, which has existed in the real world for a long time, emerge as a key consideration in the digital world.
Given the accelerating pace at which new threats are emerging, discussion around the future of zero trust is poised to stay relevant—especially when it comes to minimizing the impact of insider threats and applying zero trust principles to the workload. As more AI-generated material floods the internet, the industry’s approach to zero trust will necessarily evolve, and we’ll feature more perspectives going forward. Thanks for joining us.
____
For those interested in exploring more around “assume breach,” read Ken’s latest article here.