The sophistication and number of cybersecurity attacks are growing exponentially, creating significant risk of exposure for your apps deployed in on‑premises, hybrid, and multi‑cloud Kubernetes environments. Traditional security models are perimeter‑based, assuming that users are trustworthy (and the communication among them secure) if they’re located within the environment’s secured boundaries. In today’s distributed environments, the concept of a safe zone inside the perimeter no longer exists – communications originating from “inside” the environment can be just as dangerous as external threats.
In this blog, we explore the benefits of adopting a Zero Trust model to secure your Kubernetes infrastructure and how NGINX can help improve your security posture.
Zero Trust is a security model based on identity rather than location. It assumes that any request for access to applications, data, and devices might be an attack, whether the requester seems to be located on premises, remotely, or in the cloud.
To implement Zero Trust’s three core principles – never trust, always verify, continuously monitor – every user, service, application, and device is continuously required to present proof of authentication and authorization. Time‑bound privileges are granted based on dynamic access policies and on a least‑privilege basis.
All communications are encrypted and routed through a policy decision/enforcement point (PDP/PEP) that authenticates all parties and grants them privileges based on dynamic access policies. In addition, auditing, monitoring, reporting, and automation capabilities are in place to analyze, evaluate, and mitigate security risks.
Zero Trust improves your security posture in several ways.
Zero Trust is especially critical for modern, cloud‑native apps running in a Kubernetes environment. Loosely coupled and portable, distributed apps and services are containerized and run across hybrid, multi‑cloud environments where location‑based security is not an option. Security necessarily depends on continuous validation of identities and privileges, end-to-end encryption, and monitoring.
To fulfill Zero Trust principles, your Kubernetes environment must provide critical security capabilities such as authentication, authorization, access control, policies, encryption, monitoring, and auditing for users, applications, and services.
One possible way to achieve that is to build security into the app itself. However, that means your developers must implement multiple security procedures – for establishing and verifying trust, managing user identities and certificates, encrypting and decrypting all communication, and so on. They must also understand and integrate third‑party technologies like TLS and single sign‑on (SSO). All this not only adds complexity to your already complex Kubernetes deployment; it also distracts developers from what they need (and want!) to concentrate on: optimizing the app’s business functionality.
Don’t panic – there’s a better way: offload security and other non‑functional requirements to your Kubernetes infrastructure! Connectivity tools for Kubernetes clusters, such as Ingress controllers and service meshes, can deliver PDP and PEP functionality for all communication among your apps and services – whether initiated by users or by other apps and services. That means you can focus on core business expertise and functionality while delivering apps faster and more easily.
As the following diagram illustrates, the NGINX solution for secure Kubernetes connectivity includes all the infrastructure‑agnostic components and tools you need to successfully protect your users, distributed applications, microservices, and APIs at scale and end-to-end across any environment – on‑premises, hybrid, and multi‑cloud. Powered by the most popular data plane in the world, the solution combines:

- NGINX Ingress Controller – manages and secures connectivity at the edge of the cluster
- NGINX Service Mesh – secures and monitors service-to-service connectivity within the cluster
- NGINX App Protect WAF and DoS – protects apps and APIs from sophisticated attacks, deployed at the edge or within the cluster
The NGINX solution enables you to:

- Authenticate and authorize every user, service, and request
- Encrypt user-to-service and service-to-service communications end-to-end
- Enforce role‑based access control aligned with your organization’s security policies
- Monitor, trace, and audit activity across your clusters
- Protect apps and APIs from threats like the OWASP Top 10 and Layer 7 DoS attacks
As organizations scale, it becomes critical to offload requirements that aren’t specific to an app’s functionality – such as Zero Trust security features – from the application layer. We explained above how this frees developers from the burden of building, maintaining, and replicating security logic across their apps; instead they can easily leverage security technologies at the platform level. NGINX offers centralized security policy enforcement for Kubernetes at the edge of the cluster with NGINX Ingress Controller and within the cluster with NGINX Service Mesh. You can add advanced application protection from sophisticated cyberattacks with NGINX App Protect WAF and DoS deployed at the edge or within the cluster, depending on your app security requirements.
Let’s explore in depth how the NGINX solution includes the features you need to implement comprehensive Zero Trust security for your Kubernetes deployments.
One of the key principles of Zero Trust security is that every device, user, service, and request is authenticated and authorized. Authentication is the process of verifying identity – in other words, ensuring that each party participating in a communication is what it claims to be. Authorization is the process of verifying that a party is entitled to the access it is requesting to a resource or function.
To address this principle, the NGINX solution provides several options for implementing authentication and authorization, including HTTP Basic authentication, JSON Web Tokens (JWTs), and OpenID Connect through integrations with identity providers such as Okta and Azure Active Directory (AD). The NGINX solution also issues secure identities to services (much like application users are issued identification in the form of certificates), which enables them to be authenticated and authorized to perform actions across the Kubernetes cluster. In addition to handling workload identities, the NGINX solution automates certificate management with built‑in integrations with Public Key Infrastructure (PKI) and certificate authorities.
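As an illustrative sketch of the JWT option, authentication can be enforced at the edge with the NGINX Ingress Controller Policy and VirtualServer custom resources (JWT validation requires the NGINX Plus edition of the controller); the names, host, and Secret below are placeholders:

```yaml
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: jwt-policy
spec:
  jwt:
    realm: MyApp           # realm reported to unauthenticated clients
    secret: jwt-secret     # Secret of type nginx.org/jwk holding the JWK set
    token: $http_token     # read the JWT from the "token" request header
---
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: webapp
spec:
  host: webapp.example.com
  policies:
  - name: jwt-policy       # every request to this host must present a valid JWT
  upstreams:
  - name: webapp
    service: webapp-svc
    port: 80
  routes:
  - path: /
    action:
      pass: webapp
```

With a policy like this in place, requests without a valid token are rejected at the edge, before they ever reach the service.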
Because NGINX Ingress Controller is already scrutinizing all requests entering the cluster and routing them to the appropriate services, it’s the most efficient location for centralized user authentication and authorization, as well as for service authentication in some scenarios.
For more details, read Implementing OpenID Connect Authentication for Kubernetes with Okta and NGINX Ingress Controller on our blog.
Another Zero Trust principle is that all communication must be secured – its confidentiality and integrity maintained – no matter where the participants are located. Data must not be read by unauthorized parties or modified during transmission. To satisfy this principle, the NGINX solution uses SSL/TLS encryption for user-to-service communications and mutual TLS (mTLS) authentication and encryption for service-to-service communications.
If your app architecture doesn’t involve service-to-service communication within the Kubernetes cluster, NGINX Ingress Controller may be sufficient to meet your data integrity needs. There are two basic options:

- SSL/TLS termination – NGINX Ingress Controller decrypts traffic from external clients at the edge of the cluster and forwards it to the appropriate service
- SSL/TLS passthrough – NGINX Ingress Controller routes still‑encrypted traffic to the destination service, which decrypts it itself, so traffic stays encrypted all the way to the pod
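As a minimal sketch, TLS termination at the edge can be configured with a standard Kubernetes Ingress resource; the hostname, Secret, and Service names below are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-app
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app.example.com
    secretName: app-tls-secret   # kubernetes.io/tls Secret holding the cert and key
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-svc        # traffic is decrypted at the edge, then forwarded here
            port:
              number: 80
```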
If your architecture involves service-to-service communication within the cluster, for data integrity you need both NGINX Ingress Controller and NGINX Service Mesh. NGINX Service Mesh ensures that only specific services are allowed to talk to each other and uses mTLS to authenticate them and encrypt communications between them. You can implement mTLS in a “zero touch” manner with NGINX Service Mesh, meaning developers do not have to retrofit their applications with certificates or even know that mutual authentication is taking place.
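As one sketch of the “zero touch” experience: NGINX Service Mesh is deployed with a mesh‑wide mTLS mode, and (per its documentation) that mode can be overridden for an individual workload with a pod annotation – no application changes required. The Deployment below is hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
      annotations:
        # Override the mesh-wide mTLS setting for this workload;
        # "strict" rejects any plaintext traffic to the pod.
        config.nsm.nginx.com/mtls-mode: strict
    spec:
      containers:
      - name: backend
        image: backend:latest    # placeholder image; the app itself is unmodified
```

The injected sidecar handles certificate issuance, rotation, and the mutual TLS handshake on the workload’s behalf.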
For more on securing communication in a Kubernetes cluster, read The mTLS Architecture in NGINX Service Mesh on our blog.
Access control is another critical element in the Zero Trust model. Kubernetes uses role‑based access control (RBAC) to regulate which resources and operations are available to different users. It determines how users, or groups of users, can interact with any Kubernetes object or namespace in the cluster.
The NGINX Kubernetes connectivity solution is RBAC‑enabled for easy alignment with your organization’s security policies. With RBAC in place, users get gated access to the functionality they need to do their jobs without filing an IT ticket and waiting around for it to be fulfilled. Without RBAC, users can gain permissions they don’t need or aren’t entitled to, which can lead to vulnerabilities if the permissions are misused.
When you configure RBAC with NGINX Ingress Controller, you can control access for numerous people and teams by aligning permissions with the various roles in your organization’s application development and delivery environment. Its fine‑grained access management tool enables self‑service and governance across multiple teams.
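For example, standard Kubernetes RBAC can limit an application team to managing only NGINX Ingress Controller VirtualServer resources in its own namespace; the namespace and group names below are placeholders:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-team-ingress-editor
  namespace: team-a
rules:
- apiGroups: ["k8s.nginx.org"]
  resources: ["virtualservers", "virtualserverroutes"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-team-ingress-editor
  namespace: team-a
subjects:
- kind: Group
  name: team-a-devs              # hypothetical group from your identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: app-team-ingress-editor
  apiGroup: rbac.authorization.k8s.io
```

Team A can self‑serve its own routing configuration but cannot touch cluster‑wide resources or other teams’ namespaces.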
To learn how to leverage RBAC with NGINX Ingress Controller, watch our webinar on DevNetwork, Advanced Kubernetes Deployments with NGINX Ingress Controller. Starting at 13:50, our experts explain how to leverage RBAC and resource allocation for security, self-service, and multi-tenancy.
Auditing, monitoring, logging, tracing, and reporting are key elements in a Zero Trust environment. The more information you can collect about the state of your Kubernetes application infrastructure and the more effectively you can correlate, analyze, and evaluate it, the more you can strengthen your security posture.
You’re probably already using monitoring tools in your Kubernetes deployment and don’t need yet another one. To give you a full picture of what’s going on inside your clusters, we’ve instrumented the NGINX Plus API for easy export of metrics to any third‑party tool that accepts JSON, and we provide prebuilt integrations with popular tools like OpenTelemetry, Grafana, and Prometheus. You get targeted insights into app connectivity with deep traces so you can understand how requests are processed end-to-end: NGINX Ingress Controller provides insight into connectivity between your cluster and external clients, while NGINX Service Mesh covers connectivity among the containerized, microservices‑based apps and services within the cluster.
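As a sketch of the Prometheus integration, NGINX Ingress Controller can expose its metrics endpoint via the `-enable-prometheus-metrics` command‑line flag; the conventional `prometheus.io` scrape annotations shown here assume your Prometheus configuration honors them:

```yaml
# Fragment of the NGINX Ingress Controller Deployment pod template
metadata:
  annotations:
    prometheus.io/scrape: "true"     # discovered by annotation-based Prometheus scrape configs
    prometheus.io/port: "9113"       # default metrics port for the controller
spec:
  containers:
  - name: nginx-ingress
    image: nginx/nginx-ingress:latest
    args:
    - -enable-prometheus-metrics     # serve metrics in Prometheus exposition format
```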
With NGINX App Protect, you can further strengthen the security of your distributed applications by protecting them from threats like the OWASP Top 10 and Layer 7 denial-of-service (DoS) attacks. NGINX App Protect, an integral component of the end-to-end NGINX secure connectivity solution, provides agile, app‑centric security from the most advanced threats – well beyond basic signatures. It leverages F5’s leading and trusted security expertise and doesn’t compromise release velocity and performance. It can easily forward security telemetry to third‑party analytics and visibility solutions, and it reduces false positives with high‑confidence signatures and automated behavior analysis.
NGINX App Protect’s modular design means you can deploy one or both of WAF and DoS protection on the same or different instances, depending on your needs. For example, you might decide to deploy them with NGINX Ingress Controller at the edge of your cluster, which is ideal for providing fine‑grained protection that’s consistent across an entire single cluster. If instead you need app‑specific policies for multiple apps in a cluster, you can deploy WAF and/or DoS protection at the service or pod level.
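As a sketch of per‑service WAF protection, a build of NGINX Ingress Controller that includes NGINX App Protect can reference an App Protect policy through the `waf` field of the Policy resource; the policy and log configuration names below are placeholders:

```yaml
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: waf-policy
spec:
  waf:
    enable: true
    apPolicy: "default/dataguard-alarm"   # APPolicy resource defining the WAF rules
    securityLog:
      enable: true
      apLogConf: "default/logconf"        # APLogConf resource defining the log format
      logDest: "syslog:server=syslog-svc.default:514"
```

The Policy is then attached to a VirtualServer in the same way as an authentication policy, so different apps in the same cluster can carry different WAF policies.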
For more information about deploying WAF and DoS protection, read Shifting Security Tools Left for Safer Apps on our blog.
Whether you are at the beginning of your Kubernetes journey or an advanced user who has run Kubernetes in production for a while, NGINX offers a comprehensive set of tools and building blocks to meet your needs and improve your security posture.
Get started by requesting your free 30-day trial of NGINX Ingress Controller with NGINX App Protect WAF and DoS, and download the always‑free NGINX Service Mesh.
This blog post may reference products that are no longer available and/or no longer supported. For the most current information about available F5 NGINX products and solutions, explore our NGINX product family. NGINX is now part of F5. All previous NGINX.com links will redirect to similar NGINX content on F5.com.