NGINX and NGINX Plus provide a number of features that enable them to handle most SSL/TLS requirements. They use OpenSSL and the power of standard processor chips to provide cost‑effective SSL/TLS performance. As the power of standard processor chips continues to increase and as chip vendors add cryptographic acceleration support, the cost advantage of standard processor chips over specialized SSL/TLS chips continues to widen.
There are three major use cases for NGINX and NGINX Plus with SSL/TLS.
When NGINX is used as a proxy, it can offload the SSL/TLS decryption processing from the backend servers. Doing decryption at the proxy has a number of advantages, and the configuration examples later in this post show how to set it up.
For more details, see NGINX SSL Termination in the NGINX Plus Admin Guide.
There are times you might need NGINX to encrypt traffic that it sends to backend servers. These requests can arrive at the NGINX server as plain text or as encrypted traffic that NGINX must decrypt in order to make a routing decision. Using a pool of keepalive connections to the backend servers minimizes the number of SSL/TLS handshakes and thus maximizes SSL/TLS performance. This is achieved very simply by configuring NGINX to proxy to “https” so that it automatically encrypts traffic that is not already encrypted.
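As a rough sketch of that setup, the configuration below combines a keepalive pool with encryption to the backends. The upstream addresses match the examples later in this post; the pool size of 16 is an arbitrary illustration, not a recommendation.

upstream backends {
    server 192.168.100.100:443;
    server 192.168.100.101:443;

    keepalive 16;                      # idle keepalive connections cached per worker process
}

server {
    listen 80;
    server_name www.example.com;

    location / {
        proxy_pass https://backends;   # encrypt traffic to the backend servers

        # Keepalive connections to the upstream group require HTTP/1.1
        # and an empty Connection header
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}

Because each cached connection stays open, subsequent requests reuse it instead of performing a new SSL/TLS handshake.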
Because NGINX can do both decryption and encryption, you can achieve end‑to‑end encryption of all requests with NGINX still making Layer 7 routing decisions. In this case the clients communicate with NGINX over HTTPS, and it decrypts the requests and then re‑encrypts them before sending them to the backend servers. This can be desirable when the proxy server is not collocated in a data center with the backend servers. As more and more servers are being moved to the cloud, it is becoming more necessary to use HTTPS between a proxy and backend servers.
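One way to sketch that end‑to‑end setup, reusing the certificate and upstream definitions from the examples later in this post, is:

upstream backends {
    server 192.168.100.100:443;
    server 192.168.100.101:443;
}

server {
    listen 443 ssl;                              # decrypt traffic arriving from clients
    server_name www.example.com;

    ssl_certificate www.example.com.crt;
    ssl_certificate_key www.example.com.key;

    location / {
        proxy_pass https://backends;             # re-encrypt traffic to the backend servers
    }
}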
NGINX can handle SSL/TLS client certificates and can be configured to make them optional or required. Client certificates restrict access to your systems to pre‑approved clients without requiring a password. You can revoke individual certificates by adding them to a certificate revocation list (CRL), which NGINX checks to determine whether a client certificate is still valid.
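The sketch below shows one way to require client certificates and check them against a CRL. The CA bundle (ca.crt) and CRL (ca.crl) file names are placeholders, not files referenced elsewhere in this post:

server {
    listen 443 ssl;
    server_name www.example.com;

    ssl_certificate www.example.com.crt;
    ssl_certificate_key www.example.com.key;

    # Placeholder file names: the CA certificate used to verify client certificates
    # and the certificate revocation list to check them against
    ssl_client_certificate ca.crt;
    ssl_crl ca.crl;
    ssl_verify_client on;            # change to 'optional' to make client certificates optional

    location / {
        proxy_pass http://backends;  # 'backends' as defined in the examples below
    }
}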
There are a number of other features that help support these use cases as well. For more details, check out the NGINX documentation and the NGINX Plus Admin Guide.
Here are a few examples of NGINX’s security features. These examples assume a basic understanding of NGINX configuration.
The following configuration handles HTTP traffic for www.example.com and proxies it to an upstream group:
upstream backends {
    server 192.168.100.100:80;
    server 192.168.100.101:80;
}

server {
    listen 80;
    server_name www.example.com;

    location / {
        proxy_pass http://backends;
    }
}
Now add HTTPS support, so that NGINX decrypts the traffic using the certificate and private key and communicates with the backend servers over HTTP:
upstream backends {
    server 192.168.100.100:80;
    server 192.168.100.101:80;
}

server {
    listen 80;
    listen 443 ssl;                          # 'ssl' parameter tells NGINX to decrypt the traffic
    server_name www.example.com;

    ssl_certificate www.example.com.crt;     # the certificate file
    ssl_certificate_key www.example.com.key; # the private key file

    location / {
        proxy_pass http://backends;
    }
}
Or if you instead receive traffic over HTTP and send it to the backend servers over HTTPS:
upstream backends {
    server 192.168.100.100:443;
    server 192.168.100.101:443;
}

server {
    listen 80;
    server_name www.example.com;

    location / {
        proxy_pass https://backends;   # 'https' prefix tells NGINX to encrypt the traffic
    }
}
To try NGINX Plus, start your free 30-day trial today or contact us to discuss your use cases.
"This blog post may reference products that are no longer available and/or no longer supported. For the most current information about available F5 NGINX products and solutions, explore our NGINX product family. NGINX is now part of F5. All previous NGINX.com links will redirect to similar NGINX content on F5.com."