[Editor – The NGINX ModSecurity WAF module for NGINX Plus officially went End-of-Sale as of April 1, 2022 and is transitioning to End-of-Life effective March 31, 2024. For more details, see F5 NGINX ModSecurity WAF Is Transitioning to End-of-Life on our blog.]
We are pleased to announce that NGINX Plus Release 17 (R17) is now available. NGINX Plus is the only all‑in‑one load balancer, content cache, web server, and API gateway. NGINX Plus is based on NGINX Open Source and includes exclusive enhanced features and award‑winning support.
New in this release is support for TLS 1.3, the latest version of the protocol responsible for securing traffic on the Internet. It’s been over 10 years since TLS 1.2 was released, and a lot has changed since then. Numerous security vulnerabilities have been found in TLS 1.2, such as FREAK, Heartbleed, POODLE, and ROBOT. Many of these vulnerabilities resulted from the large number of insecure configuration options in TLS 1.2 that left sites open to attack.
TLS 1.3 is addition by subtraction. Many insecure ciphers have been removed and Diffie‑Hellman key exchange is now mandatory. The result is a slimmed down, faster, and more secure TLS. As of this writing, Alpine Linux 3.9, FreeBSD 12.0, and Ubuntu 18.10 support TLS 1.3, so you can use them with NGINX Plus R17 for TLS 1.3 in your production environment; other OS vendors will doubtless support TLS 1.3 in future releases. Note that F5 BIG-IP and other hardware load balancers do not currently support TLS 1.3 in full.
NGINX Plus R17 also includes these new features:
- Two‑stage rate limiting – The new delay parameter helps NGINX Plus rate limiting better accommodate typical browser request patterns. The existing delay and reject enforcement methods can now be combined, whereby excessive requests are initially delayed and then ultimately rejected if the rate limit is still exceeded.
- Fetching of JSON Web Keys from a remote server – When performing JWT validation, NGINX Plus can now fetch the set of JSON Web Keys (JWKs) from the identity provider automatically, using the new auth_jwt_key_request directive.
- New NGINX ModSecurity WAF build – Based on ModSecurity 3.0.3, the new build performs twice as well as previous releases.

Additional enhancements in NGINX Plus R17 include TCP keepalives to upstreams, SNI support in clustered environments, and more.
Important Changes in Behavior
NGINX Plus R13 introduced the all‑new NGINX Plus API for metrics collection and dynamic reconfiguration of upstream groups, replacing the Status and Upstream Conf APIs that previously implemented those functions. As announced at the time, the deprecated APIs continued to be available and supported for a significant period of time, which ended with NGINX Plus R16. If your configuration includes the status and/or upstream_conf directives, you must replace them with the api directive as part of the upgrade to R17.
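As a sketch of what the migration involves (the port number and access restrictions here are illustrative choices, not requirements), a configuration using the deprecated directives might change as follows:

```nginx
# Before (deprecated, removed after R16):
#
# server {
#     listen 8080;
#     location /status        { status; }
#     location /upstream_conf { upstream_conf; }
# }

# After: a single API endpoint serves both metrics collection
# and dynamic reconfiguration of upstream groups
server {
    listen 8080;

    location /api {
        api write=on;     # write=on also permits configuration changes
        allow 127.0.0.1;  # restrict access to the read-write API
        deny  all;
    }
}
```

Metrics formerly exposed by the Status API are then available under paths such as /api/&lt;version&gt;/http/upstreams.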
For advice and assistance in migrating to the NGINX Plus API, please see the transition guide on our blog, or contact our support team.
New operating systems supported:
Older operating systems removed or scheduled for removal:
New Features in Detail
It’s been over 10 years since a major update to TLS. TLS 1.2 was defined in August 2008 as RFC 5246, and the Internet has changed significantly since then. TLS 1.3 was ratified in August 2018 as RFC 8446 to help address many of the issues found in TLS 1.2 and set a more scalable platform for the future.
Over the years numerous security vulnerabilities in TLS 1.2 have been disclosed, such as FREAK, Heartbleed, POODLE, and ROBOT. FREAK, for example, allows an attacker to downgrade a TLS connection to use an export cipher with a 40‑bit key length, which can be brute forced. TLS 1.3 removes export ciphers altogether.
Many of the issues that have arisen with TLS 1.2 and earlier specifications are due to the large number of user‑configurable options. Misuse of options often led to insecure configurations which could be exploited by attackers. TLS 1.3 removes a number of these options:
Notable in the list of removals is RSA key transport. This mode was used primarily because it was faster than Diffie‑Hellman, which required an additional round trip to set up the connection with perfect forward secrecy (PFS). With TLS 1.3 the additional round trip is no longer necessary. With fewer configuration options, there is less information to exchange and the Diffie‑Hellman handshake takes just one round trip to complete (the diagram also shows a GET request following the handshake).
Additionally, TLS 1.3 supports session resumption, which makes connection establishment faster by eliminating the overhead of repeating the TLS handshake when a client returns to a previously visited site. This is also called 0‑RTT (zero round trip time) resumption, because no handshake messages have to go back and forth between client and server for the resumed session. Session resumption is implemented by creating a shared secret during the original session and storing it in a session ticket. When the client returns, it presents the session ticket along with its request, which is encrypted with the shared secret that’s in the ticket.
Using 0‑RTT opens up the risk of a replay attack as illustrated below. In this scenario, the attacker re‑sends a packet that results in a state change, such as a request to transfer money between two bank accounts.
To protect against replay attacks, the only HTTP request type that clients should send in the 0‑RTT data (the data encrypted with the shared secret) is GET. HTTP GET requests are idempotent by definition (RFC 7231), so replaying them has no effect. Loading a page is usually the first thing a client does when revisiting a site, and most page loads start with a GET request, so enabling session resumption speeds up a large proportion of the requests to most websites. You might not want to enable 0‑RTT resumption when deploying NGINX Plus as an API gateway, however, because for API traffic resumed TLS sessions are more likely to contain non‑idempotent request types.
TLS 1.3 itself also protects against replay attacks by including timing information in the session ticket and the client request, which enables the server to determine if the request arrived from the client reasonably soon after the client sent it. An attacker can’t alter the timing information, so if the request took too long to arrive, it was probably replayed.
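A further mitigation, sketched below on the assumption that your backend application is prepared to act on it, is to forward NGINX’s built‑in $ssl_early_data variable to the backend, so that the application can reject non‑idempotent requests that arrived as 0‑RTT early data (for example, with a 425 Too Early response):

```nginx
server {
    listen 443 ssl;

    ssl_certificate     /etc/ssl/my_site_cert.pem;
    ssl_certificate_key /etc/ssl/my_site_key.pem;
    ssl_protocols       TLSv1.2 TLSv1.3;
    ssl_early_data      on;

    location / {
        # $ssl_early_data is "1" when the request arrived in
        # TLS 1.3 early data, and empty otherwise
        proxy_set_header Early-Data $ssl_early_data;
        proxy_pass http://my_backend;
    }
}
```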
TLS 1.3 and 0‑RTT are not enabled by default.

To enable TLS 1.3, include the TLSv1.3 parameter in the ssl_protocols directive. We recommend that you also include TLSv1.2, because not all browsers support TLS 1.3 at the time of writing (see the next section). NGINX Plus uses TLS 1.3 if the client supports it, and falls back to TLS 1.2 otherwise.

To enable 0‑RTT, also set the ssl_early_data directive to on.
This configuration enables both features:
server {
    listen 443 ssl;

    ssl_certificate     /etc/ssl/my_site_cert.pem;
    ssl_certificate_key /etc/ssl/my_site_key.pem;
    ssl_protocols       TLSv1.2 TLSv1.3;
    ssl_early_data      on;  # Enable 0-RTT (TLS 1.3)

    location / {
        proxy_pass http://my_backend;
    }
}
On the server side, TLS 1.3 requires OpenSSL 1.1.1 or later. As of this writing only Alpine Linux 3.9, FreeBSD 12.0, and Ubuntu 18.10 ship with OpenSSL 1.1.1.
On the client side, we recommend Chrome 70 or Firefox 63. They support the final version of TLS 1.3, but don’t enable it by default; follow these instructions to enable TLS 1.3 in the browser. As of this writing, other popular browsers (including Firefox for Android and Safari for both iOS and Mac) do not yet support TLS 1.3. For the latest status information, see Can I Use: TLS 1.3.
Previously, NGINX Plus could enforce limits on request rate in two ways: by rejecting excessive requests immediately, or by queuing excessive requests until they can be processed in compliance with the defined rate limit. With NGINX Plus R17, you can combine both enforcement methods for two‑stage rate limiting, whereby excessive requests are initially delayed, and ultimately rejected if the rate limit is still exceeded.
When applying rate limits, it is essential to consider the typical behavior of legitimate clients. For example, web browsers usually attempt to download several resources simultaneously, so it’s reasonable to see a request for the HTML content, followed quickly by requests for stylesheets, JavaScript code, and images. For this reason we might want to allow a burst of 10 to 20 rapid requests before applying a rate limit.
With NGINX Plus R17 you can now allow a burst to accommodate the typical web browser request pattern, and then delay additional excessive requests up to a point, beyond which additional excessive requests are rejected. Two‑stage rate limiting is enabled with the new delay parameter to the limit_req directive.
To illustrate two‑stage rate limiting, here we configure NGINX Plus to protect a website by imposing a rate limit of 5 requests per second (rate=5r/s). The website typically has 4–6 resources per page, and never more than 12 resources. The configuration allows bursts of up to 12 requests, the first 8 of which are processed without delay. A delay is added after 8 excessive requests to enforce the 5 r/s limit. After 12 excessive requests, any further requests are rejected.
limit_req_zone $binary_remote_addr zone=ip:10m rate=5r/s;
server {
    listen 80;

    location / {
        limit_req zone=ip burst=12 delay=8;
        proxy_pass http://website;
    }
}
The delay parameter defines the point at which, within the burst size, excessive requests are throttled (delayed) to comply with the defined rate limit. With this configuration in place, a client that makes a continuous stream of requests at 8 r/s experiences the following behavior.

The first 8 requests (the value of delay) are proxied by NGINX Plus without delay. The next 4 requests (burst - delay) are delayed so that the defined rate of 5 r/s is not exceeded. The next 3 requests are rejected because the total burst size has been exceeded. Subsequent requests are delayed.
Note that this illustration is a simplified description of the process because it ignores the impact of completed requests on calculating how many excessive requests are being processed. In reality, each completed request opens up a slot in the delay queue for another excessive request to fit within the configured burst size. For more information about the rate‑limiting implementation, see Rate Limiting with NGINX and NGINX Plus on our blog.
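For comparison, here is how the same burst could be handled by the two pre‑existing single‑stage enforcement modes (this sketch assumes the same limit_req_zone definition as the example above):

```nginx
location /delay-only/ {
    # Delay-only: all 12 excessive requests within the burst are queued,
    # so requests are forwarded at exactly the defined 5 r/s rate
    limit_req zone=ip burst=12;
    proxy_pass http://website;
}

location /reject-only/ {
    # nodelay: up to 12 excessive requests are forwarded immediately;
    # anything beyond the burst is rejected outright
    limit_req zone=ip burst=12 nodelay;
    proxy_pass http://website;
}
```

The delay parameter generalizes these two modes: delay=8 behaves like nodelay for the first 8 excessive requests and like the delay-only mode for the rest of the burst.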
When performing JWT validation, NGINX Plus R17 can now be configured to fetch the set of JSON Web Keys (JWKs) from the remote location (usually an identity provider, or IdP) specified by the new auth_jwt_key_request directive. Automatic fetching is particularly convenient when integrating with an IdP that rotates keys frequently.

Most IdPs provide a fixed URL where the current set of keys can be obtained, especially if they support OpenID Connect Discovery; in that case, the URL to the keys is defined by the jwks_uri value.
# Create a directory to cache the keys received from the IdP
proxy_cache_path /var/cache/nginx/jwk levels=1 keys_zone=jwk:1m max_size=10m;

server {
    listen 80; # Use SSL/TLS in production

    location / {
        auth_jwt "closed site";
        auth_jwt_key_request /_jwks_uri; # Keys will be fetched by subrequest
        proxy_pass http://my_backend;
    }

    location = /_jwks_uri {
        internal;
        proxy_cache jwk;                                # Cache responses
        proxy_pass https://idp.example.com/oauth2/keys; # Obtain keys from here
    }
}
Additional caching directives can be used to override the Expires and Cache-Control headers returned by the subrequest. The use of proxy_cache_use_stale enables NGINX Plus to continue using cached keys when the keys URL is unavailable.
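As an illustrative extension of the key‑fetching location above (the 12‑hour cache lifetime is an arbitrary choice for this sketch), those directives might be combined like this:

```nginx
location = /_jwks_uri {
    internal;
    proxy_cache           jwk;
    proxy_cache_valid     200 12h;                 # Cache fetched keys for 12 hours
    proxy_ignore_headers  Expires Cache-Control;   # Override the IdP's cache headers
    proxy_cache_use_stale error timeout updating;  # Serve cached keys if the IdP is unreachable
    proxy_pass            https://idp.example.com/oauth2/keys;
}
```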
Our OpenID Connect reference implementation has been updated to include support for auth_jwt_key_request and automatic configuration for IdPs that support OpenID Connect Discovery.
JWT support has also been extended to support two variants of the Edwards‑curve Digital Signature Algorithm (EdDSA): Ed448 and Ed25519. Note that EdDSA requires OpenSSL 1.1.1 or later, which at the time of writing is available only in Ubuntu 18.10 and FreeBSD 12.0.
Note: OpenID Connect support is exclusive to NGINX Plus.
The NGINX ModSecurity WAF module for NGINX Plus is our supported build of the open source ModSecurity web application firewall (WAF) used by over a million sites. We are actively working with the TrustWave SpiderLabs team to improve ModSecurity performance with NGINX Plus, and are happy to report that the latest release performs two times better than previous releases.
This release also adds support for SecRuleUpdateActionById, ctl:requestBodyProcessor=URLENCODED, and the setenv action.
The new NGINX ModSecurity WAF build is based on ModSecurity 3.0.3; for more details, see the TrustWave SpiderLabs blog.
Note: NGINX ModSecurity WAF is exclusive to NGINX Plus.
The new proxy_socket_keepalive directive controls whether TCP keepalives are enabled between NGINX Plus and the proxied server. TCP keepalives improve performance for protocols (such as WebSocket) where there is a stateful TCP network device between NGINX and the proxied server, with connections that are long‑lived and often idle. Without TCP keepalives, such devices might close idle TCP connections more often, incurring the overhead of re‑establishing them from scratch.
The directive is also available in the FastCGI, gRPC, memcached, SCGI, and uwsgi modules.
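For example, a WebSocket proxy might enable it like this (the upstream group name is illustrative):

```nginx
location /wss/ {
    proxy_socket_keepalive on;  # Send TCP keepalive probes on upstream connections

    # Standard WebSocket proxying settings
    proxy_http_version 1.1;
    proxy_set_header   Upgrade    $http_upgrade;
    proxy_set_header   Connection "upgrade";
    proxy_pass         http://websocket_backend;
}
```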
The new keepalive_timeout directive sets the maximum idle time before an HTTP keepalive connection between NGINX Plus and the proxied server is closed. In addition, the keepalive_requests directive sets the maximum number of requests that can be sent over a keepalive connection (at which point it is closed and a new one created).
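These directives apply in the upstream block, alongside the existing keepalive directive that sizes the idle‑connection pool. A sketch (all server addresses and numeric values are illustrative):

```nginx
upstream my_backend {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;

    keepalive          16;    # Number of idle keepalive connections to preserve
    keepalive_timeout  60s;   # Close a connection idle for more than 60 seconds
    keepalive_requests 1000;  # Recycle a connection after 1000 requests
}

server {
    listen 80;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Connection "";  # Required for HTTP keepalive to upstreams
        proxy_pass http://my_backend;
    }
}
```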
The new proxy_requests directive (Stream module) sets the maximum number of UDP packets sent from NGINX Plus to the proxied server before a new UDP “session” is created. This provides for more even load balancing of UDP packets when a single client sends a large number of UDP packets in a short time (which can happen where there is a downstream proxy, for example).
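For example, a UDP DNS load balancer might create a fresh session for every packet, so that packets from a single busy client are spread across the upstream group (server addresses are illustrative):

```nginx
stream {
    upstream dns_servers {
        server 10.0.0.1:53;
        server 10.0.0.2:53;
    }

    server {
        listen 53 udp;
        proxy_requests  1;  # Start a new upstream "session" after every client packet
        proxy_responses 1;  # Expect one response packet per request
        proxy_pass dns_servers;
    }
}
```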
When using state sharing in a cluster, you can now do server name verification, using SNI to pass the server name when connecting to cluster nodes. This is implemented with the zone_sync_ssl_name and zone_sync_ssl_server_name directives.
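A sketch of a cluster node’s zone_sync server block with name verification enabled (the certificate paths, port, and cluster name are illustrative):

```nginx
stream {
    server {
        listen 9000 ssl;

        zone_sync;
        zone_sync_ssl                     on;
        zone_sync_ssl_certificate         /etc/ssl/node_cert.pem;
        zone_sync_ssl_certificate_key     /etc/ssl/node_key.pem;
        zone_sync_ssl_verify              on;
        zone_sync_ssl_trusted_certificate /etc/ssl/cluster_ca.pem;

        zone_sync_ssl_name        cluster.example.com;  # Name to verify on peer certificates
        zone_sync_ssl_server_name on;                   # Pass the name via SNI when connecting
    }
}
```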
Note: Clustering and the zone_sync module are exclusive to NGINX Plus.
The official NGINX Ingress Controller for Kubernetes has been updated to version 1.4.0. The changelog lists all changes, fixes, and enhancements, and the most notable ones are highlighted on our blog:
Read more about the official NGINX Ingress Controller for Kubernetes on our website, and at the GitHub repo.
NGINX Plus R17 includes a number of enhancements that extend the scope of JavaScript language support:
- arguments objects
- console.time() and console.timeEnd()

Integration with the NGINX Stream module for TCP/UDP applications has been refactored to use various return functions, including a send() method for modifying ingress traffic. Egress traffic is now available through a callback.
The full set of changes can be found at the NGINX JavaScript Module changelog.
NGINX Plus R17 includes support for TLS 1.3, which is more secure and performs better than TLS 1.2. If you’re running NGINX Plus, we strongly encourage you to upgrade to Release 17 and TLS 1.3 as soon as possible. You’ll also pick up a number of additional fixes and improvements, and staying current helps NGINX, Inc. help you when you need to raise a support ticket.
Please carefully review the new features and changes in behavior described in this blog post before proceeding with the upgrade.
If you haven’t tried NGINX Plus or the NGINX ModSecurity WAF, we encourage you to try them out – for security, load balancing, and API gateway, or as a fully supported web server with enhanced monitoring and management APIs. You can get started for free today with a free 30‑day evaluation. See for yourself how NGINX Plus can help you deliver and scale your applications.
This blog post may reference products that are no longer available and/or no longer supported. For the most current information about available F5 NGINX products and solutions, explore our NGINX product family. NGINX is now part of F5. All previous NGINX.com links will redirect to similar NGINX content on F5.com.