We are pleased to announce that NGINX Plus Release 20 (R20) is now available. NGINX Plus is the only all-in-one load balancer, content cache, web server, and API gateway. Based on NGINX Open Source, NGINX Plus includes exclusive enhanced features and award‑winning support.
NGINX Plus R20 builds on the enhancements we made to rate limiting and the key‑value store in R19. New features include:
Obsolete APIs – NGINX Plus R13 (released in August 2017) introduced the all‑new NGINX Plus API for metrics collection and dynamic reconfiguration of upstream groups, replacing the Status and Upstream Conf APIs that previously implemented those functions. As announced at the time, the deprecated APIs continued to be available and supported for a significant period of time, which ended with NGINX Plus R16. If your configuration includes the status or upstream_conf directives, you must replace them with the api directive as part of the upgrade to R20.
For advice and assistance in migrating to the new NGINX Plus API, please see the transition guide on our blog, or contact our support team.
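As a sketch of what this migration looks like in practice (the listen port and access policy below are illustrative choices, not prescribed values), a removed status endpoint is replaced by a single api location:

```nginx
server {
    listen 8080;

    # Removed in R16 and no longer recognized:
    #     location /status { status; }

    # Replacement: the unified NGINX Plus API
    location /api {
        api write=on;       # write=on also enables dynamic reconfiguration
        allow 127.0.0.1;    # Restrict API access (illustrative policy)
        deny all;
    }
}
```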
New operating systems supported –
For more information about supported platforms, see the technical specifications for NGINX Plus and dynamic modules.
NGINX Plus R20 introduces features that make it easier for Operations and DevOps teams to monitor rate‑limiting activity in real time and to determine exactly which clients have exceeded the rate limit.
NGINX Plus has always provided a great deal of flexibility in how you define the types of client request to rate limit and how excessive requests are processed. Each request is handled in one of the following ways:
In previous releases, the error log was the only place where NGINX Plus recorded the fact that requests were delayed or rejected, in entries like this one:
2019/12/02 11:42:12 [error] 57#57: *339 limiting requests, excess: 0.600 by zone "my_limit", client: 172.17.0.1, server: www.example.com, request: "GET / HTTP/1.0", host: "www.example.com:80"
Extracting useful information from the error log can be challenging, because the message format is unstructured and not configurable. Further, if the rate limit is keyed on a message characteristic other than those noted in the error log entry – for example, HTTP headers or identity information – then you must find the corresponding entry in the access log to determine exactly which client exceeded the rate limit. The new features address these issues.
A new endpoint to the NGINX Plus API, /api/version/http/limit_reqs, maintains counters for all outcomes of rate‑limiting decisions made for each zone defined by a limit_req_zone directive. These counters can both be used to monitor rate‑limiting decisions in real time and be integrated with APM solutions to provide dashboards and alerting about rate‑limiting activity.
In the following example, there is one defined zone, my_limit:
$ curl http://localhost/api/6/http/limit_reqs
{
  "my_limit": {
    "delayed": 540,
    "delayed_dry_run": 12162,
    "passed": 804541,
    "rejected": 63,
    "rejected_dry_run": 1209
  }
}
Notice that these counters also include entries for excessive requests processed in dry‑run mode, which was introduced in NGINX Plus R19.
Real‑time metrics are very useful for understanding when NGINX Plus is processing excessive requests, but they do not tell you who is generating them. To address this challenge, NGINX Plus R20 provides a new $limit_req_status variable. It records the request’s rate‑limiting status: DELAYED, DELAYED_DRY_RUN, PASSED, REJECTED, or REJECTED_DRY_RUN.
You can then include other variables in the log format that uniquely identify the client, the URI, and any other relevant information. In the following configuration, a strict rate limit of 10 requests per second is enforced for each client, based on JSON Web Token (JWT) validation. Excessive requests are rejected and logged to a separate log file, which also includes the $jwt_claim_sub variable to capture the sub claim.
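A minimal sketch of such a configuration follows; the zone name, key file path, backend name, and log locations are illustrative choices:

```nginx
# Key the rate limit on the JWT "sub" claim: 10 requests per second per client
limit_req_zone $jwt_claim_sub zone=10rps_per_client:1m rate=10r/s;

log_format jwt 'time=$msec client=$remote_addr sub=$jwt_claim_sub '
               'uri=$uri status=$status limit_req=$limit_req_status';

# Flag rejected requests so they can be logged to a separate file
map $limit_req_status $is_rejected {
    REJECTED 1;
    default  0;
}

server {
    listen 80;

    location / {
        auth_jwt          "rate limited API";   # Validate the JWT
        auth_jwt_key_file /etc/nginx/api.jwk;   # Illustrative key file path
        limit_req         zone=10rps_per_client;
        limit_req_status  429;                  # Reject with 429 (Too Many Requests)

        proxy_pass http://backend;

        access_log /var/log/nginx/access.log;
        access_log /var/log/nginx/reject.log jwt if=$is_rejected;
    }
}
```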
Sample entries in the reject.log file:
time=1575289305.350 client=10.0.0.1 sub=adam uri=/ status=429 limit_req=REJECTED
time=1575289305.395 client=10.0.0.1 sub=adam uri=/ status=429 limit_req=REJECTED
time=1575289305.402 client=10.0.0.1 sub=adam uri=/ status=429 limit_req=REJECTED
In addition to rate limiting for requests, NGINX Plus supports limiting for client connections with the Limit Connections module. You can define how many separate connections a client can open to NGINX Plus (or the number of concurrent requests when using HTTP/2). A client is typically identified by remote IP address (the $remote_addr or $binary_remote_addr variable), but you can use another variable (such as $jwt_claim_sub for the username in a JWT) when the remote IP address is ambiguous or possibly shared by multiple clients.
NGINX Plus R20 extends connection limiting with the same enhancements to rate limiting introduced in NGINX Plus R19 and this release:
A new limit_conn_dry_run directive
A new /api/version/http/limit_conns endpoint in the NGINX Plus API
A new $limit_conn_status variable that captures the connection‑limiting decision for each request (PASSED, REJECTED, or REJECTED_DRY_RUN) and can be used as described in Logging Rate‑Limiting Activity in the Access Log for the $limit_req_status variable

The following configuration applies a low bandwidth limit to clients that open more than ten concurrent connections.
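A sketch of such a configuration, with illustrative zone names and rates: limit_conn_dry_run counts excessive connections without rejecting them, and a map applies a low limit_rate to the offending clients.

```nginx
limit_conn_zone $remote_addr zone=per_client:1m;

# Clients over the (dry-run) limit get throttled instead of rejected
map $limit_conn_status $bandwidth {
    REJECTED_DRY_RUN 50k;   # Low bandwidth for excessive clients
    default          0;     # 0 = unrestricted
}

server {
    listen 80;

    location / {
        limit_conn         per_client 10;
        limit_conn_dry_run on;           # Evaluate the limit without enforcing it
        limit_rate         $bandwidth;
        proxy_pass         http://backend;
    }
}
```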
With the in‑memory key‑value store for NGINX Plus, you can use the NGINX Plus API to dynamically configure traffic management based on attributes of the request. Sample use cases include dynamic denylisting of IP addresses, dynamic bandwidth limiting, and caching of authentication tokens.
NGINX Plus R20 adds support for matching keys against a specified prefix (characters at the beginning of a string), enabling a new set of use cases for the key‑value store. For example, being able to match keys against URI prefixes (base paths) rather than exact URIs means you can create a dynamic routing table to map each base path to an upstream group, replacing or augmenting the static mappings defined by location directives.
To enable prefix matching, include the new type=prefix parameter to the keyval_zone directive. In the following configuration, the keyval directive associates a Cache-Control directive (such as max-age or no-cache) with each URI prefix, and the add_header directive sets the Cache-Control response header to that value as NGINX Plus forwards the request to the upstream server.
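A minimal sketch of that configuration (the zone size, variable name, and backend name are illustrative):

```nginx
# type=prefix matches lookup strings against key prefixes
keyval_zone zone=paths:1m type=prefix;
keyval $uri $cache_header zone=paths;   # Longest matching key prefix wins

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        add_header Cache-Control $cache_header;
    }
}
```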
We use the NGINX Plus API to define the value of the Cache-Control response header for each base path (in this case, paths beginning with /images/ or /reports/) in the key‑value store:
$ curl -i -X POST --data '{"/images/":"max-age=3600", "/reports/":"no-cache"}' http://localhost:8080/api/6/http/keyvals/paths
HTTP/1.1 201 Created
Server: nginx/1.17.6
Date: Mon, 02 Dec 2019 12:36:31 GMT
Content-Length: 0
Location: http://localhost:8080/api/6/http/keyvals/paths/
Connection: keep-alive
When we make a request with a base path that exists in the key‑value store, the response includes the Cache-Control header that we set:
$ curl -I http://localhost/images/sample.jpg
HTTP/1.1 200 OK
Server: nginx/1.17.6
Date: Mon, 02 Dec 2019 12:27:39 GMT
Content-Type: image/jpeg
Content-Length: 120847
Connection: keep-alive
Cache-Control: max-age=3600
Because the key‑value store can be synchronized across a cluster of NGINX Plus instances, you need to make each API call to just one instance. This makes the process of automating changes to cluster configuration much simpler than coordinating changes to configuration files.
When using NGINX Plus to perform load balancing across a number of upstream servers, you can define the members of the upstream group by specifying a hostname that resolves to multiple IP addresses. This is particularly useful in dynamic or auto‑scaling environments where the members of the upstream group can change frequently.
To complete the configuration of these dynamic upstream groups, you also include the resolver directive to designate the DNS server or servers that NGINX Plus queries for the IP addresses associated with the hostname. In previous releases, a resolver directive applied to all of the upstream groups referenced by proxy_pass directives in the context (http, server, or location) containing the directive. With NGINX Plus R20 you can now specify a different DNS resolver for each upstream group.
The new flexibility is especially useful in a DevOps environment – it allows application teams to own more of their application delivery infrastructure, including the DNS servers and service registries, instead of relying on centralized, shared services.
You can still define a global resolver in the top‑level http context, and in server and location blocks. If an upstream block does not include a resolver directive, it inherits the resolver setting of the context or block that includes a proxy_pass directive referencing the upstream group, as in previous releases.
In the following example, the website upstream group uses the global resolver whereas mobile_app uses its own resolver:
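A sketch of that configuration (server names, resolver addresses, and zone sizes are illustrative):

```nginx
resolver 192.0.2.53 status_zone=global_resolver;        # Global resolver

upstream website {
    zone website 64k;                                   # Shared memory required for 'resolve'
    server backend.example.com resolve;                 # Uses the global resolver
}

upstream mobile_app {
    zone mobile_app 64k;
    resolver 192.0.2.153 status_zone=mobile_resolver;   # Upstream-specific resolver (new in R20)
    server mobile-backend.example.com resolve;
}

server {
    listen 80;

    location /        { proxy_pass http://website; }
    location /mobile/ { proxy_pass http://mobile_app; }
}
```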
Notice that we are including the status_zone parameter (introduced in NGINX Plus R19) on both resolver directives, to collect errors and other metrics about resolver activity.
The PROXY Protocol is a mechanism by which a Layer 4 proxy can convey information about the original client connection to the next proxy or load balancer that handles the client request. This is particularly important for use cases where you need to know the client’s IP address – for example, to limit the number of connections made by each client (with the limit_conn directive) or simply to log the real client’s IP address. As in previous releases, the $proxy_protocol_addr variable captures this information.
When there are multiple Layer 4 proxies deployed in an application environment, it is sometimes important for NGINX Plus also to know the IP address and port of the proxy server to which the client originally connected. The PROXY Protocol metadata includes this information, and NGINX Plus R20 adds the $proxy_protocol_server_addr and $proxy_protocol_server_port variables to capture it.
The variables are available for both the HTTP and Stream (TCP/UDP) modules, and support both version 1 and 2 of the PROXY Protocol. Note that before using the variables you must explicitly enable NGINX Plus to handle the PROXY Protocol, by adding the proxy_protocol parameter to the listen directive.
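A sketch of an HTTP configuration that accepts the PROXY Protocol and logs the server‑side metadata (the log format name and file paths are illustrative):

```nginx
log_format pp 'client=$proxy_protocol_addr:$proxy_protocol_port '
              'original_dest=$proxy_protocol_server_addr:$proxy_protocol_server_port';

server {
    listen 80 proxy_protocol;            # Required before the variables are populated

    access_log /var/log/nginx/pp.log pp;

    location / {
        proxy_pass http://backend;
    }
}
```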
NGINX Plus R18 P1 addressed three security vulnerabilities in HTTP/2 which were announced in August. NGINX Plus R20 includes additional changes that improve the overall security of our HTTP/2 implementation:
Support for the worker_shutdown_timeout directive for long‑lived HTTP/2 connections
Support for the proxy_request_buffering directive for HTTP/2 clients

If you are using HTTP/2 in production with NGINX Plus R18 (unpatched) or earlier, we recommend upgrading to NGINX Plus R20 at your earliest convenience.
The NGINX JavaScript module (njs) has been updated to version 0.3.7, adding support for more JavaScript objects:
Function() constructor
Object.assign() method
Number methods: toFixed(), toPrecision(), and toExponential()
Array.prototype.copyWithin() method
console.time() method

To learn more about njs, check out the project home page and our blog.
If you’re running NGINX Plus, we strongly encourage you to upgrade to NGINX Plus R20 as soon as possible. You’ll also pick up a number of additional fixes and improvements, and running the latest release makes it easier for NGINX to help you when you need to raise a support ticket.
Please carefully review the new features and changes in behavior described in this blog post before proceeding with the upgrade.
If you haven’t tried NGINX Plus, we encourage you to try it out – for security, load balancing, and API gateway, or as a fully supported web server with enhanced monitoring and management APIs. You can get started today with a free 30-day trial. See for yourself how NGINX Plus can help you deliver and scale your applications.
This blog post may reference products that are no longer available and/or no longer supported. For the most current information about available F5 NGINX products and solutions, explore our NGINX product family. NGINX is now part of F5. All previous NGINX.com links will redirect to similar NGINX content on F5.com.