We’re pleased to announce that NGINX Plus Release 13 (R13) is now available as a free upgrade to all NGINX Plus subscribers. NGINX Plus is a combined web server, load balancer, and content cache built on top of NGINX Open Source. NGINX Plus R13 includes new features focused on dynamic deployments, enhanced debugging capabilities, and improved security and performance.
NGINX Plus R13 introduces support for an njs interactive shell, which provides a console that shows all of the built‑in JavaScript objects. These objects can be explored further to expose the methods and primitives available for each one. Further enhancements include improvements to the sticky learn method for session persistence, support for HTTP trailers, and a new third‑party dynamic module for HTTP substitutions.
sticky_cookie_insert directive removed – The sticky_cookie_insert directive has been removed in NGINX Plus R13, having been deprecated in NGINX Plus R2.

Directive in ModSecurity module no longer supported – The SecRequestBodyInMemoryLimit directive for ModSecurity is no longer supported. Customers may safely remove this directive, because the ModSecurity module obeys the request‑body handling defined by the NGINX configuration.
[Editor – The NGINX ModSecurity WAF module for NGINX Plus officially went End-of-Sale as of April 1, 2022 and is transitioning to End-of-Life effective March 31, 2024. For more details, see F5 NGINX ModSecurity WAF Is Transitioning to End-of-Life on our blog.]
NGINX Plus R13 includes a new REST API unified under a single endpoint. Previous versions of NGINX Plus included separate Upstream Conf and Extended Status APIs. The new API combines the functionality of both, and also supports the new Key‑Value Store module in a variety of use cases for dynamic configuration (discussed in the Key‑Value Store section below).
To enable the NGINX Plus API, include the new api directive in a location block:
server {
    listen 80;

    location /api {
        api write=on;
        # directives that allow access only by authorized users
    }
}
By default, the NGINX Plus API provides read‑only access to data. Add the write=on parameter to the api directive to enable read/write access so that changes can be made to upstream servers and the new Key‑Value Store module. We strongly recommend restricting access to the API to authorized users only, especially when read/write mode is enabled.
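One way to satisfy that recommendation is to combine IP‑based access rules with HTTP Basic Authentication. The following is a minimal sketch only; the management subnet and the /etc/nginx/api_users.htpasswd credentials file are hypothetical:

location /api {
    api write=on;

    # Allow only trusted addresses (hypothetical management subnet)
    allow 127.0.0.1;
    allow 10.0.0.0/24;
    deny  all;

    # Additionally require credentials (hypothetical password file)
    auth_basic           "NGINX Plus API";
    auth_basic_user_file /etc/nginx/api_users.htpasswd;
}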
To see all the types of information available from the API endpoint, run this command:
$ curl http://localhost:80/api/1/
["nginx","processes","connections","ssl","slabs","http","stream"]
To display details about a specific type of information, append the appropriate string to the request URI:
connections – Display metrics for total connections
http – Display metrics for HTTP traffic and modify HTTP upstream configuration

There are also two “subtypes” under http:

http/server_zones – Display information about HTTP virtual servers
http/upstreams – Display information about HTTP upstream server groups and modify their configuration

nginx – Display general information about NGINX
processes – Display information about NGINX worker processes
slabs – Display information on shared memory allocated by NGINX
ssl – Display metrics for SSL/TLS clients in real time
stream – Display metrics for TCP/UDP traffic and modify configuration of TCP/UDP upstream server groups (at stream/upstreams)

NGINX Plus reports more than 40 exclusive metrics on top of what’s available in NGINX Open Source, and you can now use the NGINX Plus API to access the metrics that are important to you.
As an example, append connections to the URI to output a snapshot of connection status, which includes the number of accepted, active, dropped, and idle client connections.

$ curl http://localhost:80/api/1/connections
{"accepted":3,"dropped":0,"active":1,"idle":0}
Another example: append ssl to the URI to output a snapshot of SSL client statistics in real time.

$ curl http://localhost:80/api/1/ssl
{"handshakes":0,"handshakes_failed":0,"session_reuses":0}
In NGINX Plus R12 and earlier, you could use the upstream_conf directive to enable dynamic configuration of existing upstream server groups without reloading NGINX Plus. This functionality is now incorporated into the NGINX Plus API.
This NGINX Plus configuration snippet defines two servers in the upstream group called backend, and enables the NGINX Plus API at /api:
upstream backend {
    zone backends 64k;
    server 10.10.10.2;
    server 10.10.10.4;
}
server {
    listen 80;
    server_name www.example.org;

    location /api {
        api write=on;
    }
}
To add a server to the backend group, include the -d option in a curl request to /api/1/http/upstreams/backend/servers, with JSON text that defines the new server’s IP address (here, 10.10.10.6). The -i option means HTTP headers are included in the response. (You can omit -X POST because that is the default method with -d, but we include it for consistency with other methods.)
$ curl -iX POST -d '{"server":"10.10.10.6"}' http://localhost/api/1/http/upstreams/backend/servers
HTTP/1.1 201 Created
...
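Other HTTP methods can be used against the same endpoint in a similar way: a GET request lists the servers in the group along with the numeric IDs that NGINX Plus assigns to them, and an individual server can then be modified or removed by ID. The following is a rough sketch; the ID 2 is assumed for illustration:

# List the servers in the 'backend' group; each entry includes its assigned ID
$ curl http://localhost/api/1/http/upstreams/backend/servers

# Mark the server with ID 2 as down (assuming that ID appeared in the listing above)
$ curl -iX PATCH -d '{"down":true}' http://localhost/api/1/http/upstreams/backend/servers/2

# Remove the server with ID 2 from the group
$ curl -iX DELETE http://localhost/api/1/http/upstreams/backend/servers/2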
For details about all options for configuring upstream groups, see the reference documentation for the NGINX Plus API module.
NGINX Plus R13 introduces a new Key-Value Store module. You can use the NGINX Plus API to create, modify, and remove key‑value pairs on the fly in one or more “keyval” shared memory zones. The value of each key‑value pair can then be evaluated as a variable for use by other NGINX Plus features.
To add, modify, read, and delete entries in the key‑value store, use the POST, PATCH, GET, and DELETE HTTP methods respectively. The key‑value store provides a wealth of dynamic configuration solutions to enable real‑time integration with external systems.
A sample use case is managing vanity URLs for a website. The following configuration snippet uses the Key‑Value Store module to map incoming URIs to their redirect targets.
keyval_zone zone=redirects:1M state=state/redirects.json;  # Save key-value pairs to file
keyval $uri $target zone=redirects;                        # $uri is the key, $target is the value
server {
    listen 80;

    location /api {
        api write=on;  # Enable the NGINX Plus API (secure this location in production environments)
    }

    if ($target) {           # True when $uri exists in the 'redirects' keyval zone
        return 301 $target;  # Redirect client to the matching value for the $uri
    }

    location / {
        proxy_pass http://backend;
    }
}
In the keyval directive, the key is set to the URI of the incoming HTTP request. If $uri is a key in the key‑value store, the value associated with that key is assigned to a new variable called $target. Then, if $target exists, NGINX Plus redirects the client to the value stored for $uri.
To populate the key‑value store with an initial vanity URL, we send the data, encoded as JSON, to the URI for the NGINX Plus API.
$ curl -iX POST -d '{"/conf":"/conf2017"}' http://localhost/api/1/http/keyvals/redirects
HTTP/1.1 201 Created
...
Now clients that request /conf are redirected to /conf2017.
$ curl -i http://localhost/conf
HTTP/1.1 301 Moved Permanently
Location: http://localhost/conf2017
You can use the PATCH method to add more vanity URL redirects to the key‑value store and modify existing entries dynamically.
$ curl -iX PATCH -d '{"/conf":"/conf2018"}' http://localhost/api/1/http/keyvals/redirects
HTTP/1.1 204 No Content
...
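The remaining methods follow the same pattern. As a rough sketch of what to expect (exact response bodies may differ), a GET request returns the current contents of the zone, a PATCH with a null value removes a single key, and a DELETE request empties the zone:

# Read all entries currently stored in the 'redirects' zone
$ curl http://localhost/api/1/http/keyvals/redirects
{"/conf":"/conf2018"}

# Remove a single entry by setting its value to null
$ curl -iX PATCH -d '{"/conf":null}' http://localhost/api/1/http/keyvals/redirects

# Remove all entries from the zone
$ curl -iX DELETE http://localhost/api/1/http/keyvals/redirects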
You can configure multiple separate key‑value stores by defining a different shared memory zone for each one with the keyval_zone directive and referencing it in a corresponding keyval directive, as in the sketch below. For more information, see the reference documentation for the Key-Value Store module.
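For instance, a minimal sketch with two independent zones might look like the following; the client_flags zone, its variables, and its state file are hypothetical:

# Zone for vanity-URL redirects, keyed by request URI (as in the example above)
keyval_zone zone=redirects:1M state=state/redirects.json;
keyval $uri $target zone=redirects;

# A separate zone keyed by client IP address (hypothetical example)
keyval_zone zone=client_flags:1M state=state/client_flags.json;
keyval $remote_addr $client_flag zone=client_flags;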
The new NGINX Plus API comes with a Swagger specification that can be used to explore the API and understand the capabilities of each resource. The Swagger documentation is bundled with NGINX Plus and can be accessed at http://nginx-host/swagger-ui/.
The interactive part of the Swagger UI requires the NGINX Plus API to be enabled, which can be achieved by uncommenting the /api/ location block in the conf.d/default.conf file.
# enable /api/ location with appropriate access control in order
# to make use of NGINX Plus API
#
#location /api/ {
#    api write=on;
#    allow 127.0.0.1;
#    deny all;
#}
You can also explore the NGINX Plus API documentation at https://demo.nginx.com/swagger-ui/.
Note: The entire NGINX Plus API, including the extended status metrics, upstream configuration, and the new Key‑Value Store module, is exclusive to NGINX Plus.
With NGINX Plus R13, you can enable HTTP request mirroring. With this feature, HTTP requests that are proxied to an upstream group are cloned and also sent to a different destination. The original request is processed as usual, but any responses to the cloned request are ignored. There are many use cases for request mirroring, such as sending copies of production traffic to a test environment.

Enabling request mirroring has negligible impact on overall system throughput and performance. The following configuration snippet shows how to use the new mirror directive to clone requests and pass them to a separate upstream server.
location / {
    mirror /mirror;
    proxy_pass http://backend;
}

location /mirror {
    internal;
    proxy_pass http://test_backend$request_uri;
}
Requests are proxied to the backend upstream group for regular processing. They are also cloned and proxied to a separate upstream group named test_backend, retaining the URI from the original request.
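For completeness, the test_backend group referenced above also needs to be defined. A minimal sketch, with hypothetical server addresses:

# Upstream group that receives the mirrored copy of each request
upstream test_backend {
    zone test_backend 64k;
    server 10.10.20.2;
    server 10.10.20.4;
}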
Note: Request mirroring was initially released in NGINX Open Source 1.13.4.
Since becoming generally available in NGINX Plus R12, the NGINX JavaScript module (formerly called nginScript) continues to be extended with core JavaScript language support. With this release, we introduce support for hexadecimal numbers (such as 0x7b) and scientific notation (such as 512e10). Primitive methods for the Object class have also been implemented.
NGINX JavaScript now also offers an interactive shell, invoked with the njs command, to assist with the development of NGINX JavaScript code.
The following shell snippet shows how to enter the NGINX JavaScript interactive shell, define an expression that produces a random date up to 30 seconds in the future, and calculate the sum of two numbers.
$ njs
interactive njscript

>> Date.now() + Math.round(Math.random()*30*1000);
1500976350968
>> 0x7b + 512e10;
5120000000123
>>
To learn more, see the introduction to NGINX JavaScript on our blog.
Note: NGINX JavaScript is available for both NGINX Open Source and NGINX Plus.
NGINX 1.11.5 and NGINX Plus R11 introduced support for compiling dynamic modules independently of NGINX itself. This allows users of NGINX and NGINX Plus to use the official builds from NGINX, Inc. repositories and load in only the dynamic modules they need.
With NGINX Plus R13, we provide a build tool for compiling and packaging a dynamic module as an installable package that preserves the dependency between the module and the base NGINX version it was built against.
For complete details about the build tool, see Creating Installable Packages for Dynamic Modules on our blog.
Note: The build tool is available for both NGINX Open Source and NGINX Plus.
Session persistence is a very useful feature of NGINX Plus load balancing that enables you to send all requests from a particular client to one server. There are multiple ways to establish session persistence; with the “sticky learn” method, NGINX Plus looks for the presence of a specific cookie and pins the client to the same server whenever that cookie is included in a request.
With NGINX Plus R13, you can now establish a sticky session as soon as the upstream server has sent the headers of its response, instead of waiting until the complete response payload has arrived. NGINX Plus can thus establish session persistence at the earliest opportunity. To enable this behavior, include the new header parameter in the sticky learn directive:
upstream backends {
    zone backends 64k;
    server 10.10.10.2;
    server 10.10.10.4;

    sticky learn create=$upstream_cookie_sessionid
                 lookup=$cookie_sessionid
                 zone=client_sessions:1m
                 header;
}
The header parameter is particularly useful if an application is prone to errors and you want the client to resend failed requests to the same upstream server.
Note: Sticky‑learn session persistence is exclusive to NGINX Plus.
NGINX Plus R13 introduces the following additional features:
add_trailer directive – Enables arbitrary trailers to be added to the end of HTTP responses. The Trailer response header allows the sender to include additional fields at the end of chunked messages to supply metadata that might be dynamically generated while the message body is sent, such as a message integrity check or a digital signature.

worker_shutdown_timeout directive – Sets a timeout that allows graceful shutdown of worker processes to complete more quickly. When the timeout expires after a shutdown or restart signal is received, NGINX Plus attempts to close all open client connections.

If you’re running NGINX Plus, we strongly encourage you to upgrade to Release 13 as soon as possible. You’ll pick up a number of fixes and improvements, and it will help us to help you if you need to raise a support ticket. Installation and upgrade instructions can be found at the customer portal.
Please carefully review the new features and changes in behavior described in this blog post before proceeding with the upgrade.
If you’ve not tried NGINX Plus, we encourage you to try it out for web acceleration, load balancing, and application delivery, or as a fully supported web server with enhanced monitoring and management APIs. You can get started for free today with a 30‑day evaluation and see for yourself how NGINX Plus can help you deliver and scale out your applications.
"This blog post may reference products that are no longer available and/or no longer supported. For the most current information about available F5 NGINX products and solutions, explore our NGINX product family. NGINX is now part of F5. All previous NGINX.com links will redirect to similar NGINX content on F5.com."