The Greatest Hits of 2021 on the NGINX Blog

NGINX | December 22, 2021

As the turmoil caused by the COVID‑19 pandemic continued throughout 2021, we at NGINX tried to rise to the challenge and keep driving forward to make positive changes for our community, partners, and customers.

Approaching the end of the year, we take this opportunity to look back at a selection of the biggest and most popular blog articles we published, as voted by you, our community. Read on to see what major events we covered, and to catch up on interesting news and topics you may have missed!

How to Choose a Service Mesh

As your Kubernetes deployment matures, it can be a challenge to know when a service mesh will yield benefits and not just additional complexity. And once you know you need a service mesh, choosing the right one isn’t always straightforward either. In this post Jenn Gile provides a six‑point checklist for determining whether you need a service mesh, and a conversation guide for facilitating the strategic decision‑making session that we recommend you have with your team and stakeholders about which service mesh is right for you.

NGINX and HAProxy: Testing User Experience in the Cloud

Many performance benchmarks measure peak throughput or requests per second (RPS), but those metrics rarely tell the whole performance story at real‑world sites, where what matters most is delivering consistent, low‑latency performance to all of your users, even under high load. In comparing NGINX and HAProxy running as reverse proxies on Amazon Elastic Compute Cloud (EC2), Amir Rawdat set out to do two things:

  1. Determine what level of load each proxy comfortably handles
  2. Collect the latency percentile distribution, which we find is the metric most directly correlated with user experience

Get the results and all the testing details.
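
To make the second goal concrete, here's a rough sketch (not taken from the benchmark itself) of how a latency percentile distribution is read; the latencies are simulated stand‑ins, mostly fast requests plus a small slow tail.

```python
# Sketch only: simulated latencies, not data from the NGINX/HAProxy tests.
import random

random.seed(1)
latencies  = [random.gauss(20, 3) for _ in range(9_900)]        # typical requests, ~20 ms
latencies += [random.uniform(100, 1_000) for _ in range(100)]   # a small slow tail
latencies.sort()

def percentile(sorted_ms, p):
    """Nearest-rank percentile of an already-sorted list of latencies."""
    rank = max(0, round(p / 100 * len(sorted_ms)) - 1)
    return sorted_ms[rank]

print(f"mean  : {sum(latencies) / len(latencies):7.1f} ms")
for p in (50, 90, 99, 99.9):
    print(f"p{p:<5}: {percentile(latencies, p):7.1f} ms")
```

The mean looks healthy, but p99 and p99.9 expose the slow requests that real users actually feel, which is exactly why we report the percentile distribution rather than a single throughput number.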

Introducing NGINX Instance Manager

NGINX really can be thought of as a Swiss Army Knife™ that accelerates your IT infrastructure and application modernization efforts. This wide‑ranging, versatile functionality can, however, lead to many NGINX instances spread across an organization, sometimes with NGINX Open Source and NGINX Plus managed by different groups. How do you track all those instances? How do you ensure they have up-to-date configuration and security settings? That’s where F5 NGINX Instance Manager comes in.

[Editor – F5 NGINX Instance Manager is now Instance Manager, part of F5 NGINX Management Suite.]

Ideal for DevOps users who are already NGINX experts with deep configuration experience, NGINX Instance Manager simplifies NGINX management, configuration, and visibility. In this post, Karthik Krishnaswamy explains how NGINX Instance Manager can benefit you.

What Are Namespaces and cgroups, and How Do They Work?

NGINX Unit supports both namespaces and cgroups, which enable process isolation. In this post, Scott van Kalken looks at these two major Linux technologies, which also underlie containers, explaining how they work and how to create them.
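
To give you a taste of what the post covers, here's a minimal sketch of our own (not Scott's code) that touches both primitives from Python on Linux: it creates a cgroup v2 child group with a memory cap, moves the current process into it, then unshares into a new UTS namespace so a hostname change stays private to the process. It assumes root privileges, a cgroup v2 mount at /sys/fs/cgroup, and the memory controller enabled for child groups.

```python
# Sketch only: requires Linux, root, and cgroup v2 at /sys/fs/cgroup with the
# memory controller available to child groups (assumptions, not guarantees).
import ctypes
import os

CLONE_NEWUTS = 0x04000000                  # flag value from <sched.h>
libc = ctypes.CDLL(None, use_errno=True)   # the C library already loaded in this process

# cgroup: cap this process's memory at 256 MiB
cg = "/sys/fs/cgroup/demo"                 # hypothetical group name
os.makedirs(cg, exist_ok=True)
with open(os.path.join(cg, "memory.max"), "w") as f:
    f.write(str(256 * 1024 * 1024))
with open(os.path.join(cg, "cgroup.procs"), "w") as f:
    f.write(str(os.getpid()))              # move ourselves into the group

# namespace: isolate the hostname in a new UTS namespace
if libc.unshare(CLONE_NEWUTS) != 0:
    raise OSError(ctypes.get_errno(), "unshare(CLONE_NEWUTS) failed")
name = b"isolated-demo"
libc.sethostname(name, len(name))
print("hostname inside the namespace:", os.uname().nodename)  # the host's own name is unchanged
```

Containers, and NGINX Unit's application isolation, are built from these same building blocks, combined with additional namespace types such as mount, PID, and network.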

Comparing NGINX Performance in Bare Metal and Virtual Environments

While the COVID‑19 pandemic drove explosive growth in public cloud adoption, enterprises are also embracing hybrid cloud, running workloads both in public clouds and on premises. To help you determine the optimal and most affordable solution that satisfies your performance and scaling needs, we provide a sizing guide that compares NGINX performance in the two environments.

In this post, Amir Rawdat describes how we tested NGINX to arrive at the values published in the sizing guide. Because many of our customers also deploy apps in Kubernetes, we additionally step through our testing of NGINX Ingress Controller on the Rancher Kubernetes Engine (RKE) platform and discuss how the results compare to NGINX running in traditional on‑premises architectures.

How to Simplify Kubernetes Ingress and Egress Traffic Management

A service mesh can actually make a Kubernetes environment harder to manage when it must be configured separately from the Ingress controller. You can avoid that problem – and save time – by integrating the NGINX Plus-based F5 NGINX Ingress Controller with F5 NGINX Service Mesh to control both ingress and egress mTLS traffic. In this post, Kate Osborn walks through the complete steps from the companion video demo.

Easy and Robust Single Sign-On with OpenID Connect and NGINX Ingress Controller

With the release of NGINX Ingress Controller 1.10.0, we were happy to announce a major enhancement: a technology preview of OpenID Connect (OIDC) authentication. OIDC is the identity layer built on top of the OAuth 2.0 framework, providing an authentication and single sign‑on (SSO) solution for modern apps. Our OIDC policy is a full‑fledged SSO solution that enables users to securely authenticate with multiple applications and Kubernetes services. Significantly, it enables apps to use an external identity provider (IdP) to authenticate users, freeing the apps from having to handle usernames or passwords. Amir Rawdat explains it all for you in this popular post.
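
To illustrate what that buys you, here's a hedged sketch, separate from the NGINX OIDC policy itself, of what token‑based authentication looks like from an application's point of view: rather than checking a password, the app validates an ID token signed by the IdP against the IdP's published JWKS keys. It uses the third‑party PyJWT library, and the issuer URL and client ID are placeholder assumptions.

```python
# Illustrative sketch using PyJWT (pip install "pyjwt[crypto]"); the issuer,
# client ID, and token are placeholders, not values from the post.
import jwt
from jwt import PyJWKClient

ISSUER    = "https://idp.example.com"           # hypothetical IdP
CLIENT_ID = "my-app"                            # hypothetical OIDC client ID
JWKS_URI  = f"{ISSUER}/.well-known/jwks.json"   # where the IdP publishes its signing keys

def verify_id_token(id_token: str) -> dict:
    """Validate an IdP-issued ID token; no usernames or passwords involved."""
    signing_key = PyJWKClient(JWKS_URI).get_signing_key_from_jwt(id_token)
    return jwt.decode(
        id_token,
        signing_key.key,
        algorithms=["RS256"],
        audience=CLIENT_ID,
        issuer=ISSUER,
    )

# claims = verify_id_token(token_from_authorization_header)
# print(claims["sub"], claims.get("email"))
```

With the OIDC policy in place, NGINX Ingress Controller handles this kind of validation at the edge, so the services behind it don't need even this much code.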

Deploying NGINX Ingress Controller on Amazon EKS: How We Tested

Last, but by no means least, in our 2021 blog round‑up: earlier this year we updated our NGINX Ingress Controller solution brief with sizing guidelines for Amazon Elastic Kubernetes Service (EKS). The brief outlines the performance you can expect from NGINX Ingress Controller running on various instance types in Amazon EKS, along with the estimated monthly total cost of ownership (TCO). In this post, Amir Rawdat returns to explain how we came up with those numbers, including all the information you need to do similar testing of your own.

Give NGINX a Try

Free 30-day trials are available for all of the commercial solutions mentioned in this post (and a couple more!).

Or get started with our free and open source offerings.



