An API gateway accepts API requests from a client, processes them based on defined policies, directs them to the appropriate services, and combines the responses for a simplified user experience. Typically, it handles a request by invoking multiple microservices and aggregating the results. It can also translate between protocols in legacy deployments.
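As a minimal sketch of how such routing and TLS termination might look, the following NGINX configuration routes API calls by path prefix to different backend services. All names, addresses, and certificate paths here are illustrative assumptions, not a definitive deployment:

```nginx
# Hypothetical API gateway sketch: route API calls by path to backend services.
# Upstream names and addresses are illustrative assumptions.
upstream inventory_service { server 10.0.0.11:8080; }
upstream pricing_service   { server 10.0.0.12:8080; }

server {
    listen 443 ssl;
    server_name api.example.com;

    # TLS termination at the gateway (certificate paths are placeholders)
    ssl_certificate     /etc/nginx/ssl/api.crt;
    ssl_certificate_key /etc/nginx/ssl/api.key;

    # Route each API prefix to the appropriate microservice
    location /api/inventory/ { proxy_pass http://inventory_service; }
    location /api/pricing/   { proxy_pass http://pricing_service; }
}
```

In practice a gateway configuration would also attach authentication, rate-limiting, and other policies to these locations; this fragment shows only the entry-point and routing role described above.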
API gateways commonly implement capabilities that include:
For additional app- and API-level security, API gateways can be augmented with web application firewall (WAF) and denial of service (DoS) protection.
Deploying an API gateway for app delivery can help:
For microservices‑based applications, an API gateway acts as a single point of entry into the system. It sits in front of the microservices and simplifies both the client implementations and the microservices app by decoupling the complexity of an app from its clients.
In a microservices architecture, the API gateway is responsible for request routing, composition, and policy enforcement. It handles some requests by simply routing them to the appropriate backend service, and handles others by invoking multiple backend services and aggregating the results.
An API gateway might provide other capabilities for microservices, such as authentication, authorization, monitoring, load balancing, and response handling. By offloading these non-functional requirements to the infrastructure layer, it helps developers focus on core business logic and speeds up app releases.
Learn more about Building Microservices Using an API Gateway on our blog.
Containers are the most efficient way to run microservices, and Kubernetes is the de facto standard for deploying and managing containerized applications and workloads.
Depending on the system architecture and app delivery requirements, an API gateway can be deployed in front of the Kubernetes cluster as a load balancer (multi-cluster level), at its edge as an Ingress controller (cluster-level), or within it as a service mesh (service-level).
For API gateway deployments at the edge of and within the Kubernetes cluster, it’s best practice to use a Kubernetes-native tool as the API gateway. Such tools are tightly integrated with the Kubernetes API, support YAML configuration, and can be managed with the standard Kubernetes CLI (kubectl); examples include NGINX Ingress Controller and NGINX Service Mesh.
Learn more about API gateways and Kubernetes in API Gateway vs. Ingress Controller vs. Service Mesh on our blog.
Ingress gateways and Ingress controllers are tools that implement the Ingress object, a part of the Kubernetes Ingress API, to expose applications running in Kubernetes to external clients. They manage communications between users and applications (user-to-service or north-south connectivity). However, the Ingress object by itself is very limited in its capabilities. For example, it does not support defining the security policies attached to it. As a result, many vendors create custom resource definitions (CRDs) to expand their Ingress controller’s capabilities and satisfy evolving customer needs and requirements, including use of the Ingress controller as an API gateway.
For example, NGINX Ingress Controller can be used as a full-featured API gateway at the edge of a Kubernetes cluster with its VirtualServer and VirtualServerRoute, TransportServer, and Policy custom resources.
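As a sketch of what this can look like, the following VirtualServer resource routes two API path prefixes to different Kubernetes services. The host, service names, and ports are illustrative assumptions:

```yaml
# Hypothetical VirtualServer resource for NGINX Ingress Controller
# (host, service names, and paths are illustrative assumptions).
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: api-gateway
spec:
  host: api.example.com
  upstreams:
    - name: inventory
      service: inventory-svc
      port: 8080
    - name: pricing
      service: pricing-svc
      port: 8080
  routes:
    - path: /api/inventory
      action:
        pass: inventory
    - path: /api/pricing
      action:
        pass: pricing
```

Unlike the standard Ingress object, this custom resource can also reference Policy objects for capabilities such as rate limiting and access control.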
While their names are similar, an API gateway is not the same as the Kubernetes Gateway API. The Kubernetes Gateway API is an open source project managed by the Kubernetes community to improve and standardize service networking in Kubernetes. The Gateway API specification evolved from the Kubernetes Ingress API to solve various challenges around deploying Ingress resources to expose Kubernetes apps in production, including the ability to define fine-grained policies for request processing and delegate control over configuration across multiple teams and roles.
Tools built on the Gateway API specification, such as NGINX Kubernetes Gateway, can be used as API gateways for use cases that include routing requests to specific microservices, implementing traffic policies, and enabling canary and blue‑green deployments.
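For example, a canary deployment can be expressed with a Gateway API HTTPRoute that splits traffic between two versions of a service by weight. The gateway name, hostnames, service names, and weights below are illustrative assumptions:

```yaml
# Hypothetical Gateway API HTTPRoute splitting traffic for a canary release
# (gateway, service names, and weights are illustrative assumptions).
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: checkout-canary
spec:
  parentRefs:
    - name: my-gateway
  hostnames:
    - api.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /checkout
      backendRefs:
        - name: checkout-v1   # stable version receives most traffic
          port: 8080
          weight: 90
        - name: checkout-v2   # canary version receives a small share
          port: 8080
          weight: 10
```

The separation of Gateway and HTTPRoute resources is what enables the role delegation mentioned above: a platform team can own the Gateway while application teams own their routes.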
Watch this quick video where NGINX’s Jenn Gile explains the difference between an API gateway and the Kubernetes Gateway API.
A service mesh is an infrastructure layer that controls communications across services in a Kubernetes cluster (service-to-service or east-west connectivity). The service mesh delivers core capabilities for services running in Kubernetes, including load balancing, authentication, authorization, access control, encryption, observability, and advanced connectivity-management patterns (circuit breaker, A/B testing, and blue-green and canary deployments), ensuring that communication is fast, reliable, and secure.
Deployed closer to the apps and services, a service mesh can be used as a lightweight, yet comprehensive, distributed API gateway for service-to-service communications in Kubernetes.
Learn more about service mesh in How to Choose a Service Mesh on our blog.
The terms API gateway and API management are often – but incorrectly – used to describe the same functionality.
An API gateway is a data-plane entry point for API calls that represent client requests to target applications and services. It typically performs request processing based on defined policies, including authentication, authorization, access control, SSL/TLS offloading, routing, and load balancing.
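A hedged sketch of several of these data-plane policies together, using NGINX's `auth_request` subrequest mechanism for authentication before routing and load balancing. The upstream addresses and the auth endpoint are hypothetical:

```nginx
# Sketch of policy enforcement at the gateway: authentication via a
# subrequest, TLS offloading, then load-balanced proxying.
# Addresses, paths, and the auth endpoint are illustrative assumptions.
upstream orders_service {
    least_conn;                 # load-balancing policy
    server 10.0.0.21:8080;
    server 10.0.0.22:8080;
}

server {
    listen 443 ssl;             # SSL/TLS offloading at the gateway
    ssl_certificate     /etc/nginx/ssl/api.crt;
    ssl_certificate_key /etc/nginx/ssl/api.key;

    location /api/orders/ {
        auth_request /_validate;           # authenticate before routing
        proxy_pass http://orders_service;  # route and load balance
    }

    location = /_validate {
        internal;
        # Hypothetical authentication service endpoint
        proxy_pass http://auth-service/validate;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
    }
}
```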
API management is the process of deploying, documenting, operating, and monitoring individual APIs. It is typically accomplished with management-plane software (for example, an API manager) that defines and applies policies to API gateways and developer portals.
Depending on business and functional requirements, an API gateway can be deployed as a standalone component in the data plane, or as part of an integrated API management solution, such as F5 NGINX Management Suite API Connectivity Manager.
There are several key factors to consider when deciding on requirements for your API gateway:
NGINX offers several options for deploying and operating an API gateway depending on your use cases and deployment patterns.
Kubernetes‑native tools:
Get started by requesting your free 30-day trial of NGINX Ingress Controller with NGINX App Protect WAF and DoS, and download the always free NGINX Service Mesh.
Universal tools:
To learn more about using NGINX Plus as an API gateway, request your free 30-day trial and see Deploying NGINX as an API Gateway on our blog. To try NGINX Management Suite, request your free 30-day trial.