Today’s enterprise is often made up of globally distributed teams building and deploying APIs and microservices, usually across more than one deployment environment. According to F5’s State of Application Strategy Report, 81% of organizations operate across three or more environments spanning public cloud, private cloud, on premises, and edge.
Ensuring the reliability and security of these complex, multi‑cloud architectures is a major challenge for the Platform Ops teams responsible for maintaining them. According to software engineering leaders surveyed in the F5 report, visibility (45%) and consistent security (44%) top the list of multi‑cloud challenges faced by Platform Ops teams.
With the growing number of APIs and microservices today, API governance is quickly becoming one of the most important topics for planning and implementing an enterprise‑wide API strategy. But what is API governance, and why is it so important for your API strategy?
At the most basic level, API governance involves creating policies and running checks and validations to ensure APIs are discoverable, reliable, observable, and secure. It provides visibility into the state of the complex systems and business processes powering your modern applications, which you can use to guide the evolution of your API infrastructure over time.
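To make the "checks and validations" part concrete, here is a minimal sketch of an automated governance check. The specific rules (required `info` fields, `operationId` on every operation, at least one security scheme) are illustrative assumptions, not a mandated standard; real governance policies vary by organization.

```python
"""Minimal sketch of an automated governance check (hypothetical rules):
verify that an OpenAPI document carries the metadata a governance policy
might require before an API can be published."""

REQUIRED_INFO_FIELDS = ("title", "version", "description")  # assumed policy

def governance_findings(spec: dict) -> list[str]:
    """Return a list of human-readable findings; empty means the spec passes."""
    findings = []
    info = spec.get("info", {})
    for field in REQUIRED_INFO_FIELDS:
        if not info.get(field):
            findings.append(f"info.{field} is missing")
    # Discoverability: every operation should declare an operationId.
    for path, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            if not op.get("operationId"):
                findings.append(f"{method.upper()} {path}: missing operationId")
    # Security: at least one security scheme should be defined.
    if not spec.get("components", {}).get("securitySchemes"):
        findings.append("no securitySchemes defined")
    return findings

spec = {
    "openapi": "3.0.3",
    "info": {"title": "Sentence API", "version": "1.0.0"},
    "paths": {"/sentence": {"get": {"summary": "Get a sentence"}}},
}
for finding in governance_findings(spec):
    print(finding)
```

Checks like this can run in a CI pipeline, turning governance from a manual review gate into an automated, repeatable validation step.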
The strategic importance of API governance cannot be overstated: it’s the means by which you realize your organization’s overall API strategy. Without proper governance, you can never achieve consistency across the design, operation, and productization of your APIs.
When done poorly, governance often imposes burdensome requirements that slow teams down. When done well, however, API governance reduces work, streamlines approvals, and allows different teams in your organization to function independently while delivering on the overall goals of your API strategy.
Building an effective API governance plan as part of your API strategy starts with identifying the types of APIs you have in production, and the tools, policies, and guidance you need to manage them. Today, most enterprise teams are working with four primary types of APIs: external (public) APIs, internal (private) APIs, partner APIs, and third‑party APIs.
Each type of API in the enterprise must be governed to ensure it is secure, reliable, and accessible to the teams and users who need to access it.
There are many ways to define and apply API governance. At NGINX, we typically see customers applying one of two models: centralized or decentralized governance.
As companies progress in their API‑first journeys, however, both models start to break down as the number of APIs in production grows. Centralized models often try to implement a one-size-fits-all approach that requires various reviews and signoffs along the way. This slows dev teams down and creates friction – in their frustration, developers sometimes even find ways to work around the requirements (the dreaded “shadow IT”).
The other model – decentralized governance – works well for API developers at first, but over time complexity increases. Unless the different teams deploying APIs communicate frequently, the overall experience becomes inconsistent across APIs: each is designed and functions differently, version changes result in outages between services, and security is enforced inconsistently across teams and services. For the teams building APIs, the additional work and complexity eventually slows development to a crawl, just like the centralized model.
Cloud‑native applications rely on APIs for the individual microservices to communicate with each other, and to deliver responses back to the source of the request. As companies continue to embrace microservices for their flexibility and agility, API sprawl will not be going away. Instead, you need a different approach to governing APIs in these complex, constantly changing environments.
Fortunately, there is a better way. Adaptive governance offers an alternative model that empowers API developers while giving Platform Ops teams the control they need to ensure the reliability and security of APIs across the enterprise.
At the heart of adaptive governance is balancing control (the need for consistency) with autonomy (the ability to make local decisions) to enable agility across the enterprise. In practice, the adaptive governance model unbundles and distributes decision making across teams.
Platform Ops teams manage shared infrastructure (API gateways and developer portals) and set global policies to ensure consistency across APIs. Teams building APIs, however, act as the subject matter experts for their services or line of business. They are empowered to set and apply local policies for their APIs – role‑based access control (RBAC), rate limiting for their service, etc. – to meet requirements for their individual business contexts.
Adaptive governance allows each team or line of business to define its workflows and balance the level of control required, while using the organization’s shared infrastructure.
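The interplay of global policy, local overrides, and guardrails can be sketched in a few lines of code. This is an illustrative model, not an API Connectivity Manager API: the policy names, default values, and permitted ranges below are invented for the example.

```python
"""Illustrative sketch (not a real product API): adaptive governance layers
team-level policy on top of global policy, allowing local overrides only
inside guardrails set by Platform Ops."""

GLOBAL_POLICY = {           # set once by Platform Ops (hypothetical values)
    "tls_required": True,
    "rate_limit_rps": 100,  # default rate limit
}
GUARDRAILS = {              # what teams may change, and within what bounds
    "rate_limit_rps": lambda v: 1 <= v <= 1000,
}

def effective_policy(team_overrides: dict) -> dict:
    """Merge a team's local policy choices onto the global baseline."""
    policy = dict(GLOBAL_POLICY)
    for key, value in team_overrides.items():
        allowed = GUARDRAILS.get(key)
        if allowed is None:
            raise ValueError(f"{key} is controlled globally and cannot be overridden")
        if not allowed(value):
            raise ValueError(f"{key}={value} is outside the permitted range")
        policy[key] = value
    return policy

print(effective_policy({"rate_limit_rps": 250}))
# -> {'tls_required': True, 'rate_limit_rps': 250}
# A team cannot disable TLS: effective_policy({"tls_required": False}) raises
```

The key design point is that autonomy is bounded, not unlimited: a team can tune the rate limit for its own service, but settings with enterprise-wide security implications stay under global control.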
As you plan and implement your API strategy, adaptive governance comes down to a few core practices: provide shared infrastructure, give teams logical workspaces of their own, and balance global policies with local control. Let’s look at how you can accomplish each of these with API Connectivity Manager, part of F5 NGINX Management Suite.
Teams across your organization are building APIs, and they need to include similar functionality in their microservices: authentication and authorization, mTLS encryption, and more. They also need to make documentation and versioning available to their API consumers, be those internal teams, business partners, or external developers.
Rather than requiring teams to build their own solutions, Platform Ops teams can provide access to shared infrastructure. As with all actions in API Connectivity Manager, you can set this up in just a few minutes using either the UI or the fully declarative REST API, which enables you to integrate API Connectivity Manager into your CI/CD pipelines. In this post we use the UI to illustrate some common workflows.
API Connectivity Manager supports two types of Workspaces: infrastructure and services. Infrastructure Workspaces are used by Platform Ops teams to onboard and manage shared infrastructure in the form of API Gateway Clusters and Developer Portal Clusters. Services Workspaces are used by API developers to publish and manage APIs and documentation.
To set up shared infrastructure, first add an infrastructure Workspace. Click Infrastructure in the left navigation column and then the + Add button in the upper right corner of the tab. Give your Workspace a name (here, it’s team-sentence – an imaginary team building a simple “Hello, World!” API).
Next, add an Environment to the Workspace. Environments contain API Gateway Clusters and Developer Portal Clusters. Click the name of your Workspace and then the … icon in the Actions column; select Add from the drop‑down menu.
The Create Environment panel opens as shown in Figure 2. Fill in the Name (and optionally, Description) field, select the type of environment (production or non‑production), and click the + Add button for the infrastructure you want to add (API Gateway Clusters, Developer Portal Clusters, or both). Click the Create button to finish setting up your Environment. For complete instructions, see the API Connectivity Manager documentation.
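The same workflow can be driven from a CI/CD pipeline via the declarative REST API. The sketch below is a hedged illustration: the base URL, endpoint paths, and payload shapes are assumptions made for the example, not the documented contract, so consult the API Connectivity Manager API reference for the real endpoints.

```python
"""Hedged sketch of creating a Workspace and Environment via the declarative
REST API. Endpoint paths and payload fields here are illustrative
assumptions, not the documented API."""
import json
import urllib.request

BASE = "https://acm.example.com/api"  # hypothetical management-plane URL

def make_request(path: str, payload: dict) -> urllib.request.Request:
    """Build (but do not send) a JSON POST request."""
    return urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Step 1: an infrastructure Workspace for the team-sentence team
ws_req = make_request("/infrastructure/workspaces", {"name": "team-sentence"})

# Step 2: a production Environment with an API Gateway Cluster (assumed shape)
env_req = make_request(
    "/infrastructure/workspaces/team-sentence/environments",
    {"name": "production", "type": "PROD",
     "proxies": [{"proxyClusterName": "api-gateway"}]},
)

# In a real pipeline, urllib.request.urlopen(ws_req) would submit the request.
print(ws_req.full_url)
print(env_req.full_url)
```

Because the API is declarative, the same payloads can live in version control alongside your application code, making infrastructure setup reviewable and repeatable.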
Providing logical separation for teams by line of business, geographic region, or other boundary makes sense – provided it doesn’t deprive them of access to the tools they need to succeed. Having access to shared infrastructure shouldn’t mean teams have to worry about activities at the global level. Instead, you want them to focus on defining their own requirements, charting a roadmap, and building their microservices.
To that end, Platform Ops teams can provide services Workspaces where API teams organize and operate their services and documentation. These create logical boundaries and provide access to different environments – development, testing, and production, for example – for developing services. The process is similar to creating the infrastructure Workspace in the previous section.
First, click Services in the left navigation column and then the + Add button in the upper right corner of the tab. Give your Workspace a name (here, api-sentence for our “Hello, World” service), and optionally provide a description and contact information.
At this point, you can invite API developers to start publishing proxies and documentation in the Workspace you’ve created for them. For complete instructions on publishing API proxies and documentation, see the API Connectivity Manager documentation.
Adaptive governance requires a balance between enforcing global policies and empowering teams to make decisions that boost agility. You need to establish a clear separation of responsibilities by defining the global settings enforced by Platform Ops and setting “guardrails” that define the tools API developers use and the decisions they can make.
API Connectivity Manager provides a mix of global policies (applied to shared infrastructure) and granular controls managed at the API proxy level.
Global policies apply to the shared infrastructure itself and are managed by Platform Ops, while API proxy policies are set per API by the teams that own them and include controls such as which HTTP methods (GET, POST, PUT, etc.) an API accepts.

In the following example, we use the UI to define a policy that secures communication between an API Gateway Cluster and backend services.
Click Infrastructure in the left navigation column. After you click the name of the Environment containing the API Gateway Cluster you want to edit, the tab displays the API Gateway Clusters and Developer Portal Clusters in that Environment.
In the row for the API Gateway Cluster to which you want to apply a policy, click the … icon in the Actions column and select Edit Advanced Configuration from the drop‑down menu. Click Global Policies in the left column to display a list of all the global policies you can configure.
To apply the TLS Backend policy, click the … icon at the right end of its row and select Add Policy from the drop‑down menu. Fill in the requested information, upload your certificate, and click Add. Then click the Save and Submit button. From now on, traffic between the API Gateway Cluster and the backend services is secured with TLS. For complete instructions, see the API Connectivity Manager documentation.
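In plain terms, the TLS Backend policy means the gateway originates a TLS connection to each backend and verifies the backend’s certificate before forwarding traffic. The sketch below expresses that behavior with Python’s standard `ssl` module; the TLS version floor and the commented-out CA file name are assumptions for illustration, not settings taken from the product.

```python
"""Sketch of what a TLS-to-backend policy enforces, expressed with Python's
ssl module: verify the peer's certificate chain and hostname, and refuse
old protocol versions. File names and the version floor are placeholders."""
import ssl

ctx = ssl.create_default_context()            # verifies chain + hostname by default
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # assumed policy floor
# ctx.load_verify_locations("backend-ca.pem") # the CA bundle uploaded with the policy

# With this context, a connection to a backend presenting an untrusted or
# mismatched certificate fails instead of silently carrying traffic.
assert ctx.verify_mode is ssl.CERT_REQUIRED
```

The gateway applies the equivalent of this configuration centrally, so individual services don’t each have to implement backend certificate verification themselves.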
Planning and implementing API governance is a crucial step in ensuring your API strategy is successful. By working towards a distributed model and relying on adaptive governance to address the unique requirements of different teams and APIs, you can scale and apply uniform governance without sacrificing the speed and agility that make APIs, and cloud‑native environments, so productive.
Start a 30‑day free trial of NGINX Management Suite, which includes access to API Connectivity Manager, NGINX Plus as an API gateway, and NGINX App Protect to secure your APIs.