An Introduction to API Management and NGINX
This article is based on the content found in this episode of Software Engineering Daily. This episode features NGINX product manager Kevin Jones. All quotes from Jones can be found in this episode’s transcript.
Setting the Stage for API Management
The term “API” is, and has been for quite some time, ubiquitous within the context of computing. Today, web-based APIs are among the most prevalent kinds of APIs. Salesforce introduced the first web-based API in early 2000, which birthed the notion of the Internet as a service.
Much has changed within the arena of web infrastructure over the past twenty years. These changes have influenced the body of thought surrounding APIs, as well as how APIs are operationalized. Jones notes “a rise in services, the amounts of services, and […] an increase in various protocols being used to communicate over the Internet” as high-level changes within the scope of web infrastructure.
In addition to changes within web infrastructure, the increased popularity and number of mobile devices has drastically increased web usage. The near-simultaneous rise of microservice-based applications led to even more communication over the Internet. Jones states the resulting effects concisely: “more devices, more connections, and more requests being processed throughout the internet.”
Microservice-based architecture also increases the number of APIs within a system. The initial wave of companies breaking apart their monoliths in order to extract services brought an inherent increase in APIs and in requests being processed. Each microservice defines its own communication contract; this contract is the microservice’s API. Microservices come in all different flavors: their uses vary widely, their environments may differ, their needs for role-based access can change, and their hardware may be located in different geographic locations.
This is where the notion of API management becomes relevant. A common way to implement API management is by using NGINX, a popular and reliable reverse proxy. NGINX has an API management module that provides users with a control plane that sits on top of API gateways.
What is API Management?
API management is often discussed in tandem with API gateways, and the two terms are sometimes used interchangeably, though they aren’t synonymous. Technically, an API gateway is a reverse proxy that sits between an API and its consumers. API management refers to the process of maintaining a group of API gateways; an API management tool is the control plane for a collection of gateways.
Exploring the functionality of API gateways can help shed light on the benefits offered by an API management tool. These benefits include authentication, rate limiting, routing, and canary rollouts. NGINX can act as a reverse proxy and assume the role of an API gateway. Not only does NGINX offer fine-grained control over all of the aforementioned gateway capabilities, it offers many others as well. NGINX is configured through directives; visit the NGINX directives documentation for a comprehensive list of NGINX’s potential use cases.
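As a rough sketch, all of the gateway capabilities just mentioned can be expressed with standard NGINX directives. The upstream names, addresses, and endpoints below are hypothetical, and the `auth_request` directive assumes NGINX was built with the `ngx_http_auth_request_module`:

```nginx
# Hypothetical API gateway config: rate limiting, authentication,
# routing, and a canary rollout via standard NGINX directives.

# Allow each client IP roughly 10 requests per second.
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

# Send ~10% of clients to the canary build of the orders service.
split_clients "${remote_addr}" $orders_backend {
    10%     orders_canary;
    *       orders_stable;
}

upstream orders_stable { server 10.0.0.10:8080; }
upstream orders_canary { server 10.0.0.20:8080; }
upstream users         { server 10.0.0.30:8080; }

server {
    listen 80;
    server_name api.example.com;

    # Every request must pass the authentication subrequest first.
    auth_request /_auth;

    location = /_auth {
        internal;
        proxy_pass http://users/validate-token;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
    }

    # Route by URI prefix to the owning service.
    location /api/orders/ {
        limit_req zone=perip burst=20 nodelay;
        proxy_pass http://$orders_backend;
    }

    location /api/users/ {
        limit_req zone=perip burst=20 nodelay;
        proxy_pass http://users;
    }
}
```

Because `proxy_pass` in the orders location uses a variable, NGINX resolves it against the named upstream groups at request time, which is what makes the `split_clients`-based canary split work.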
Note that Figure 5 is meant only to give an overview of the benefits of an API gateway. Assuming this hypothetical system uses a microservice-based architecture, the API endpoints shown would most likely belong to a number of different microservices. To that end, there would most likely be more than one API gateway; the figure is not attempting to demonstrate an archetypal pattern for how an API gateway fits into a system’s architecture. If you’re interested in learning more about using an API gateway to help build a microservice-based application, check out this blog post from NGINX.
Figure 5 pictures a single-tier API gateway. A two-tier gateway pattern is often used to separate the responsibilities of security teams from those of SRE and DevOps teams. The idea behind this pattern is to separate high-level functionality, like security and access control, from service-dependent functionality, like routing. Jones notes another benefit of this pattern: “microservice[s] sitting behind that internal router gateway […] don’t have to go all the way back out through to the Internet and come into the DMZ again.”
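A minimal sketch of that two-tier split, with hypothetical addresses and certificate paths: the edge gateway in the DMZ handles cross-cutting concerns like TLS termination and access control, while the internal router gateway owns service-specific routing, so east–west traffic between services never loops back out through the DMZ.

```nginx
# Tier 1: edge gateway (DMZ) — the security team's concern.
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/api.example.com.pem;
    ssl_certificate_key /etc/nginx/certs/api.example.com.key;

    # Coarse access control at the edge.
    deny 192.0.2.0/24;

    # Hand everything to the internal router gateway.
    location / {
        proxy_pass http://10.0.1.5:8000;
    }
}

# Tier 2: internal router gateway — the SRE/DevOps team's concern.
server {
    listen 8000;

    location /api/orders/  { proxy_pass http://orders; }
    location /api/billing/ { proxy_pass http://billing; }
}

upstream orders  { server 10.0.2.10:8080; }
upstream billing { server 10.0.2.20:8080; }
```

Service teams can then change routing in tier 2 without touching the security-sensitive edge configuration in tier 1.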
There are no hard-and-fast rules describing the optimal API gateway pattern for a system. One could consider a service mesh a system with an API gateway at each instance of a service: the sidecar proxies act as gateways, and this is where a tool like NGINX would run. Jones notes the largest benefit of this pattern: “you can really have a fine-grained configuration all the way up into the container.” Check out this blog post from NGINX if you’re interested in learning more about service meshes.
Software architecture is constantly changing. The protocols services use to communicate, and their preferred mediums, like RPC and JSON, may change, as may the environments in which a system’s API gateways run. A popular API management tool is NGINX Controller. It manages these gateways in an infrastructure-agnostic way, removing the burden of maintaining each gateway in its particular environment.