APIs are changing the way we build applications and the way we expose data, both inside and outside our organizations. The success of our APIs depends on their integrity, availability, and performance, and an API Gateway such as Apache APISIX helps us achieve all three.
When it comes to deploying API gateways, there are four well-known patterns: Centralized edge gateway, Two-tier gateway, Microgateway, and Sidecar. In this post, we will walk through these patterns and help you choose the right API gateway deployment pattern for your business.
What is an API gateway?
An API gateway is a management tool that sits at the edge of a system between a consumer and a collection of backend services and acts as a single point of entry for a defined group of APIs. The consumer can be an end-user application or device, such as a single-page web application or a mobile app, another internal system, or a third-party application or system.
API gateway deployment components
An API gateway is implemented with two fundamental high-level components: a control plane and a data plane. These components can be packaged together or deployed separately. The control plane is where operators interact with the gateway and define routes, policies, and the required telemetry. The data plane is where all of the work specified in the control plane occurs: network packets are routed, policies are enforced, and telemetry is emitted. For example, APISIX offers three deployment modes (traditional, decoupled, and standalone) for different production use cases.
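As a rough illustration, APISIX selects its deployment mode in `config.yaml`. The sketch below follows APISIX 3.x field names; check the documentation for your version before relying on it:

```yaml
# config.yaml -- sketch of APISIX deployment roles (APISIX 3.x syntax)

# Traditional mode: one instance acts as both control plane and data plane,
# reading its configuration from etcd.
deployment:
  role: traditional
  role_traditional:
    config_provider: etcd
  etcd:
    host:
      - "http://127.0.0.1:2379"

# Standalone mode (alternative): a pure data plane that loads all routes
# from a local apisix.yaml file instead of etcd.
# deployment:
#   role: data_plane
#   role_data_plane:
#     config_provider: yaml
```

In decoupled mode, separate instances take the `control_plane` and `data_plane` roles, so the configuration API and the traffic path can be scaled and secured independently.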
Centralized edge gateway
An API gateway is typically deployed at the edge of a system, but the definition of “system” in this case can be quite flexible. For startups and many small-medium businesses, an API gateway will often be deployed at the edge of the data center or the cloud. In these situations, there may only be a single API gateway (deployed and running via multiple instances for high availability) that acts as the front door for the entire backend estate, and the API gateway will provide all of the edge functionality.
An API gateway handles cross-cutting requirements such as user authentication, authorization, request rate limiting, caching, timeouts/retries, and request/response transformation, and it can provide metrics, logs, and trace data to support the implementation of observability within the system.
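APISIX expresses several of these cross-cutting concerns as plugins attached to a route. As a hedged sketch, in standalone mode a route with authentication and rate limiting could be declared in `apisix.yaml` roughly like this (the upstream host and the limits are hypothetical):

```yaml
# apisix.yaml -- standalone-mode route with cross-cutting plugins
routes:
  - uri: /orders/*
    plugins:
      key-auth: {}            # consumers must present a valid API key
      limit-count:            # at most 100 requests per time window
        count: 100
        time_window: 60
        rejected_code: 429
    upstream:
      type: roundrobin
      nodes:
        "orders-svc:8080": 1  # hypothetical backend service
#END
```

The trailing `#END` marker is how standalone mode detects that the file is complete; in traditional or decoupled mode the same route would instead be created through the Admin API.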
Also, many API gateways provide additional features that enable developers to manage the lifecycle of an API, assist with the onboarding and management of developers using the APIs (such as providing a developer portal and related account administration and access control), and provide enterprise governance.
For large organizations and enterprises, an API gateway will typically be deployed in multiple locations, often as part of the initial edge stack at the perimeter of a data center, and additional gateways may be deployed as part of each product, line of business, or organizational department. In this context, these gateways would more typically be separate implementations and may offer different functionality depending on geographical location (required governance) or infrastructure capabilities (running on low-powered edge compute resources).
The diagram below shows how the Apache APISIX API gateway often sits between the public internet and the demilitarized zone (DMZ) of a private network.
Microgateway
Microgateways are designed entirely for internal communication between microservices. Each individual microgateway may have a different set of policies and security rules, and may require aggregation of monitoring data and metrics from multiple services.
The concept is to give the individual team managing a set of microservices a dedicated gateway that controls how those services are securely exposed. The same developer team manages and maintains both the microservices and the microgateway, so they can fix bugs, ship updates, and make improvements independently, quickly pushing changes to production with fewer dependencies on other teams and without impacting other applications in the deployment.
Sidecar API gateway
Sidecar implements an API gateway as a container attached to a service in a container runtime such as Kubernetes. The pattern takes its name from the sidecar attached to a motorcycle: a helper component is attached to a parent application (the service) and provides supporting features for it, such as monitoring, logging, configuration, and networking. The sidecar shares the same lifecycle as the parent application and is created and retired alongside it; a fleet of such sidecars, together with a control plane, forms a service mesh.
The benefit of adopting this pattern is that each service runtime can configure its own API gateway in the way that suits it best, since the required gateway functionality and setup can vary from service to service. It also isolates failures: an issue in one gateway does not impact all services, as it would with shared API gateway infrastructure. For example, Amesh is a service mesh solution based on Apache APISIX.
The preceding diagram illustrates an ingress acting as an API load balancer and resource router into each service endpoint. The entry point for the service is not the service endpoint itself but rather a sidecar API gateway. The sidecar can then perform any of the capabilities offered by the API gateway in addition to routing traffic to the service endpoint.
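To make the topology concrete, here is a hedged sketch of a Kubernetes Pod running APISIX as a sidecar next to an application container; the image names, ports, and ConfigMap are hypothetical:

```yaml
# Pod with an APISIX sidecar: traffic enters on the sidecar's port (9080)
# and is proxied to the application container over localhost.
apiVersion: v1
kind: Pod
metadata:
  name: orders
spec:
  containers:
    - name: app
      image: example/orders:1.0       # hypothetical application image
      ports:
        - containerPort: 8080         # not exposed directly to callers
    - name: apisix-sidecar
      image: apache/apisix:latest     # pin a concrete tag in practice
      ports:
        - containerPort: 9080         # the service's actual entry point
      volumeMounts:
        - name: gateway-config
          mountPath: /usr/local/apisix/conf/apisix.yaml
          subPath: apisix.yaml
  volumes:
    - name: gateway-config
      configMap:
        name: orders-gateway-config   # hypothetical per-service config
```

Because both containers share the Pod's network namespace, the sidecar reaches the application at 127.0.0.1:8080, while only the sidecar's port is exposed to the rest of the cluster.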
As we have seen, no single deployment pattern is suitable for all conditions, and you may run one or several gateways in your system. The right choice depends on the complexity and needs of your business. If you need help deciding which deployment pattern would be best for you, join our community Slack channel, where experts can help you make a decision.
Recommended content 💁
➔ Read the blog posts: