A look into how we scale our APIs...
One increasingly critical aspect of any tool in our API toolboxes is whether we can deploy and operate it within multiple environments, or whether we are limited to a single on-premises or cloud location. Can your request and response web API infrastructure operate within the AWS, Google, or Azure clouds? Does it operate on-premises within your data center, locally for development, and within sandbox environments for partners and third-party developers? Where your APIs are deployed will have just as big an impact on reliability and performance as your approach to design and the protocol you are using.
As the web matures and grows in importance, regulatory and other regional concerns may have a bigger impact on your API infrastructure than whether you are using REST, GraphQL, Webhooks, Server-Sent Events, or Kafka. Being able to deliver, operate, and scale APIs anywhere they are needed is fast becoming a defining characteristic of the tools we possess in our API toolboxes. This gives us yet another consideration when developing our API toolbox, pushing us to look beyond any single way of delivering APIs, and beyond the protocols, services, and tooling we use, to make sure we can effectively deploy our API infrastructure where it is needed most, and where it will have the most impact.
The containerization of our API infrastructure as part of the wider microservices evolution, and the ability to orchestrate these containers with Kubernetes, is changing how we design, develop, operate, and scale our API infrastructure, allowing us to deliver APIs on any major cloud platform, on-premises, or even on device. The same approaches used to deliver API resources are being used to deliver the API infrastructure that drives them, letting us better deliver, operate, and scale our APIs in any environment and bringing them closer to where consumers need them. This shifts how internal, partner, and public infrastructure is delivered on a temporary or permanent basis, enabling enterprise organizations to be more nimble in how they operate their global infrastructure at scale.
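A common pattern behind this kind of portability is keeping the API itself environment-agnostic and pushing every deployment-specific detail into environment variables, which the container platform (Kubernetes, ECS, or a local Docker run) injects at launch. Here is a minimal sketch in Python using only the standard library; the variable names (`API_PORT`, `API_ENV`) and the status endpoint are illustrative assumptions, not a standard:

```python
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer


def load_config(env=None):
    """Build runtime config from environment variables so the same
    container image can run in AWS, Azure, Google Cloud, on-premises,
    or a local sandbox without code changes. Names are hypothetical."""
    if env is None:
        env = os.environ
    return {
        "port": int(env.get("API_PORT", "8080")),
        # e.g. "aws", "azure", "gcp", "on-prem", "sandbox"
        "environment": env.get("API_ENV", "local"),
    }


class StatusHandler(BaseHTTPRequestHandler):
    """A tiny status endpoint that reports which environment it runs in."""

    config = load_config()

    def do_GET(self):
        body = json.dumps({
            "status": "ok",
            "environment": self.config["environment"],
        })
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))


# To run the server locally or inside a container:
#   HTTPServer(("", load_config()["port"]), StatusHandler).serve_forever()
```

Because nothing about the target environment is hard-coded, the same image can be promoted from a developer laptop to a partner sandbox to production in any cloud simply by changing the injected configuration.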