API management
API management is the process of creating and publishing web application programming interfaces (APIs), enforcing their usage policies, controlling access, nurturing the subscriber community, collecting and analyzing usage statistics, and reporting on performance. API management components provide mechanisms and tools to support developer and subscriber communities.[1]
Components
While solutions vary, components that provide the following functions are typically found in API management products:
Gateway
A server that acts as an API front-end: it receives API requests, enforces throttling and security policies, relays requests to the back-end service, and then passes the response back to the requester.[2] A gateway often includes a transformation engine to orchestrate and modify requests and responses on the fly. A gateway can also provide functions such as collecting analytics data and caching. The gateway can provide the functionality to support authentication, authorization, security, audit and regulatory compliance.[3] Gateways can be implemented using technologies like Nginx or HAProxy.
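Throttling at the gateway is commonly implemented with a token-bucket scheme. The sketch below illustrates the core idea; the rate limits and the per-API-key lookup are invented for the example, not taken from any particular product.

```python
import time

class TokenBucket:
    """Per-client token bucket: allows `rate` requests per second,
    with bursts of up to `capacity` requests."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller would return HTTP 429 Too Many Requests

# A gateway would keep one bucket per API key:
buckets: dict[str, TokenBucket] = {}

def check_rate_limit(api_key: str) -> bool:
    bucket = buckets.setdefault(api_key, TokenBucket(rate=5, capacity=10))
    return bucket.allow()
```

With `capacity=10`, a fresh client can burst ten requests immediately; after that, requests are admitted at the steady rate of five per second.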
Publishing tools
A collection of tools that API providers use to define APIs (for instance using the OpenAPI or RAML specifications); generate API documentation; govern API usage through access and usage policies; test and debug the execution of APIs, including security testing and automated generation of tests and test suites; deploy APIs into production, staging, and quality-assurance environments; and coordinate the overall API lifecycle.
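As an illustration, a minimal OpenAPI 3.0 definition of the kind such tools consume might look like the following; the "Orders API" name and the `/orders` path are invented for the example.

```python
# A minimal OpenAPI 3.0 document, represented as a Python dict.
# The API name and path are hypothetical.
spec = {
    "openapi": "3.0.3",
    "info": {"title": "Orders API", "version": "1.0.0"},
    "paths": {
        "/orders": {
            "get": {
                "summary": "List orders",
                "responses": {
                    "200": {"description": "A list of orders"}
                },
            }
        }
    },
}

def list_operations(spec: dict) -> list[str]:
    """Enumerate 'METHOD /path' pairs, as a documentation
    generator or policy engine would when walking the spec."""
    ops = []
    for path, methods in spec["paths"].items():
        for method in methods:
            ops.append(f"{method.upper()} {path}")
    return ops
```

Publishing tools walk exactly this structure to render documentation pages, attach access policies per operation, and generate test stubs.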
Developer portal/API store
A community site, typically branded by an API provider, that gathers in a single convenient place the information and functionality API users need: documentation, tutorials, sample code, software development kits, an interactive API console and sandbox for trialling APIs, the ability to subscribe to APIs and manage subscription keys (such as an OAuth2 client ID and client secret), and support from the API provider and the user community.
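Subscription-key management on the portal side can be sketched as issuing a client ID/secret pair and storing only a hash of the secret. This is a hypothetical illustration of the pattern, not any specific portal's API.

```python
import hashlib
import secrets

# Hypothetical portal-side credential store: maps client IDs to a
# SHA-256 digest of the secret, so the plain secret is never persisted.
registry: dict[str, str] = {}

def issue_credentials() -> tuple[str, str]:
    """Create a new subscription: the secret is shown to the
    subscriber once and only its hash is kept server-side."""
    client_id = secrets.token_hex(8)
    client_secret = secrets.token_urlsafe(32)
    registry[client_id] = hashlib.sha256(client_secret.encode()).hexdigest()
    return client_id, client_secret

def verify(client_id: str, client_secret: str) -> bool:
    """Check a presented secret against the stored digest."""
    digest = hashlib.sha256(client_secret.encode()).hexdigest()
    return registry.get(client_id) == digest
```

A production portal would add expiry, rotation, and revocation on top of this, but the issue-once/verify-by-hash shape is the same.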
Reporting and analytics
Functionality to monitor API usage and load: overall hits, completed transactions, number of data objects returned, amount of compute time and other internal resources consumed, and the volume of data transferred. This can include real-time monitoring of the API, with alerts raised directly or via a higher-level network management system (for instance, if the load on an API has become too great), as well as functionality to analyze historical data, such as transaction logs, to detect usage trends. Functionality can also be provided to create synthetic transactions that test the performance and behavior of API endpoints. The information gathered by reporting and analytics can be used by the API provider to optimize the API offering within an organization's overall continuous improvement process and to define software service-level agreements for APIs.
Monetization
Functionality to support charging for access to commercial APIs. This can include support for setting up pricing rules based on usage, load and functionality, issuing invoices, and collecting payments, including multiple types of credit card payment.
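A common usage-based pricing rule is a free tier followed by progressively cheaper per-call tiers. The sketch below illustrates the calculation; all tier sizes and prices are invented.

```python
# Hypothetical tiered pricing: a free tier, then two paid tiers.
TIERS = [
    (10_000, 0.0),          # first 10,000 calls free
    (90_000, 0.002),        # next 90,000 calls at $0.002 each
    (float("inf"), 0.001),  # everything beyond at $0.001 each
]

def monthly_charge(calls: int) -> float:
    """Walk the tiers, charging each block of calls at its unit price."""
    total = 0.0
    remaining = calls
    for tier_size, unit_price in TIERS:
        in_tier = min(remaining, tier_size)
        total += in_tier * unit_price
        remaining -= in_tier
        if remaining <= 0:
            break
    return round(total, 2)
```

For example, 100,000 calls fall entirely within the first two tiers (10,000 free, then 90,000 at $0.002), giving a charge of $180.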
Market size
A number of industry analysts have observed that the size of the market for API management solutions has been growing rapidly since the early 2010s. Gartner estimated the size of the market for API management at $70 million in 2013, growing at 40% a year.[4] According to Forrester Research, annual spend on API management in the US alone was $140 million in 2014 and was expected to grow to $660 million by 2020, with total global sales predicted to exceed a billion dollars by that year. A more recent market analysis, conducted by KBV Research in 2019, predicted a continuing compound annual growth rate (CAGR) of 28.4%, taking the total market value to $6.2 billion by 2024.[5][6][7]
Latency testing in a hybrid gateway deployment
The following scenario illustrates how latency can be tested between a self-hosted on-premises Azure gateway and a gateway hosted in the Azure cloud.
Context: A company has recently migrated part of its services to Azure, taking advantage of cloud infrastructure to improve the flexibility and scalability of its applications. As part of this migration, gateways have been put in place to provide connectivity between on-premises resources and those hosted in the cloud.
To guarantee optimal performance, however, it is essential to evaluate the latency between the self-hosted on-premises Azure gateway and the Azure gateway in the cloud. This measurement determines the impact on application response times and supports informed decisions about connectivity configuration.
Objective: The goal of the latency test is to measure the time required for a data packet to traverse the connection between the self-hosted on-premises gateway and the Azure gateway in the cloud. Low latency is crucial for efficient communication and optimal application performance.
Test steps:
Gateway configuration: Ensure that the on-premises and cloud gateways are correctly configured with the appropriate security settings.
Test traffic generation: Use traffic-generation tools to simulate network requests between the two gateways. Create traffic scenarios representative of real-world usage conditions.
Latency measurement: Use latency-measurement tools, such as ping commands or Azure network diagnostic tools, to measure the response time between the gateways. Run tests at different times of day to account for variations in network load.
Result analysis: Collect the latency data and analyze it to identify spikes, trends, and variations. Compare the results against the company's performance requirements.
Optimization: If latency problems are identified, explore options such as scaling resources, adjusting network configurations, or using caching services.
Reporting: Produce a detailed report of the latency test results, highlighting current performance, potential weak points, and optimization recommendations. This report is essential for making informed decisions about network configuration and ensuring optimal performance in the hybrid Azure environment.
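The measurement and analysis steps above can be sketched as follows. The probe is assumed to be any callable that sends one request through the gateway under test; the statistics mirror what the report step would summarize.

```python
import statistics
import time

def measure_rtt(probe, samples: int = 10) -> list[float]:
    """Time repeated invocations of `probe` (e.g. one request sent
    through the gateway) and return round-trip times in milliseconds."""
    rtts = []
    for _ in range(samples):
        start = time.monotonic()
        probe()
        rtts.append((time.monotonic() - start) * 1000.0)
    return rtts

def latency_report(rtts: list[float]) -> dict:
    """Summary statistics for the reporting step: average,
    spread, and jitter (population standard deviation)."""
    return {
        "min_ms": min(rtts),
        "avg_ms": statistics.mean(rtts),
        "max_ms": max(rtts),
        "jitter_ms": statistics.pstdev(rtts),
    }
```

Running the same probe at different times of day and comparing the resulting reports is what surfaces the load-dependent variations the analysis step looks for.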
References
- ^ Oracle. "An Oracle white paper - A Comprehensive Solution for API Management" (PDF). www.oracle.com. Retrieved 16 January 2019.
- ^ "The API gateway pattern versus the Direct client-to-microservice communication". Microsoft. Retrieved 16 January 2019.
- ^ "API Management Market Key Company : Microsoft, Amazon Web Services, Inc., International Business Machines Corp. is Dominating the Global Industry in 2019". 21 January 2019. Archived from the original on 1 February 2019. Retrieved 31 January 2019.
- ^ Garrett, Owen. "Standard for Containerized Applications". Archived from the original on 2018-11-30.
- ^ Heffner, Randy; Yamnitsky, Michael; Mines, Christopher; Fleming, Nate. "Sizing The Market For API Management Solutions". Forrester Research. Retrieved 23 September 2016.
- ^ Yamnitsky, Michael. "The API Management Solutions Market Will Quadruple By 2020 As Business Goes Digital". Forrester Research. Retrieved 23 September 2016.
- ^ "API Management Market Size, Share & Industry Analysis Report, 2024". KBV Research. Retrieved 2020-06-12.