Acceleration system for assisting API call processing

Document No.: 1432477 | Publication date: 2020-03-17

Note: This technique, an acceleration system for assisting API call processing, was created by Olivier Jean Poitrey on 2018-06-19. One embodiment includes an acceleration system that operates as an intermediary between an API processing system and a client to reduce API call round-trip latency. The acceleration system is a network of interconnected systems distributed around the globe. A given acceleration system establishes a network connection with a given client and receives, over that connection, requests to process API calls. The programming functions associated with the API calls are configured in the API processing system. The acceleration system assists in the processing of the API calls through its already-established connection with the API processing system.

1. A method, comprising:

receiving, from a client device, a boot request associated with an Application Programming Interface (API) call; and

in response to receiving the boot request:

selecting, based on reachability selection criteria, an acceleration system that operates as an intermediary between the client device and an API processing system while enabling the API call to be processed, and

routing the client device to the acceleration system.

2. The method of claim 1, wherein selecting the acceleration system based on the reachability selection criteria comprises: determining that the acceleration system is accessible to the client device.

3. The method of claim 2, wherein the acceleration system is embedded within an internet service provider and is accessible only by client devices associated with the internet service provider.

4. The method of claim 1, wherein the acceleration system further enables a previous API call to be processed, and selecting the acceleration system is further based on a delay associated with processing the previous API call.

5. The method of claim 1, further comprising:

receiving measurement data associated with processing a second API call and specifying a unique identifier associated with the client device;

calculating a delay based on the measurement data, the delay representing a total time from when a connection between the client device and the acceleration system is initiated to when the client device receives a response associated with the second API call;

receiving, from a resolver, a request comprising a resolver IP address and the unique identifier; and

matching the unique identifier included in the request with the unique identifier specified by the measurement data to determine that the delay is associated with the resolver IP address.

6. The method of claim 5, further comprising:

receiving, from the client device, a second boot request associated with a third API call, wherein the second boot request includes the resolver IP address; and

selecting, based on a delay associated with the resolver IP address, the acceleration system that operates as an intermediary between the client device and the API processing system while enabling the third API call to be processed.

7. The method of claim 6, wherein the resolver IP address is associated with an Internet Service Provider (ISP) through which the client device accesses internet services, and wherein the resolver IP address is different from a client IP address associated with the client device.

8. The method of claim 1, wherein selecting the acceleration system is further based on a level of congestion within a communication connection between the acceleration system and the API processing system.

9. The method of claim 1, further comprising receiving measurement data from the client device that is generated from one or more measurement operations associated with the acceleration system.

10. A computer-readable medium storing instructions that, when executed by a processor, cause the processor to:

receive, from a client device, a boot request associated with an Application Programming Interface (API) call; and

in response to receiving the boot request:

select, based on reachability selection criteria, an acceleration system that operates as an intermediary between the client device and an API processing system while enabling the API call to be processed, and

route the client device to the acceleration system.

11. The computer-readable medium of claim 10, wherein the instructions, when executed by the processor, further cause the processor to select the acceleration system based on the reachability selection criteria by: determining that the acceleration system is accessible to the client device.

12. The computer-readable medium of claim 11, wherein the acceleration system is embedded within an internet service provider and is accessible only by client devices associated with the internet service provider.

13. The computer-readable medium of claim 10, wherein the acceleration system is embedded within an internet exchange point and is accessible only by client devices that can be routed to the internet exchange point.

14. The computer-readable medium of claim 10, wherein the instructions, when executed by the processor, further cause the processor to:

receive measurement data associated with processing a second API call and specifying a unique identifier associated with the client device;

calculate a delay based on the measurement data, the delay representing a total time from when a connection between the client device and the acceleration system is initiated to when the client device receives a response associated with the second API call;

receive, from a resolver, a request comprising a resolver IP address and the unique identifier; and

match the unique identifier included in the request with the unique identifier specified by the measurement data to determine that the delay is associated with the resolver IP address.

15. The computer-readable medium of claim 14, wherein the instructions, when executed by the processor, further cause the processor to:

receive, from the client device, a second boot request associated with a third API call, wherein the second boot request includes the resolver IP address; and

select, based on a delay associated with the resolver IP address, the acceleration system that operates as an intermediary between the client device and the API processing system while enabling the third API call to be processed.

16. The computer-readable medium of claim 10, wherein the instructions, when executed by the processor, further cause the processor to select the acceleration system based on a delay criterion.

17. The computer-readable medium of claim 16, wherein the instructions, when executed by the processor, cause the processor to select the acceleration system based on the delay criterion by: determining a delay associated with one or more previous API calls received from the client device and processed by the acceleration system.

18. The computer-readable medium of claim 17, wherein the instructions, when executed by the processor, cause the processor to process measurement data received from the client device to measure the delay.

19. A computing environment, comprising:

a plurality of acceleration systems, wherein each acceleration system operates as an intermediary between a client device and an API processing system and simultaneously enables Application Programming Interface (API) calls to be processed; and

a client boot system configured to route the client device to one of the plurality of acceleration systems based on a set of selection criteria, wherein the client boot system comprises a selection engine configured to:

receive, from the client device, a boot request associated with an API call, and

in response to receiving the boot request:

select, based on reachability selection criteria, an acceleration system from the plurality of acceleration systems for enabling the API call to be processed, and

route the client device to the acceleration system.

20. The computing environment of claim 19, wherein the client device performs a Transmission Control Protocol (TCP) handshake and a Transport Layer Security (TLS) handshake with the acceleration system in response to being routed to the acceleration system.

21. The computing environment of claim 20, wherein, after the TCP handshake and the TLS handshake, the client device sends the API call to the acceleration system, and the acceleration system forwards the API call to the API processing system.

Technical Field

The present invention relates generally to cloud-based computing, and more particularly to an acceleration system for facilitating Application Programming Interface (API) call processing.

Background

Many internet-based services are hosted on cloud-based systems. Cloud-based systems typically include geographically distributed servers such that clients of services hosted on the cloud-based system are routed to the nearest server of the cloud-based system. In some cases, even the closest server in a cloud-based system is quite far from the client.

Generally, the farther a client is from a server to which it is routed, the slower the communication round trip between the server and the client and the higher the communication delay. Furthermore, in order to establish a communication connection with the server, the client must perform several communication handshakes, such as Transmission Control Protocol (TCP) handshakes. In addition, the client performs a Transport Layer Security (TLS) handshake with the server to establish a secure communication session. The TLS handshake typically makes two round trips between the client and the server.
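To make the handshake cost concrete, the following sketch estimates cold-connection setup time as a multiple of the client-server round-trip time. It is illustrative only: the RTT values are invented, and the counts assume one round trip for the TCP handshake and the two round trips for the TLS handshake described above.

```python
# Round trips needed before the first API response arrives on a cold
# connection: TCP handshake, TLS handshake, then the request itself.
TCP_RTTS = 1
TLS_RTTS = 2
REQUEST_RTTS = 1

def api_call_time_ms(rtt_ms: float) -> float:
    """Total time for handshakes plus one request, in milliseconds."""
    return (TCP_RTTS + TLS_RTTS + REQUEST_RTTS) * rtt_ms

# A distant server (120 ms RTT) vs. a nearby server (15 ms RTT):
print(api_call_time_ms(120))  # 480.0
print(api_call_time_ms(15))   # 60.0
```

Under these assumed numbers, every additional round trip multiplies the distance penalty, which is why shortening the handshake path matters more than shortening any single request.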

The further the client is from the server to which it is routed, the longer it takes to perform these handshakes and thus establish a connection between the client and the server. Thus, where the client is located a considerable distance from the server, the functionality of accessing internet-based services through a cloud-based system may be very slow and result in a poor user experience.

Disclosure of Invention

One embodiment of the present invention sets forth a method for booting a client device to an appropriate API edge gateway. The method includes receiving, from a client device, a boot request associated with an Application Programming Interface (API) call. The method further includes, in response to receiving the boot request, selecting an acceleration system based on reachability selection criteria and routing the client device to the acceleration system, where the acceleration system operates as an intermediary between the client device and an API processing system while enabling the API call to be processed.

One advantage of the disclosed method is that the round-trip time for processing API calls is reduced when the acceleration system operates as an intermediary between the client device and the API processing system. In particular, any round trip time required to establish a communication connection between the client device and the acceleration system is short relative to the case where a connection needs to be established between the client device and the API processing system.

Drawings

FIG. 1 illustrates a system environment configured to implement one or more aspects of the present invention.

FIG. 2 is an interaction diagram illustrating interactions between the components of FIG. 1, according to one embodiment of the invention.

FIG. 3 illustrates a boot system environment configured to implement one or more aspects of the present invention.

FIG. 4 is an interaction diagram illustrating interactions between the components of FIG. 3 using unique identifiers, according to one embodiment of the invention.

FIG. 5 is a flow diagram of method steps for bootstrapping a client to an API access endpoint, according to another embodiment of the invention.

Detailed Description

In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the present invention.

FIG. 1 illustrates a system environment 100 configured to implement one or more aspects of the present invention. As shown, system environment 100 includes an API processing system 102, clients 104(0)-104(N) (collectively referred to as "clients 104" and each as "client 104"), and acceleration systems 106(0)-106(N) (collectively referred to as "acceleration systems 106" and each as "acceleration system 106").

The API processing system 102, the acceleration systems 106, and the clients 104 communicate over a communication network (not shown). The communication network includes a plurality of network communication systems, such as routers and switches, configured to facilitate data communication. Those skilled in the art will recognize that there are many technically feasible techniques for constructing a communication network, including techniques practiced in deploying the well-known internet.

API processing system 102 comprises a network of interconnected nodes that are distributed worldwide and that receive, transmit, process, and/or store data associated with system environment 100. The interconnected nodes may include any suitable combination of software, firmware, and hardware for performing these desired functions. In particular, API processing system 102 includes multiple computing devices that may be co-located or physically distributed from one another. For example, these computing devices may include one or more general-purpose PCs, Macintoshes, workstations, Linux-based computers, server computers, one or more server pools, or any other suitable devices. The computing devices store and execute one or more programs that are remotely accessible through corresponding Application Programming Interfaces (APIs). In some embodiments, API processing system 102 provides computing resources to external entities for a fee. Such entities configure portions of API processing system 102, and clients of those entities access the configured portions of API processing system 102 to perform operations associated with the entities.

Client 104 includes one or more computer systems at one or more physical locations. Each computer system may include any suitable input device (e.g., a keypad, touch screen, mouse, or other device that can accept information), output device, mass storage media, or other suitable components for receiving, processing, storing, and transmitting data. Both the input device and the output device may include fixed or removable storage media, such as magnetic computer disks or CD-ROMs. Each computer system may include a personal computer, a workstation, a network computer, a kiosk, a wireless data port, a tablet computer, one or more processors within these or other devices, or any other suitable processing device.

Each client 104 may include a computer system, a set-top box, a mobile device such as a mobile phone, or any other technically feasible computing platform with network connectivity. In one embodiment, client 104 is coupled to or includes a display device and a speaker device for rendering video content and generating sound output, respectively. Each client 104 includes computer hardware and/or computer software that relies on API processing system 102 for certain operations.

In particular, the client 104 executes one or more cloud-based applications that communicate with the API processing system 102 over a communication network to perform various operations. In one embodiment, a cloud-based application operates by issuing a request to a portion of the API processing system 102 that is configured with the processing infrastructure required to process the request. In response to receiving the request, the API processing system 102 processes the request and generates output data that is sent back to the cloud-based application. Such a round trip between a cloud-based application executing on a client 104 and a remote server is referred to as an API call round trip. Generally, the farther a client 104 is from the relevant portion of the API processing system 102, the higher the latency of the API call round trip. Similarly, the more congested the portion of the API processing system 102 that processes the request, the higher the latency of the API call round trip.

The acceleration system 106 operates as an intermediary between the API processing system 102 and the client 104 to reduce API call round-trip delay. The acceleration system 106 includes a network of interconnected systems that are distributed around the world, and each of which operates as an intermediary between the client 104 and the API processing system 102. A given acceleration system 106 establishes a network connection with a given client 104 and receives requests for processing API calls over that connection. The programming functions associated with the API calls are configured in the API processing system 102. The acceleration system 106 facilitates the processing of API calls through a connection with the API processing system 102.

When the acceleration system 106 operates as an intermediary between the API processing system 102 and the client 104, the API call round-trip time is reduced for at least two reasons. First, in some embodiments, the acceleration system 106 is generally physically closer to the client 104 than the API processing system 102 is. Thus, any round-trip time required to establish a communication connection between the client 104 and the acceleration system 106 is short relative to a situation in which a connection must be established between the client 104 and the API processing system 102. Second, in some embodiments, because the acceleration system 106 services a large number of requests originating from multiple clients 104, the acceleration system 106 maintains persistent, already-established connections with the API processing system 102. Thus, a connection to the API processing system 102 need not be established for each API call.

FIG. 2 is an interaction diagram illustrating interactions between the components of FIG. 1, according to one embodiment of the invention. In particular, the client 104 and the acceleration system 106 perform 202 a Transmission Control Protocol (TCP) handshake. The TCP handshake is the mechanism by which the client 104 and acceleration system 106 negotiate and begin a TCP communication session for communicating with each other. The client 104 and the acceleration system 106 perform 204 a Transport Layer Security (TLS) handshake. The TLS handshake is the mechanism by which the client 104 and acceleration system 106 exchange the security keys required to establish a secure communication session.

Once the secure communication session is established, client 104 sends 206 a hypertext transfer protocol (HTTP) request over the established connection to process the given API call. The acceleration system 106 forwards 208 the HTTP request for processing the API call to the API processing system 102. In one embodiment, the acceleration system 106 manages multiple HTTP requests, which are in turn forwarded to the API processing system 102. To manage the transmission and/or sequencing of these requests, the acceleration system 106 multiplexes those requests using HTTP/2. The API processing system 102 processes the API call and sends 210 the processing result to the acceleration system 106 in the form of an HTTP response. The acceleration system 106 forwards 212 the HTTP response to the client 104.
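The forwarding flow of steps 206-212 can be sketched as follows. This is a minimal illustration rather than the disclosed implementation: the `Connection`, `ApiProcessingSystem`, and `AccelerationSystem` classes are hypothetical stand-ins, and a production system would multiplex real HTTP/2 streams over the persistent upstream connection.

```python
class Connection:
    """Stand-in for an established network connection to a peer."""
    def __init__(self, peer):
        self.peer = peer
    def send(self, request):
        # Stand-in for one HTTP request/response round trip to the peer.
        return self.peer.handle(request)

class ApiProcessingSystem:
    """Origin system where the API's programming functions live."""
    def handle(self, request):
        return {"status": 200, "body": f"processed {request['path']}"}

class AccelerationSystem:
    """Intermediary that relays client requests over one persistent,
    already-established connection to the API processing system."""
    def __init__(self, upstream):
        self.upstream = Connection(upstream)  # reused for all clients
    def handle(self, request):
        # Forward the client's HTTP request upstream, relay the response.
        return self.upstream.send(request)

origin = ApiProcessingSystem()
edge = AccelerationSystem(origin)
client_conn = Connection(edge)   # the client's short, nearby connection
resp = client_conn.send({"path": "/v1/titles"})
print(resp["status"])            # 200
```

The key property the sketch captures is that the client only ever pays handshake costs on its short hop to the edge; the edge-to-origin connection is shared and long-lived.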

The duration from the beginning of the TCP handshake to the receipt of the HTTP response by client 104 is the API call round trip time. In one embodiment, the API call round-trip time is lower than in an implementation in which the client 104 communicates directly with the API processing system 102. The API call round-trip time is low, in part because of the low latency of communications between the client 104 and the acceleration system 106 when performing TCP and TLS handshakes.

FIG. 3 illustrates a boot system environment 300 configured to implement one or more aspects of the present invention. System environment 300 includes API edge gateway 302, client 304, measurement system 306, and client bootstrap system 308.

The API edge gateways 302 include different systems that can be accessed by the client 304 to process a given API call. In the illustrated embodiment, the API edge gateways 302 include embedded acceleration systems 320 (each referred to as an "acceleration system 320"), internet exchange (IX) acceleration systems 322 (each referred to as an "acceleration system 322"), and the API processing system 102 of FIG. 1.

The embedded acceleration systems 320 and the IX acceleration systems 322 comprise many instances of the acceleration system 106 that are geographically distributed and, together with the API processing system 102, assist in the processing of API calls. Each embedded acceleration system 320 is an acceleration system 106 embedded within a network associated with an ISP. In one embodiment, because the acceleration system 320 is internal to the ISP, the acceleration system 320 is accessible only by clients associated with and/or subscribing to the ISP. Each IX acceleration system 322 is an acceleration system 106 that operates within or in association with an internet exchange point, independent of any ISP. An internet exchange point is the physical infrastructure through which ISPs and Content Delivery Networks (CDNs) exchange internet traffic.

Measurement system 306 monitors interactions between clients (e.g., client 304) and the API edge gateways 302 to measure the delays between different clients or client groups and the different API edge gateways 302. The client bootstrap system 308 boots a client (e.g., client 304) to one of the API edge gateways 302 (i.e., one of the embedded acceleration systems 320, one of the IX acceleration systems 322, or the API processing system 102) to process an API call based on the delays measured by the measurement system 306. In this manner, API calls from a client are processed by the API edge gateway 302 that, based on past delay measurements, is associated with the lowest delay for that client.

The following discussion provides details on how measurement system 306 measures the delay between client 304 and API edge gateway 302. This discussion also provides details on how client bootstrap system 308 uses the measured delay to bootstrap client 304 to the appropriate API edge gateway 302.

The client 304 includes a probing module 310 that enables the measurement system 306 to monitor interactions between the client 304 and the API edge gateways 302. The probing module 310 queries a monitoring API endpoint to request a list of Uniform Resource Locators (URLs) associated with the different API edge gateways 302 to be monitored. Each URL in the list has a given name that corresponds to the API edge gateway 302 associated with that URL. The response from the monitoring API endpoint includes the list of URLs and a set of parameters that control the measurement process. These parameters include a wait parameter that specifies the length of time the probing module 310 should wait before beginning another measurement process after completing a given measurement process. These parameters also include a pulse parameter specifying the number of requests to be performed for each provided URL during the measurement process, a pulse interval parameter specifying the length of time to wait between requests for a provided URL, and a pulse timeout parameter specifying the maximum length of time to wait for a request for a provided URL to complete. In one embodiment, the URLs provided in the list returned to the probing module 310 are associated with an expiration.
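The measurement schedule that these parameters describe can be sketched as follows. The function and parameter names are assumptions, `fetch` stands in for whatever HTTP client issues the probe requests, and the example URLs are invented; the wait parameter would govern an outer scheduling loop that is not shown.

```python
import time

# Example of the parameter set returned by the monitoring API endpoint
# (shape and values are illustrative assumptions).
config = {
    "wait": 300.0,           # seconds between measurement processes
    "pulses": 3,             # requests per URL per measurement process
    "pulse_interval": 1.0,   # seconds between requests to one URL
    "pulse_timeout": 10.0,   # max seconds to wait for one request
}
urls = ["https://edge-a.example/probe", "https://ix-b.example/probe"]

def run_measurement_process(urls, config, fetch):
    """Issue `pulses` requests to each URL, pausing `pulse_interval`
    seconds between pulses and bounding each by `pulse_timeout`."""
    results = []
    for url in urls:
        for pulse in range(config["pulses"]):
            results.append(fetch(url, timeout=config["pulse_timeout"]))
            if pulse < config["pulses"] - 1:
                time.sleep(config["pulse_interval"])
    return results
```

One measurement process therefore yields `len(urls) * pulses` measurement records, each of which the probing module 310 reports to the measurement system 306.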

During the measurement process, the probing module 310 collects a measurement data set associated with each request for a URL provided by the monitoring API endpoint. The measurement data includes, but is not limited to, the total duration of the request, the length of time it takes to establish the TCP connection, the length of time it takes to perform the TLS handshake, the length of time to resolve the hostname associated with the URL, the time to first byte (i.e., the time from the start of the request to the receipt of the first byte of the response), the HTTP status code associated with the response to the request, and the payload size received in response to the request. In addition to these parameters, the probing module 310 records any intermediary systems between the API endpoint associated with the URL and the client 304. These intermediary systems include the acceleration systems 320 and 322 or API hosting services. The probing module 310 sends the collected measurement data associated with each request issued during the measurement process to the measurement system 306.
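The per-request measurement record can be sketched as a simple structure whose fields mirror the list above; the field names themselves are assumptions, not the patent's wire format.

```python
from dataclasses import dataclass, field

@dataclass
class Measurement:
    """One probe request's measurement data, as collected by the probing module."""
    url: str
    total_ms: float           # total duration of the request
    tcp_connect_ms: float     # time to establish the TCP connection
    tls_handshake_ms: float   # time to perform the TLS handshake
    dns_resolve_ms: float     # time to resolve the URL's hostname
    ttfb_ms: float            # time to first byte of the response
    http_status: int          # HTTP status code of the response
    payload_bytes: int        # size of the response payload
    intermediaries: list = field(default_factory=list)  # systems traversed

m = Measurement(
    url="https://edge-a.example/probe",
    total_ms=52.4, tcp_connect_ms=11.0, tls_handshake_ms=21.3,
    dns_resolve_ms=4.8, ttfb_ms=44.1, http_status=200, payload_bytes=512,
    intermediaries=["acceleration-system-320"],
)
```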

In one embodiment, client 304 is configured with an HTTP keep-alive state so that subsequent requests for the same URL can reuse a connection after it is established. In such an embodiment, the measured durations for subsequent requests may be shorter than for the first request, during which the connection is first established. In one embodiment, the probing module 310 resets established connections between requests within the same measurement process and/or between requests across two measurement processes.

Measurement system 306 includes a mapping engine 312 and a measurement store 314. Measurement system 306 stores measurement data received from different clients, including client 304, in measurement storage 314 for further processing. The mapping engine 312 generates a mapping between the set of clients and one of the API edge gateways 302 (i.e., one of the embedded acceleration systems 320, one of the IX acceleration systems 322, or the API processing system 102). A given API edge gateway 302 is best suited to handle API calls issued by a set of clients based on delay criteria and reachability criteria.

With respect to the delay criteria, the mapping engine 312 takes into account the API call round trip time (also referred to as "delay") captured by the measurement data stored in the measurement storage 314. In one embodiment, the delay represents the total time it takes to complete a request or set of requests associated with a given URL during the measurement process. The time represented begins when a connection between the client and the API edge gateway 302 associated with the URL is initiated and ends when the client receives a response associated with the API call. In one embodiment, for a given set of clients, the mapping engine 312 scores each of the set of API edge gateways 302 based on measurement data stored in the measurement store 314. The score for a given API edge gateway 302 may be based on the median delay in processing API calls issued by a given client or set of clients.
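The median-delay scoring step can be sketched as follows. The sample delays and gateway labels are invented for illustration; lower scores are better.

```python
from collections import defaultdict
from statistics import median

def score_gateways(samples):
    """samples: iterable of (gateway, delay_ms) pairs.
    Returns {gateway: median delay} over all samples for that gateway."""
    by_gateway = defaultdict(list)
    for gateway, delay_ms in samples:
        by_gateway[gateway].append(delay_ms)
    return {gw: median(delays) for gw, delays in by_gateway.items()}

# Invented measurement samples for one client group:
samples = [("embedded-320", 18), ("embedded-320", 22), ("embedded-320", 90),
           ("ix-322", 35), ("ix-322", 41),
           ("api-origin", 120)]
scores = score_gateways(samples)
best = min(scores, key=scores.get)
print(best)   # embedded-320
```

Using the median rather than the mean keeps one slow outlier (the 90 ms sample above) from dominating the score.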

With respect to reachability criteria, the mapping engine 312 maps the set of clients to only those acceleration systems 320 or 322 that are accessible to those clients. As described above, because the embedded acceleration system 320 is internal to the ISP, the embedded acceleration system 320 is only accessible by clients associated with and/or subscribing to the ISP. Thus, the mapping engine 312 maps a set of clients to a given embedded acceleration system 320 only when the set of clients is associated with and/or subscribed to an ISP in which the embedded acceleration system 320 is embedded. Similarly, since the IX acceleration system 322 is internal to an internet exchange point, the IX acceleration system 322 is only accessible by clients that have access to the internet exchange point. Thus, mapping engine 312 maps a set of clients to a given IX acceleration system 322 only when the clients 304 have access to an internet exchange point that includes that IX acceleration system 322.

The mapping engine 312 generates gateway mappings based on the determined mappings between the set of clients and the respective API edge gateways 302 that are best suited to handle API calls issued by those set of clients. The mapping engine 312 sends the gateway mapping to the client boot system 308 to perform client boot in response to a boot request from the client. The gateway map stores key-gateway pairs, where the keys in the key-gateway pairs identify one or more clients and the gateways in the key-gateway pairs identify the API edge gateways 302 that are best suited to handle API calls made by the set of clients. In one embodiment, the key in a key-gateway pair is the IP address associated with a given client. The IP address associated with a given client is determined based on the measurement data stored in measurement storage 314.

In some cases, the bootstrap request received by client bootstrap system 308 does not include the IP address of the client, but instead includes the IP address of the resolver associated with the ISP through which the client accesses the internet. To enable the client bootstrap system 308 to use gateway mapping in this case, the key in the key-gateway pair should be the resolver IP associated with a set of clients that access the internet through the ISP associated with the resolver. Since the measurement data received from different clients specifies a client IP address instead of a resolver IP address, the mapping engine 312 implements a correlation technique to correlate the measurement data and the delay computed therefrom with the resolver IP address.

FIG. 4 is an interaction diagram of interactions between the components of FIG. 3 using unique identifiers, according to one embodiment of the invention. In particular, client 304 sends 402 a resolution request to resolver 400 that includes a hostname and a unique identifier associated with client 304. Resolver 400 is associated with the ISP through which client 304 accesses the internet. Resolver 400 resolves the hostname and thus redirects 404 client 304 to measurement system 306. In the redirection process, the request to measurement system 306 includes the IP address of the resolver. Measurement system 306 records 406 a relationship between the resolver IP address and the unique identifier in measurement storage 314.

Client 304 sends 412 an API call request to API edge gateway 302. The API call request includes a unique identifier and a client IP address associated with client 304 that is different from the resolver IP address. The API edge gateway 302 optionally processes or facilitates the processing of the API call and sends 414 an API response to the client 304. The API edge gateway 302 also records 416 in the measurement store 314 measurement data associated with the processing of the API call. The measurement data specifies a client IP address and a unique ID.

As described above, the mapping engine 312 processes the received measurement data to calculate the delay associated with processing the API call. Further, the mapping engine 312 determines that the delay is associated with the resolver IP address by matching the unique ID recorded in association with the resolver IP address with the unique ID specified by the recorded measurement data. In this way, a delay determined based on measurement data specifying a client IP address can be associated with a resolver IP address even though the measurement data does not include the resolver IP address.
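The unique-identifier correlation of FIG. 4 can be sketched as a join between the two record streams: (resolver IP, unique ID) pairs recorded at step 406, and (client IP, unique ID, delay) measurement records recorded at step 416. Record shapes and values here are invented for illustration.

```python
# Records written at step 406 (resolver IP <-> unique ID):
resolver_log = [
    {"resolver_ip": "203.0.113.53", "uid": "abc123"},
]
# Measurement records written at step 416 (client IP, unique ID, delay):
measurements = [
    {"client_ip": "198.51.100.7", "uid": "abc123", "delay_ms": 27.0},
    {"client_ip": "198.51.100.8", "uid": "zzz999", "delay_ms": 80.0},
]

def delays_by_resolver(resolver_log, measurements):
    """Attribute each measured delay to a resolver IP via the shared unique ID."""
    uid_to_resolver = {r["uid"]: r["resolver_ip"] for r in resolver_log}
    out = {}
    for m in measurements:
        resolver_ip = uid_to_resolver.get(m["uid"])
        if resolver_ip is not None:   # measurements with no match are dropped
            out.setdefault(resolver_ip, []).append(m["delay_ms"])
    return out

print(delays_by_resolver(resolver_log, measurements))
# {'203.0.113.53': [27.0]}
```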

Returning to FIG. 3, for a given API call, the client bootstrap system 308 directs the client 304 to one of the API edge gateways 302 (i.e., one of the embedded acceleration systems 320, one of the IX acceleration systems 322, or the API processing system 102) to process the API call. To perform this bootstrapping function, the client bootstrap system 308 includes a selection engine 316 and a gateway mapping 318 received from the measurement system 306.

The selection engine 316 receives bootstrap requests for API call processing from the client 304. For ease of discussion, the following describes how selection engine 316 handles a given bootstrap request received from client 304 and associated with a given API call. In one embodiment, the bootstrap request includes an Internet Protocol (IP) address associated with the client device. In an alternative embodiment, the bootstrap request includes the IP address of the resolver of the ISP through which the client device accesses the internet.

In response to a bootstrap request from client 304, selection engine 316 selects one of API edge gateways 302 for processing the API call. In particular, the selection engine 316 routes the client 304 to one of the embedded acceleration systems 320, one of the IX acceleration systems 322, or the API processing system 102. The selection engine 316 utilizes the gateway mapping 318 to identify an appropriate acceleration system from the embedded acceleration systems 320 and the IX acceleration systems 322 for processing the API call. In particular, the selection engine 316 matches the IP address included in the bootstrap request with the IP address in a key-gateway pair in the gateway mapping 318. Selection engine 316 then selects the gateway identified in the key-gateway pair as the API edge gateway 302 to which client 304 is directed. In one embodiment, if an appropriate acceleration system cannot be identified based on the selection criteria, selection engine 316 routes client 304 directly to API processing system 102 to process the API call.
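As a hypothetical sketch (the mapping structure, names, and fallback value are illustrative assumptions, not part of the described embodiment), the key-gateway lookup with direct routing as a fallback might look like:

```python
API_PROCESSING_SYSTEM = "api-processing-system"  # illustrative fallback gateway

def select_gateway(gateway_mapping, request_ip):
    """Match the IP address in a bootstrap request against key-gateway pairs.

    gateway_mapping: dict modeling the key-gateway pairs of gateway mapping
        318, keyed by client IP address (or resolver IP address), with the
        identified API edge gateway as the value.
    Returns the matched gateway, or falls back to the API processing system
    when no appropriate acceleration system can be identified.
    """
    return gateway_mapping.get(request_ip, API_PROCESSING_SYSTEM)
```

Here, returning the fallback value corresponds to the case in which no key-gateway pair matches and the client is routed directly to the API processing system.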

In one embodiment, in addition to the gateway mapping, the selection engine 316 monitors each of the embedded acceleration system 320 and the IX acceleration system 322 to determine the current load on the acceleration system. The selection engine 316 monitors various aspects of the acceleration system 320 or 322 to determine its current load. These aspects include, but are not limited to, the number of active sessions with the client and the number of API calls being assisted by the API processing system. In addition, the selection engine 316 can also monitor the amount of processing resources being used by the acceleration system 320 or 322, the amount of memory resources being used by the acceleration system 320 or 322, and the level of congestion on the communication channel between the acceleration system 320 or 322 and the API processing system 102.

As described above, each of the acceleration systems 320 and 322 operates as an intermediary between many different clients and the API processing system 102. Thus, the load on an acceleration system 320 or 322 varies depending on the number of API calls the acceleration system is assisting at any given time. When the determined load on an acceleration system 320 or 322 exceeds a certain threshold, the selection engine 316 may defer selecting that acceleration system to assist in processing further API calls until the load falls below the threshold.
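A minimal sketch of such a threshold check follows; the metric names and the rule that any single exceeded threshold defers selection are assumptions of this sketch:

```python
def is_eligible(load_metrics, thresholds):
    """Return True if an acceleration system may be selected for more API calls.

    load_metrics / thresholds: dicts keyed by illustrative metric names such
    as "active_sessions", "in_flight_api_calls", "cpu_util", "memory_util",
    and "channel_congestion". Selection is deferred while any monitored
    metric exceeds its threshold.
    """
    return all(load_metrics[name] <= limit for name, limit in thresholds.items())
```

The selection engine would evaluate this check against each candidate acceleration system before routing a client to it.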

FIG. 5 is a flowchart of method steps for bootstrapping a client to an API edge gateway, according to another embodiment of the invention. Although the method steps are described in conjunction with the systems of FIGS. 1 and 3, those skilled in the art will appreciate that any system configured to perform the method steps, in any order, is within the scope of the present invention.

Method 500 begins at step 502, where client bootstrap system 308 receives a bootstrap request from client 304 for processing an API call. The bootstrap request includes an Internet Protocol (IP) address associated with the client or with the resolver of the ISP through which the client accesses the internet.

At step 504, the bootstrap system 308 identifies a subset of acceleration systems accessible to the client. As noted above, in some cases an acceleration system is accessible only to clients that access the internet through the internet exchange point or ISP in which the acceleration system is embedded. Because the ability of client 304 to access an acceleration system is a prerequisite for that acceleration system to operate as an intermediary between client 304 and API processing system 102, bootstrap system 308 considers only those acceleration systems that are accessible to client 304.

At step 506, bootstrap system 308 determines measurement data associated with client 304 based on the gateway mapping received from measurement system 306. The measurement data represents the API call round trip time of a previous API call made by the client or by one or more other clients grouped with the given client. At step 508, the bootstrap system 308 selects an acceleration system from the subset identified at step 504 based on the measurement data. In particular, the bootstrap system 308 bases the selection on the previously measured delays.

At step 510, the bootstrap system 308 routes the client 304 to the selected acceleration system to process the API call. In response, the client 304 sends a request to process the API call to the selected acceleration system, and the acceleration system assists the API processing system 102 in processing the API call.
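The steps of method 500 can be sketched end to end as follows; the data structures, function names, and lowest-delay selection rule are illustrative assumptions of this sketch rather than limitations of the method:

```python
def bootstrap_client(request_ip, acceleration_systems, gateway_mapping):
    """Sketch of method 500: route a bootstrap request to an API edge gateway.

    acceleration_systems: dict of system name -> {"accessible_ips": set(...)}.
    gateway_mapping: dict of IP address -> list of
        {"gateway": name, "delay_ms": measured round trip delay} entries.
    """
    # Step 504: identify the subset of acceleration systems the client can reach.
    reachable = {
        name for name, info in acceleration_systems.items()
        if request_ip in info["accessible_ips"]
    }
    # Step 506: look up measurement data associated with the request's IP address.
    candidates = [
        entry for entry in gateway_mapping.get(request_ip, [])
        if entry["gateway"] in reachable
    ]
    # Fallback: route directly to the API processing system when no reachable
    # acceleration system has a mapping entry.
    if not candidates:
        return "api-processing-system"
    # Steps 508-510: select and route based on the previously measured delays.
    return min(candidates, key=lambda entry: entry["delay_ms"])["gateway"]
```

Note that reachability filtering is applied before delay-based selection, so a faster but inaccessible acceleration system is never chosen.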

In summary, the acceleration system operates as an intermediary between the API processing system and the client to reduce API call round-trip delay. The acceleration system includes a network of interconnected systems that are distributed around the globe, and each system operates as an intermediary between clients and API processing systems. A given client establishes a network connection with a given acceleration system and sends a request to process an API call over that connection. The programming functions associated with the API calls are configured in the API processing system. The acceleration system assists in the processing of the API calls through a previously established connection with the API processing system.

Advantageously, the round trip time for processing API calls is reduced when the acceleration system operates as an intermediary between the client device and the API processing system. In particular, any round trip time required to establish a communication connection between the client device and the acceleration system is short relative to the case where a connection needs to be established between the client device and the API processing system.

While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. For example, aspects of the present invention may be implemented in hardware or software, or in a combination of hardware and software. One embodiment of the invention may be implemented as a program product for use with a computer system. The program(s) of the program product define functions of the embodiments (including the methods described herein) and can be contained on a variety of computer-readable storage media. Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer such as CD-ROM disks readable by a CD-ROM drive, flash memory, ROM chips, or any type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., floppy disks within a diskette drive or hard-disk drive, or any type of solid-state random-access semiconductor memory) on which alterable information is stored. Such computer-readable storage media, when carrying computer-readable instructions that direct the functions of the present invention, are embodiments of the present invention.

In view of the foregoing, the scope of the invention is to be determined by the appended claims.
