Flow control method, device, equipment and storage medium

Document No.: 195932    Publication date: 2021-11-02

Note: This technology, "Flow control method, apparatus, device and storage medium", was designed and created by 蒋小波, 蒋宁, 曾琳铖曦, 吴海英 and 黄浩 on 2021-07-28. The main content is as follows. The application provides a flow control method, a flow control apparatus, a flow control device and a storage medium, wherein the method comprises: after receiving a service request, a load balancing device determines a target flow corresponding to the service request; the load balancing device determines, according to the sent service request flows respectively corresponding to a plurality of service servers, the service server with the minimum corresponding sent service request flow as a target service server; and the load balancing device sends the service request to the target service server. The method and the device can accurately control the flow of the service servers, and further realize load balancing of the service servers more accurately.

1. A flow control method is applied to a service system, wherein the service system comprises a load balancing device and a plurality of service servers, and the flow control method comprises the following steps:

after receiving a service request, the load balancing equipment determines a target flow corresponding to the service request;

the load balancing equipment determines a service server with the minimum corresponding sent service request flow as a target service server according to the sent service request flows corresponding to the service servers respectively, wherein the sent service request flow is a total flow corresponding to a service request sent to the service server within a preset time period;

and the load balancing equipment sends the service request to the target service server.

2. The flow control method according to claim 1, wherein the determining, by the load balancing device, the service server with the minimum corresponding sent service request flow as the target service server according to the sent service request flows respectively corresponding to the plurality of service servers comprises:

if it is monitored that a preset state change event occurs in the service system, the load balancing device acquires the sent service request flows respectively corresponding to the plurality of service servers, and the load balancing device determines, according to the sent service request flows respectively corresponding to the plurality of service servers, the service server with the minimum corresponding sent service request flow as the target service server, wherein the preset state change event at least comprises establishment of a service server or restart of a service server; or,

if the sent service request flow of one of the service servers changes, the load balancing device obtains the sent service request flows corresponding to the service servers respectively, and the load balancing device determines the service server with the minimum corresponding sent service request flow as the target service server according to the sent service request flows corresponding to the service servers respectively.

3. The flow control method according to claim 2, wherein, if it is monitored that a preset state change event occurs in the service system, the obtaining, by the load balancing device, the sent service request flows corresponding to the plurality of service servers respectively comprises:

if the service system is monitored to have a preset state change event, the load balancing equipment compares the service server information obtained according to a preset interface with the service server information running in the memory;

if a service server is a newly added service server, the load balancing device acquires the sent service request flow corresponding to the newly added service server by initializing preset parameters;

if a service server is an existing service server, the load balancing device imports the operation parameters in the corresponding memory into the existing service server, and acquires the sent service request flow corresponding to the existing service server.

4. The flow control method according to claim 3, wherein the preset parameters include a flow ratio, and the acquiring, by the load balancing device in the case of a newly added service server, the sent service request flow corresponding to the newly added service server by initializing the preset parameters comprises:

if the value of the flow ratio corresponding to the newly added service server is a first preset value, the sent service request flow corresponding to the newly added service server is the maximum value of the sent service request flows of all the service servers; or,

and if the value of the flow ratio corresponding to the newly added service server is a second preset value, the sent service request flow corresponding to the newly added service server is the sum of the sent service request flows corresponding to all the service servers.

5. The flow control method according to claim 2, wherein the determining, by the load balancing device, the service server with the minimum corresponding sent service request flow as the target service server according to the sent service request flows respectively corresponding to the plurality of service servers comprises:

if the sent service request flows corresponding to the plurality of service servers are all a third preset value, the load balancing device determines one service server randomly selected from the plurality of service servers as the target service server; or,

and if the sent service request flows corresponding to the plurality of service servers are not all the third preset value, the load balancing device determines the service server with the minimum corresponding sent service request flow as the target service server.

6. The flow control method according to any one of claims 1 to 5, wherein after the load balancing device determines, among the plurality of service servers, the service server with the minimum corresponding sent service request flow as the target service server, the method further comprises:

the load balancing equipment stores the identification of the target service server to a forwarding queue;

the sending, by the load balancing device, the service request to the target service server includes:

and the load balancing equipment acquires the identifier of the target service server from the forwarding queue and sends the service request to the target service server according to the identifier of the target service server.

7. The flow control method according to any one of claims 1 to 5, characterized by further comprising:

and if a service server is configured with a flow control rule, the load balancing device determines the sent service request flow of the corresponding service server according to the flow control rule.

8. The flow control method according to claim 7, wherein the flow control rule is a flow control rule of a slow-start service server, and the determining, by the load balancing device when the service server is configured with the flow control rule, the sent service request flow of the corresponding service server according to the flow control rule comprises:

if the service server is configured with the flow control rule of the slow-start service server, the load balancing device controls the service server to realize slow start according to the flow control rule of the slow-start service server, and determines the sent service request flow of the corresponding service server.

9. The flow control method according to claim 8, characterized by further comprising:

if a service server is configured with a flow control rule of a slow-start service server and the current flow of the service server indicates a speed-limited flow, determining the sent service request flow of the service server as the current flow; or,

if the service server is configured with a flow control rule of the slow-start service server, and the value of the flow ratio indicates the speed-limited flow, determining the sent service request flow of the service server as the sum of the sent service request flows corresponding to the service servers which are not speed limited, or determining the sent service request flow of the service server as the sum of the sent service request flows corresponding to all the service servers.

10. A flow control apparatus applied to a service system including a load balancing device and a plurality of service servers, the flow control apparatus comprising:

the first determining module is used for determining a target flow corresponding to a service request after receiving the service request;

a second determining module, configured to determine, according to sent service request traffic corresponding to each of the plurality of service servers, that a corresponding service server with the smallest sent service request traffic is a target service server, where the sent service request traffic is a total traffic corresponding to a service request sent to the service server within a preset time period;

and the processing module is used for sending the service request to the target service server.

11. An electronic device, comprising: a memory and a processor;

the memory is to store program instructions;

the processor is configured to call program instructions in the memory to perform the flow control method of any of claims 1 to 9.

12. A computer-readable storage medium having computer program instructions stored therein which, when executed, implement a flow control method as claimed in any one of claims 1 to 9.

13. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, carries out the flow control method according to any one of claims 1 to 9.

Technical Field

The present application relates to the technical field of cloud platforms, and in particular, to a flow control method, apparatus, device, and storage medium.

Background

The container orchestration engine (Kubernetes, k8s) is an open-source system for managing containerized applications on multiple hosts in a cloud platform. The goal of k8s is to make deploying containerized applications simple and efficient, and k8s provides mechanisms for application deployment, planning, updating, and maintenance.

Currently, in the k8s environment, the number of forwarded requests is generally allocated according to the weight value of each service server so as to achieve load balancing. However, this method cannot accurately realize load balancing of the service servers.

Disclosure of Invention

The application provides a flow control method, a flow control device, flow control equipment and a storage medium, so that flow control is accurately performed on a service server, and further load balancing of the service server is accurately achieved.

In a first aspect, the present application provides a flow control method, which is applied to a service system, where the service system includes a load balancing device and a plurality of service servers, and the flow control method includes:

after receiving a service request, the load balancing equipment determines a target flow corresponding to the service request;

the load balancing equipment determines a corresponding service server with the minimum sent service request flow as a target service server according to the sent service request flows corresponding to the service servers respectively, wherein the sent service request flow is a total flow corresponding to a service request sent to the service server within a preset time period;

and the load balancing equipment sends a service request to the target service server.

Optionally, the determining, by the load balancing device, the service server with the minimum sent service request traffic as the target service server according to the sent service request traffic corresponding to each of the plurality of service servers includes: if a preset state change event of the service system is monitored, the load balancing equipment acquires sent service request flows corresponding to the plurality of service servers respectively, the load balancing equipment determines the service server with the minimum sent service request flow as a target service server according to the sent service request flows corresponding to the plurality of service servers respectively, and the preset state change event at least comprises the establishment of the service server or the restarting of the service server; or, if the sent service request traffic of one of the service servers changes, the load balancing device obtains the sent service request traffic corresponding to each of the service servers, and the load balancing device determines the service server with the minimum sent service request traffic as the target service server according to the sent service request traffic corresponding to each of the service servers.

Optionally, if it is monitored that a preset state change event occurs in the service system, the load balancing device obtains sent service request flows corresponding to the plurality of service servers, respectively, where the method includes: if a preset state change event of the service system is monitored, the load balancing equipment compares the service server information obtained according to a preset interface with the service server information running in the memory; if the new service server is added, the load balancing equipment acquires the sent service request flow corresponding to the new service server by initializing preset parameters; if the service server is the existing service server, the load balancing device imports the operation parameters in the corresponding memory into the existing service server, and acquires the sent service request flow corresponding to the existing service server.

Optionally, the preset parameter includes a flow ratio, and if the new service server is found, the load balancing device obtains a sent service request flow corresponding to the new service server by initializing the preset parameter, including: if the value of the flow proportion corresponding to the newly added service server is the first preset value, the sent service request flow corresponding to the newly added service server is the maximum value of the sent service request flows of all the service servers; or, if the value of the traffic proportion corresponding to the newly added service server is the second preset value, the sent service request traffic corresponding to the newly added service server is the sum of the sent service request traffic corresponding to all the service servers.

Optionally, the determining, by the load balancing device, the service server with the minimum sent service request traffic as the target service server according to the sent service request traffic corresponding to each of the plurality of service servers includes: if the sent service request flows corresponding to the service servers are all the third preset values, the load balancing equipment determines that one service server randomly selected from the service servers is a target service server; or, if the traffic of the sent service request corresponding to each of the plurality of service servers is not all equal to the third preset value, the load balancing device determines the service server corresponding to the minimum traffic of the sent service request as the target service server.

Optionally, after determining, by the load balancing device, that the corresponding service server with the minimum sent service request traffic is the target service server, the method further includes: the load balancing equipment stores the identification of the target service server to a forwarding queue; the load balancing equipment sends a service request to a target service server, and the method comprises the following steps: and the load balancing equipment acquires the identifier of the target service server from the forwarding queue and sends a service request to the target service server according to the identifier of the target service server.

Optionally, the flow control method further includes: if a service server is configured with a flow control rule, the load balancing device determines the sent service request flow of the corresponding service server according to the flow control rule.

Optionally, the flow control rule is a flow control rule of a slow-start service server, and if the flow control rule is configured for the service server, the load balancing device determines the sent service request flow of the corresponding service server according to the flow control rule, including: if the service server is configured with the flow control rule of the slow-start service server, the load balancing device controls the service server to realize slow start according to the flow control rule of the slow-start service server, and determines the sent service request flow of the corresponding service server.

Optionally, the flow control method further includes: if the service server is configured with a flow control rule of the slow-start service server and the current flow of the service server indicates the speed-limited flow, determining the sent service request flow of the service server as the current flow; or, if the service server is configured with a flow control rule of the slow-start service server, and the value of the flow ratio indicates the speed-limited flow, determining the sent service request flow of the service server as the sum of the sent service request flows corresponding to the service servers which are not speed limited, or determining the sent service request flow of the service server as the sum of the sent service request flows corresponding to all the service servers.

In a second aspect, the present application provides a flow control device, which is applied to a service system, where the service system includes a load balancing device and a plurality of service servers, and the flow control device includes:

the first determining module is used for determining the target flow corresponding to the service request after receiving the service request;

the second determining module is used for determining the corresponding service server with the minimum sent service request flow as a target service server according to the sent service request flows corresponding to the service servers respectively, wherein the sent service request flow is a total flow corresponding to a service request sent to the service server within a preset time period;

and the processing module is used for sending the service request to the target service server.

Optionally, the second determining module is specifically configured to: if a preset state change event of the service system is monitored, acquiring sent service request flows corresponding to the plurality of service servers respectively, determining the service server with the minimum corresponding sent service request flow as a target service server by the load balancing equipment according to the sent service request flows corresponding to the plurality of service servers respectively, wherein the preset state change event at least comprises the establishment of the service server or the restarting of the service server; or, if the sent service request traffic of one of the service servers changes, the sent service request traffic corresponding to each of the service servers is obtained, and the load balancing device determines the service server with the minimum sent service request traffic as the target service server according to the sent service request traffic corresponding to each of the service servers.

Optionally, when acquiring the sent service request flows respectively corresponding to the plurality of service servers in the case that a preset state change event of the service system is monitored, the second determining module is specifically configured to: if it is monitored that a preset state change event occurs in the service system, compare the service server information obtained according to a preset interface with the service server information running in the memory; if a service server is a newly added service server, acquire the sent service request flow corresponding to the newly added service server by initializing preset parameters; and if a service server is an existing service server, import the operation parameters in the corresponding memory into the existing service server, and acquire the sent service request flow corresponding to the existing service server.

Optionally, the preset parameters include a flow ratio, and when acquiring, for a newly added service server, the sent service request flow corresponding to the newly added service server by initializing the preset parameters, the second determining module is specifically configured to: if the value of the flow ratio corresponding to the newly added service server is the first preset value, determine the sent service request flow corresponding to the newly added service server as the maximum value of the sent service request flows of all the service servers; or, if the value of the flow ratio corresponding to the newly added service server is the second preset value, determine the sent service request flow corresponding to the newly added service server as the sum of the sent service request flows corresponding to all the service servers.

Optionally, the second determining module, when being configured to determine, according to the sent service request traffic corresponding to each of the plurality of service servers, that the service server corresponding to the sent service request traffic with the minimum sent service request traffic is the target service server, is specifically configured to: if the sent service request flows corresponding to the service servers are all the third preset values, determining that one service server randomly selected from the service servers is a target service server; or, if the sent service request traffic corresponding to each of the plurality of service servers is not all the third preset value, determining the service server with the minimum sent service request traffic as the target service server.

Optionally, after determining, in the multiple service servers, that the corresponding service server with the minimum sent service request traffic is the target service server, the second determining module is further configured to: storing the identification of the target service server to a forwarding queue; the processing module is specifically configured to: and acquiring the identifier of the target service server from the forwarding queue, and sending a service request to the target service server according to the identifier of the target service server.

Optionally, the second determining module is further configured to: and if the traffic server is configured with the traffic control rule, determining the sent traffic request flow of the corresponding traffic server according to the traffic control rule.

Optionally, the flow control rule is a flow control rule of a slow-start service server, and when determining, according to the flow control rule, the sent service request flow of the corresponding service server in the case that the service server is configured with the flow control rule, the second determining module is specifically configured to: if the service server is configured with the flow control rule of the slow-start service server, control the service server to realize slow start according to the flow control rule of the slow-start service server, and determine the sent service request flow of the corresponding service server.

Optionally, the second determining module is further configured to: if the service server is configured with a flow control rule of the slow-start service server and the current flow of the service server indicates the speed-limited flow, determine the sent service request flow of the service server as the current flow; or, if the service server is configured with a flow control rule of the slow-start service server, and the value of the flow ratio indicates the speed-limited flow, determine the sent service request flow of the service server as the sum of the sent service request flows corresponding to the service servers which are not speed limited, or determine the sent service request flow of the service server as the sum of the sent service request flows corresponding to all the service servers.

In a third aspect, the present application provides an electronic device, comprising: a memory and a processor;

the memory is used for storing program instructions;

the processor is configured to call program instructions in the memory to perform the flow control method according to the first aspect of the present application.

In a fourth aspect, the present application provides a computer-readable storage medium having computer program instructions stored therein, which when executed, implement the flow control method according to the first aspect of the present application.

In a fifth aspect, the present application provides a computer program product comprising a computer program which, when executed by a processor, implements a flow control method as described in the first aspect of the present application.

According to the flow control method, the flow control device, the flow control equipment and the storage medium, after receiving the service request, the load balancing equipment determines the target flow corresponding to the service request, determines the service server with the minimum corresponding sent service request flow as the target service server according to the sent service request flows corresponding to the plurality of service servers respectively, and sends the service request to the target service server. According to the method, the load balancing equipment determines the service server with the minimum corresponding sent service request flow as the target service server according to the sent service request flow corresponding to each service server, and sends the service request to the target service server, so that the flow control can be accurately carried out on the service servers, and the load balancing of the service servers is more accurately realized.

Drawings

In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.

Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present application;

fig. 2 is a flowchart of a flow control method according to an embodiment of the present application;

fig. 3 is a flowchart of a flow control method according to another embodiment of the present application;

FIG. 4 is a schematic structural diagram of a flow control device according to an embodiment of the present disclosure;

FIG. 5 is a schematic view of a flow control device according to another embodiment of the present application;

FIG. 6 is a schematic diagram of a flow control system provided in accordance with an embodiment of the present application;

fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.

Detailed Description

In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

First, some technical terms related to the present application are explained:

k8s, namely a container cluster management system, is an open-source platform that can realize functions such as automatic deployment, automatic scaling, and maintenance of a container cluster.

An application instance (pod), abbreviated as pod, is the smallest and simplest basic unit created or deployed by k8s and represents a process running on the k8s cluster; each pod contains Internet Protocol (IP) address information.

Load balancer (HAProxy): free, open-source software written in the C language that provides high availability, load balancing, and application proxying based on the Transmission Control Protocol (TCP) and the Hypertext Transfer Protocol (HTTP).

Flow size (flow_size), used to indicate the total flow size of the pod.

Flow ratio (flow_ratio), used to indicate the proportion of traffic entering the pod; the default value is 100%. If 0 < flow_ratio < 100%, the traffic is rate limited; if flow_ratio is 0, the pod is in a closed state, that is, it does not receive traffic.

Slow start time (slow_start_time), used to indicate the slow-start countdown, in seconds.

Flow recovery policy (flow_recovery_policy), used to indicate how flow_ratio is adjusted when the slow-start countdown reaches 0 (0 indicates that the slow-start time has ended, at which point it is determined whether the traffic of the controlled pod needs to be restored to 100%, that is, to a normal traffic node). A flow_recovery_policy value of 0 indicates that flow_ratio is adjusted manually, and a flow_recovery_policy value of 1 indicates that flow_ratio is adjusted automatically (that is, flow_ratio is automatically restored to 100%).

Slow start state (slow_start_status), indicating whether slow start is turned on: 0 indicates on, 1 indicates off (off by default); in the slow-start off state, the slow_start_time and flow_recovery_policy parameters do not take effect.
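To make the relationship between these parameters easier to follow, they can be pictured as a small per-pod configuration record. The following Python sketch is illustrative only; the field names mirror the parameters defined above, and the stated defaults follow those definitions (flow_ratio defaults to 100%, slow start is off by default):

    from dataclasses import dataclass

    @dataclass
    class PodFlowConfig:
        flow_size: int = 0              # total traffic already sent to the pod
        flow_ratio: float = 1.0         # proportion of traffic admitted; 1.0 = 100%, 0 = closed
        slow_start_time: int = 0        # slow-start countdown, in seconds
        flow_recovery_policy: int = 0   # 0 = adjust flow_ratio manually, 1 = restore to 100% automatically
        slow_start_status: int = 1      # 0 = slow start on, 1 = slow start off (default)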

In a multi-instance, multi-application business service environment, a load balancer, as the unified entrance of business services, is one of the indispensable components in both physical and container architecture environments. Widely used load balancers include HAProxy and Nginx.

Currently, in the k8s environment, the number of forwarded requests is generally allocated according to the weight value of each service server so as to achieve load balancing. However, in this way of implementing load balancing, the weight values of the service servers are configured in advance, that is, the ratio of the number of requests assigned to each service server is fixed and cannot be automatically adjusted according to the traffic size corresponding to each service request, and therefore load balancing cannot be accurately implemented. Illustratively, the weight values of service server 1 and service server 2 are both 5, service server 1 and service server 2 are configured in Nginx, and service requests are forwarded to service server 1 and service server 2 in a polling manner; however, the traffic of each service request is different, which results in different traffic of the service requests processed by service server 1 and service server 2. Since the core purpose of load balancing is to balance traffic, the above manner cannot accurately achieve load balancing of the service servers. Also, in the k8s environment, when applying slow-start traffic warm-up, it is not possible to control different traffic proportions to a given service server in different time periods. Illustratively, the Nginx polling algorithm provides a slow start (slow_start) parameter for applying slow-start traffic warm-up; the Nginx module implementing the slow_start parameter is ngx_http_upstream_module, and the specific Nginx configuration corresponding to the slow_start parameter is as follows:
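The configuration listing itself is not reproduced in this text. The snippet below is an illustrative reconstruction consistent with the description that follows; the upstream name and server addresses are assumptions, and slow_start is a parameter of the server directive of ngx_http_upstream_module (available in the commercial NGINX Plus distribution):

    upstream backend {
        server 192.0.2.11:8080 weight=5 slow_start=30s;
        server 192.0.2.12:8080 weight=5;
    }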

Here, slow_start is 30s, which means that the weight is restored from 0 to 5 within 30 seconds.

The specific Nginx configuration corresponding to the slow_start parameter applies slow-start traffic warm-up based on weight, and therefore different traffic proportions cannot be controlled to a specified service server in different time periods.

Based on the above problems, the present application provides a flow control method, apparatus, device, and storage medium, which implement load balancing of service servers according to sent service request flows corresponding to a plurality of service servers, and therefore can implement load balancing of the service servers more accurately.

First, an application scenario of the solution provided in the present application will be described below.

Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present application. As shown in fig. 1, in the application scenario, the client 110 sends a service request to the load balancing device 120, and the load balancing device 120 determines a target service server from the plurality of service servers 130 and sends the service request to the target service server. The specific implementation process of the load balancing device 120 for determining the target service server from the plurality of service servers 130 may refer to the schemes of the following embodiments.

It should be noted that fig. 1 is only a schematic diagram of an application scenario provided in this embodiment, and this embodiment of the present application does not limit the devices included in fig. 1, and also does not limit the positional relationship between the devices in fig. 1.

Next, a flow rate control method will be described by way of specific embodiments. In the following application embodiments, a service server is taken as a pod as an example for explanation.

Fig. 2 is a flowchart of a flow control method provided in an embodiment of the present application, and is applied to a service system, where the service system includes a load balancing device and a plurality of service servers. As shown in fig. 2, the method of the embodiment of the present application includes:

s201, after receiving a service request, a load balancing device determines a target flow corresponding to the service request.

In the embodiment of the present application, the load balancing device may be, for example, a load balancer, and the load balancer may operate in a separate server. The service request may be input by a user to the load balancing device executing the embodiment of the method, or may be sent by another device to the load balancing device executing the embodiment of the method. Illustratively, the service request is an HTTP request, and after receiving the HTTP request, the load balancing device may determine a target traffic corresponding to the service request according to a body data size corresponding to an HTTP protocol, where the target traffic is, for example, 1M.
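As an illustration of how the target traffic might be derived from the HTTP request, the sketch below simply reads the declared body size from the Content-Length header; the function name and the fallback to 0 are assumptions for illustration, not part of the original method:

    def target_traffic_bytes(headers: dict) -> int:
        # Use the declared body size as the target traffic; fall back to 0 if absent or malformed.
        try:
            return int(headers.get("Content-Length", 0))
        except (TypeError, ValueError):
            return 0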

S202, the load balancing equipment determines the corresponding service server with the minimum sent service request flow as a target service server according to the sent service request flows corresponding to the service servers respectively.

The sent service request flow is a total flow corresponding to the service request sent to the service server in a preset time period. The plurality of servers are all servers in the service system; each server has corresponding sent traffic.

Illustratively, the service server is a pod, and the preset time period is, for example, the period from the moment the pod starts running to the current time. The load balancing device may determine, according to the sent service request traffic corresponding to each pod among the multiple pods, the pod with the minimum corresponding sent service request traffic as the target pod. For how the load balancing device determines the pod with the minimum corresponding sent service request traffic as the target pod, reference may be made to the following embodiments. For example, if there are three pods, pod1, pod2 and pod3, and the load balancing device determines that the sent service request traffic corresponding to pod1 is 10M, the sent service request traffic corresponding to pod2 is 8M, and the sent service request traffic corresponding to pod3 is 9M, the load balancing device may determine that pod2 is the target pod.

S203, the load balancing device sends a service request to the target service server.

In this step, after determining the target service server, the load balancing device sends the service request to the target service server. Optionally, the load balancing device may update the sent service request traffic corresponding to the target service server according to the target traffic. Illustratively, if the target service server is pod2 and the target traffic corresponding to the service request is 1M, the load balancing device sends the service request to pod2 and updates the sent service request traffic corresponding to the target service server according to the target traffic of 1M.
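A minimal sketch of steps S202 and S203, assuming the accumulated sent traffic of each pod is kept in a dictionary keyed by pod identifier (the names and byte-based units are illustrative):

    MB = 1024 * 1024

    def forward_request(sent_traffic: dict, target_traffic: int) -> str:
        # S202: pick the pod whose accumulated sent traffic is smallest.
        target = min(sent_traffic, key=sent_traffic.get)
        # S203: after forwarding, add this request's target traffic to the pod's total.
        sent_traffic[target] += target_traffic
        return target

    pods = {"pod1": 10 * MB, "pod2": 8 * MB, "pod3": 9 * MB}
    assert forward_request(pods, 1 * MB) == "pod2"   # pod2 holds the minimum, as in the example above
    assert pods["pod2"] == 9 * MB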

After the load balancing device executes step S203, the load balancing device repeatedly executes steps S201 to S203 for the received new service request, thereby implementing traffic-based load balancing.

In the traffic control method provided by the embodiment of the application, after receiving a service request, a load balancing device determines a target traffic corresponding to the service request, the load balancing device determines a service server with the minimum traffic of the corresponding sent service request as the target service server in a plurality of service servers, and the load balancing device sends the service request to the target service server. The load balancing device of the embodiment of the application determines the service server with the minimum corresponding sent service request flow as the target service server according to the sent service request flow corresponding to each service server, and sends the service request to the target service server, so that the flow control can be accurately performed on the service server, and the load balancing of the service server is further accurately realized.

On the basis of the foregoing embodiment, optionally, if the traffic server configures the flow control rule, the load balancing device determines the sent traffic request flow of the corresponding traffic server according to the flow control rule.

Illustratively, the service server is a pod, each pod corresponds to a configuration file, and the parameters configured in the configuration file may include: flow_ratio, slow_start_time, flow_recovery_policy and slow_start_status. The parameter settings in the configuration files corresponding to the respective pods may be the same or different, that is, the parameters can be configured as required. It can be understood that the parameter settings in the configuration file may correspond to different flow control rules. If a pod is configured with a flow control rule, the load balancing device determines the sent service request traffic of the corresponding pod according to the flow control rule.

Further, the flow control rule is a flow control rule of a slow-start service server, and if the flow control rule is configured for the service server, the load balancing device determines the sent service request flow of the corresponding service server according to the flow control rule, which may include: if the service server is configured with the flow control rule of the slow-start service server, the load balancing device controls the service server to realize slow start according to the flow control rule of the slow-start service server, and determines the sent service request flow of the corresponding service server.

Optionally, if the service server is configured with a flow control rule of the slow-start service server and the current traffic of the service server indicates rate-limited traffic, the sent service request traffic of the service server is determined as the current traffic; or, if the service server is configured with a flow control rule of the slow-start service server and the value of the flow ratio indicates rate-limited traffic, the sent service request traffic of the service server is determined as the sum of the sent service request traffic corresponding to the service servers that are not rate limited, or as the sum of the sent service request traffic corresponding to all the service servers. Specifically, when the value of the flow ratio is one hundred percent, the sent service request traffic of the service server is the sum of the sent service request traffic corresponding to all the other service servers that are not rate limited; when the value of the flow ratio is greater than zero and less than one hundred percent, the sent service request traffic of the service server is the sum of the sent service request traffic corresponding to all the service servers.

For example, if slow_start_status is configured to 0 in the configuration file corresponding to a pod, it indicates that slow-start traffic warm-up control is enabled (that is, the flow control rule is a slow-start service server flow control rule and rate-limited traffic is indicated); the pod may be controlled to implement slow start according to the parameter settings in the configuration file corresponding to the pod, and the sent service request traffic (that is, flow_size) of the pod is determined. Specifically, flow_size(0<flow_ratio<100%) denotes the flow_size corresponding to the pod under the rate-limited condition; Sum(flow_ratio=100%) denotes the sum of the sent service request traffic corresponding to the other pods that are not rate limited; and tmp_Sum(0<flow_ratio<100%) denotes the sum of the sent service request traffic corresponding to all pods under the rate-limited condition. If the parameters in the configuration file corresponding to the pod are set as slow_start_status = 0, slow_start_time > 0, flow_recovery_policy = 1 and 0 < flow_ratio < 100%, the sent service request traffic flow_size(0<flow_ratio<100%) of the pod can be determined in the following three ways:

(1) if tmp_Sum(0<flow_ratio<100%) and flow_size(0<flow_ratio<100%) are equal, the value of flow_size(0<flow_ratio<100%) is kept as the current sent service request traffic of the pod, that is, it remains unchanged;

(2) if flow_size(0<flow_ratio<100%) is less than tmp_Sum(0<flow_ratio<100%), the values of flow_size(0<flow_ratio<100%) and tmp_Sum(0<flow_ratio<100%) are set to the minimum of the sent service request traffic currently recorded for all pods;

(3) if tmp_Sum(0<flow_ratio<100%) is less than flow_size(0<flow_ratio<100%), the values of flow_size(0<flow_ratio<100%) and tmp_Sum(0<flow_ratio<100%) are both set to Sum(flow_ratio=100%).

When the slow_start_time countdown reaches 0, flow_ratio is automatically set to 100%; at this point the flow_size value corresponding to the pod is set to the maximum of the sent service request traffic of all current pods, and slow_start_status is set to 1, indicating that the slow-start traffic warm-up control has ended.

If the parameters in the configuration file corresponding to the pod are set as slow_start_status = 0, slow_start_time = 0, flow_recovery_policy = 0 and 0 < flow_ratio < 100%, the sent service request traffic flow_size(0<flow_ratio<100%) of the pod can likewise be determined in the following three ways:

(1) if tmp_Sum(0<flow_ratio<100%) and flow_size(0<flow_ratio<100%) are equal, the value of flow_size(0<flow_ratio<100%) is kept as the current sent service request traffic of the pod, that is, it remains unchanged;

(2) if flow_size(0<flow_ratio<100%) is less than tmp_Sum(0<flow_ratio<100%), the values of flow_size(0<flow_ratio<100%) and tmp_Sum(0<flow_ratio<100%) are set to the minimum of the sent service request traffic currently recorded for all pods;

(3) if tmp_Sum(0<flow_ratio<100%) is less than flow_size(0<flow_ratio<100%), the values of flow_size(0<flow_ratio<100%) and tmp_Sum(0<flow_ratio<100%) are both set to Sum(flow_ratio=100%).

If the parameters in the configuration file corresponding to the pod are set as slow_start_status = 1 (that is, application slow start is turned off, in which case neither slow_start_time nor flow_recovery_policy takes effect) and 0 < flow_ratio < 100%, the sent service request traffic flow_size(0<flow_ratio<100%) of the pod can also be determined in the following three ways:

(1) if tmp_Sum(0<flow_ratio<100%) and flow_size(0<flow_ratio<100%) are equal, the value of flow_size(0<flow_ratio<100%) is kept as the current sent service request traffic of the pod, that is, it remains unchanged;

(2) if flow_size(0<flow_ratio<100%) is less than tmp_Sum(0<flow_ratio<100%), the values of flow_size(0<flow_ratio<100%) and tmp_Sum(0<flow_ratio<100%) are set to the minimum of the sent service request traffic currently recorded for all pods;

(3) if tmp_Sum(0<flow_ratio<100%) is less than flow_size(0<flow_ratio<100%), the values of flow_size(0<flow_ratio<100%) and tmp_Sum(0<flow_ratio<100%) are both set to Sum(flow_ratio=100%).

Through parameter setting (i.e., flow control rule) in the configuration file corresponding to the service server, the load balancing device may determine the sent service request flow of the corresponding service server, so that the load balancing device determines, in the plurality of service servers, the service server with the minimum sent service request flow as the target service server. Therefore, different flow proportions can be controlled to the specified service server in different time periods, and the flow preheating control of the application slow start is realized.
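The three-way adjustment described above can be summarised in code. The sketch below is only one interpretation of the rule for a single rate-limited pod, assuming the relevant quantities (the pod's flow_size, tmp_Sum over all pods, Sum over the non-rate-limited pods, and the list of all per-pod flow_size values) are maintained elsewhere; it is not an authoritative implementation:

    def adjust_rate_limited_flow_size(flow_size, tmp_sum, sum_full, all_flow_sizes):
        """Return the adjusted (flow_size, tmp_sum) for a rate-limited pod."""
        if tmp_sum == flow_size:
            # Case (1): equal values, keep the current sent traffic unchanged.
            return flow_size, tmp_sum
        if flow_size < tmp_sum:
            # Case (2): both take the minimum sent traffic among all pods.
            m = min(all_flow_sizes)
            return m, m
        # Case (3): tmp_sum < flow_size, both take the sum over the non-rate-limited pods.
        return sum_full, sum_full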

Fig. 3 is a flowchart of a flow control method according to another embodiment of the present application. On the basis of the above embodiments, the embodiments of the present application further describe how to implement load balancing. As shown in fig. 3, the method of the embodiment of the present application may include:

s301, after receiving the service request, the load balancing device determines a target flow corresponding to the service request.

For a detailed description of this step, reference may be made to the description related to S201 in the embodiment shown in fig. 2, and details are not described here.

In the embodiment of the present application, the step S202 in fig. 2 may be further refined into two steps S302 and S303 as follows:

s302, if a preset state change event of the service system is monitored, the load balancing equipment acquires the sent service request flow corresponding to each of the plurality of service servers, and the load balancing equipment determines the service server with the minimum corresponding sent service request flow as a target service server according to the sent service request flow corresponding to each of the plurality of service servers.

The preset state change event at least includes the establishment of a service server or the restart of a service server. Illustratively, the service server is a pod; in the k8s environment, the load balancing device monitors whether the value of the default resourceVersion parameter of the k8s orchestration file changes, and if the value of resourceVersion changes, indicating that a new pod may have been created or a pod may have been restarted, the load balancing device obtains the sent service request traffic corresponding to each of the plurality of pods. After the load balancing device obtains the sent service request traffic corresponding to each of the plurality of service servers, it may determine, according to the sent service request traffic corresponding to each of the plurality of service servers, the service server with the minimum corresponding sent service request traffic as the target service server.
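One possible way to monitor such state change events is to watch pod events through the k8s API. The sketch below uses the official Kubernetes Python client and is an assumption about how the monitoring could be wired up, not something taken from the original text:

    from kubernetes import client, config, watch

    def watch_pod_changes(namespace: str = "default"):
        config.load_kube_config()   # use config.load_incluster_config() when running inside the cluster
        v1 = client.CoreV1Api()
        w = watch.Watch()
        for event in w.stream(v1.list_namespaced_pod, namespace=namespace):
            # ADDED / MODIFIED / DELETED events accompany pod creation, restart or removal,
            # each carrying a new resourceVersion.
            pod = event["object"]
            print(event["type"], pod.metadata.name, pod.metadata.resource_version)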

Further, if it is monitored that a preset state change event occurs in the service system, the acquiring, by the load balancing device, sent service request flows corresponding to the plurality of service servers, may include: if a preset state change event of the service system is monitored, the load balancing equipment compares the service server information obtained according to a preset interface with the service server information running in the memory; if the new service server is added, the load balancing equipment acquires the sent service request flow corresponding to the new service server by initializing preset parameters; if the service server is the existing service server, the load balancing device imports the operation parameters in the corresponding memory into the existing service server, and acquires the sent service request flow corresponding to the existing service server.

Optionally, the preset parameter includes a flow ratio, and if the new service server is determined, the load balancing device obtains the sent service request flow corresponding to the new service server by initializing the preset parameter, and may further include: if the value of the flow proportion corresponding to the newly added service server is the first preset value, the sent service request flow corresponding to the newly added service server is the maximum value of the sent service request flows of all the service servers; or, if the value of the traffic proportion corresponding to the newly added service server is the second preset value, the sent service request traffic corresponding to the newly added service server is the sum of the sent service request traffic corresponding to all the service servers. Specifically, the first preset value is, for example, 100%; the second preset value is, for example, a value between 0 and 100%, excluding 0 and 100%, and at this time, only the service server corresponding to the traffic proportion in the current service system is in the activated state.

Illustratively, the service server is a pod, and the preset parameters are parameters configured in the configuration file corresponding to the pod, which may specifically include: flow_ratio, slow_start_time, flow_recovery_policy and slow_start_status. The preset interface is, for example, an Application Programming Interface (API) provided by k8s. Illustratively, when monitoring that a preset state change event occurs in the service system, the load balancing device compares the IP information corresponding to the plurality of pods obtained according to the preset interface with the IP information corresponding to the plurality of pods running in the memory, and if a pod is determined to be a newly added pod, the load balancing device obtains the sent service request traffic corresponding to the newly added pod by initializing the preset parameters. Specifically, the sent service request traffic corresponding to the newly added pod can be determined in the following two ways:

(1) if the parameters in the configuration file corresponding to the newly added pod are set such that flow_ratio is 100%, flow_size is the maximum value of the sent service request traffic of all current pods;

(2) if the parameters in the configuration file corresponding to the newly added pod are set such that 0 < flow_ratio < 100%, the initial value of flow_size(0<flow_ratio<100%) is Sum(flow_ratio=100%), and the initial values of tmp_Sum(0<flow_ratio<100%) and flow_size(0<flow_ratio<100%) are recorded to the memory. Sum_All denotes the sum of the sent service request traffic corresponding to all pods: if Sum_All is 0, flow_size(0<flow_ratio<100%) is 0; if Sum_All is greater than 0, the value of flow_size(0<flow_ratio<100%) is Sum(flow_ratio=100%).

If a pod is determined to be an existing pod, the load balancing device copies the operation parameter value (i.e., flow_size) in the corresponding memory to the existing pod, and acquires the sent service request traffic corresponding to the existing pod, so as to maintain the operation state of the existing pod and ensure the accuracy of traffic calculation.

Optionally, if it is determined that a service server running in the memory is no longer among the service servers obtained according to the preset interface, the load balancing device deletes the data corresponding to that service server from the memory.

For example, if it is determined that a pod running in the memory is no longer among the pods obtained according to the preset interface, the data corresponding to that pod is deleted from the memory according to the IP corresponding to the pod that is no longer running.
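A sketch of the reconciliation described above, assuming pods are identified by IP and the per-pod state is a dictionary held in memory; the helper used to initialise a newly added pod is hypothetical and would apply the two initialisation ways listed earlier:

    def sync_pods(api_pod_ips, memory_state, init_new_pod_state):
        """Reconcile pods reported by the preset interface with pods running in memory."""
        new_state = {}
        for ip in api_pod_ips:
            if ip in memory_state:
                # Existing pod: import the operating parameters already held in memory.
                new_state[ip] = memory_state[ip]
            else:
                # Newly added pod: initialise preset parameters (flow_ratio, flow_size, ...).
                new_state[ip] = init_new_pod_state(ip)
        # Pods present only in memory are not carried over, i.e. their data is deleted.
        return new_state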

Optionally, the determining, by the load balancing device, the service server with the minimum sent service request traffic as the target service server according to the sent service request traffic corresponding to each of the plurality of service servers may include: if the sent service request flows corresponding to the service servers are all the third preset values, the load balancing equipment determines that one service server randomly selected from the service servers is a target service server; or, if the traffic of the sent service request corresponding to each of the plurality of service servers is not all equal to the third preset value, the load balancing device determines the service server corresponding to the minimum traffic of the sent service request as the target service server.

Illustratively, the third preset value is, for example, 0. And if the sent service request traffic corresponding to all the pods acquired by the load balancing device is 0, determining that one pod randomly selected from all the pods is the target pod. If the traffic of the sent service request corresponding to all the pod obtained by the load balancing device is not all 0, the load balancing device determines that the pod with the minimum traffic of the corresponding sent service request is the target pod. Optionally, if there are multiple identical pod with the minimum traffic of the sent service request, one pod is randomly selected as the target pod.
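A small sketch of this selection variant, assuming the per-pod sent traffic is kept in a dictionary and the third preset value is 0, as in the example (random tie-breaking among equal minima is one possible reading of the text):

    import random

    def pick_target_pod(sent_traffic: dict) -> str:
        if all(v == 0 for v in sent_traffic.values()):
            # All pods have sent traffic equal to the preset value: pick one at random.
            return random.choice(list(sent_traffic))
        minimum = min(sent_traffic.values())
        candidates = [pod for pod, v in sent_traffic.items() if v == minimum]
        # If several pods share the minimum, one of them is selected at random.
        return random.choice(candidates)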

And S303, if the sent service request flow of one of the service servers changes, the load balancing equipment acquires the sent service request flows corresponding to the service servers respectively, and the load balancing equipment determines the service server with the minimum corresponding sent service request flow as the target service server according to the sent service request flows corresponding to the service servers respectively.

Illustratively, after sending a service request to a pod, the load balancing device updates the sent service request traffic corresponding to the pod, that is, the sent service request traffic corresponding to the pod changes, and according to the change of the sent service request traffic corresponding to the pod, the load balancing device obtains the sent service request traffic corresponding to each of all the pods from the memory. After the load balancing device obtains the sent service request traffic corresponding to each of the plurality of pods, it may determine, according to the sent service request traffic corresponding to each of the plurality of pods, that the pod with the minimum corresponding sent service request traffic is the target pod.

It should be noted that, the present application does not limit the execution sequence of S302 and S303.

S304, the load balancing equipment stores the identification of the target service server to the forwarding queue.

In this step, a queue is a data structure, specifically a special linear table in which deletion is allowed only at the front end (front) of the table and insertion is allowed only at the back end (rear) of the table; the queue is, for example, a sequential queue or a linked-list queue. The forwarding queue is a queue used to store the identification of the target service server. Illustratively, after determining the target pod, the load balancing device stores the IP of the target pod to the forwarding queue.

Optionally, if the flow_ratio value corresponding to a service server is the second preset value, where the second preset value is 0, the identifier of the service server is stored to a return queue (the return queue is a queue for storing identifiers of service servers whose flow_ratio value is the second preset value); accordingly, the sent service request traffic of that service server remains unchanged.
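The two queues can be sketched as follows, assuming simple slice-backed FIFO structures and the pod IP identifiers used above; the type and method names are illustrative only.

package lb

// Queues is a hypothetical pair of FIFO queues: identifiers are appended at
// the rear and removed from the front. Forward holds target pod IPs awaiting
// forwarding; Return holds IPs of pods whose flow_ratio equals the second
// preset value (0), whose sent traffic is therefore left unchanged.
type Queues struct {
	Forward []string
	Return  []string
}

// EnqueueTarget stores the target pod's identifier on the forwarding queue.
func (q *Queues) EnqueueTarget(ip string) {
	q.Forward = append(q.Forward, ip) // insert at the rear
}

// EnqueueReturn stores a fully rate-limited pod's identifier on the return queue.
func (q *Queues) EnqueueReturn(ip string) {
	q.Return = append(q.Return, ip)
}

// DequeueTarget removes and returns the identifier at the front of the
// forwarding queue; ok is false when the queue is empty.
func (q *Queues) DequeueTarget() (ip string, ok bool) {
	if len(q.Forward) == 0 {
		return "", false
	}
	ip, q.Forward = q.Forward[0], q.Forward[1:]
	return ip, true
}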

S305, the load balancing equipment acquires the identification of the target service server from the forwarding queue and sends a service request to the target service server according to the identification of the target service server.

Illustratively, the load balancing device obtains the IP of the target pod from the forwarding queue, and sends a service request to the target pod according to the IP of the target pod.

After step S305, the load balancing device may update the sent service request traffic corresponding to the target service server according to the target traffic. Further, optionally, updating the sent service request traffic corresponding to the target service server according to the target traffic may include: and the load balancing equipment updates the sent service request flow corresponding to the target service server into the sum of the target flow and the sent service request flow of the target service server before the service request is forwarded.

Illustratively, the target traffic is 1M, the sent service request traffic of the target pod before forwarding the service request is 9M, and the load balancing device updates the sent service request traffic corresponding to the target pod to 10M.
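A one-line sketch of this update, again using the hypothetical PodState record; the values in the comment mirror the 9M + 1M = 10M example above.

package lb

// updateSentTraffic adds the target traffic of the request just forwarded to
// the target pod's accumulated sent service request traffic.
//
// Example (values from the text): a pod whose sent traffic is 9M receives a
// 1M request, so its sent traffic becomes 10M.
//   pod := &PodState{FlowSize: 9 << 20}
//   updateSentTraffic(pod, 1<<20) // pod.FlowSize == 10 << 20
func updateSentTraffic(target *PodState, targetTraffic int64) {
	target.FlowSize += targetTraffic
}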

Further, optionally, if the parameters in the configuration file corresponding to the target service server are set as slow_start_status equal to 1 (at this time, slow_start_time and flow_recovery_policy are not in effect) and flow_ratio equal to 100%, the load balancing device updates the sent service request traffic corresponding to the target service server to the sum of the target traffic and the sent service request traffic of the target service server before the service request is forwarded.
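For illustration only, the configuration parameters named here could be grouped as in the following Go struct; the field types and JSON tags are assumptions, since no concrete configuration format is disclosed.

package lb

// PodConfig groups the per-server configuration parameters named in the text.
type PodConfig struct {
	SlowStartStatus    int     `json:"slow_start_status"`    // 1 as in the example above: slow_start_time and flow_recovery_policy are not in effect
	SlowStartTime      int     `json:"slow_start_time"`      // duration of the slow-start ramp (units assumed)
	FlowRecoveryPolicy string  `json:"flow_recovery_policy"` // policy governing traffic recovery during slow start
	FlowRatio          float64 `json:"flow_ratio"`           // 1.0 corresponds to flow_ratio = 100%, i.e. no rate limiting
}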

After the load balancing device executes step S305, the load balancing device repeatedly executes steps S301 to S305 for the received new service request, thereby implementing traffic-based load balancing.

According to the flow control method provided by the embodiment of the application, after receiving a service request, the load balancing device determines the target traffic corresponding to the service request. If a preset state change event of the service system is monitored, or if the sent service request traffic of one of the plurality of service servers changes, the load balancing device acquires the sent service request traffic corresponding to each of the plurality of service servers, so that this traffic can be acquired accurately and in time. The load balancing device then determines the service server with the minimum sent service request traffic as the target service server according to the sent service request traffic corresponding to each of the plurality of service servers, stores the identification of the target service server to the forwarding queue, acquires the identification of the target service server from the forwarding queue, and sends the service request to the target service server according to that identification, which ensures that the service request is accurately sent to the target service server. In this way, the load balancing device can accurately control the traffic of the service servers and thus achieve load balancing of the service servers more accurately.

The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.

Fig. 4 is a schematic structural diagram of a flow control device according to an embodiment of the present application, which is applied to a service system, where the service system includes a load balancing device and a plurality of service servers. As shown in fig. 4, a flow control device 400 according to an embodiment of the present application includes: a first determining module 401, a second determining module 402 and a processing module 403. Wherein:

the first determining module 401 is configured to determine a target traffic corresponding to a service request after receiving the service request.

A second determining module 402, configured to determine, according to the sent service request traffic corresponding to each of the plurality of service servers, that the corresponding service server with the smallest sent service request traffic is the target service server, where the sent service request traffic is a total traffic corresponding to the service request sent to the service server within a preset time period.

The processing module 403 is configured to send a service request to the target service server.

Optionally, the second determining module 402 may be specifically configured to: if a preset state change event of the service system is monitored, acquire the sent service request traffic corresponding to each of the plurality of service servers, and determine the service server with the minimum sent service request traffic as the target service server according to the sent service request traffic corresponding to each of the plurality of service servers, where the preset state change event at least comprises the establishment of a service server or the restart of a service server; or, if the sent service request traffic of one of the plurality of service servers changes, acquire the sent service request traffic corresponding to each of the plurality of service servers, and determine the service server with the minimum sent service request traffic as the target service server according to the sent service request traffic corresponding to each of the plurality of service servers.

Optionally, when configured to acquire the sent service request traffic corresponding to each of the plurality of service servers if it is monitored that the preset state change event occurs in the service system, the second determining module 402 may be specifically configured to: if it is monitored that the preset state change event occurs in the service system, compare the service server information obtained according to the preset interface with the service server information running in the memory; if a service server is a newly added service server, acquire the sent service request traffic corresponding to the newly added service server by initializing preset parameters; and if a service server is an existing service server, import the operation parameters in the corresponding memory into the existing service server and acquire the sent service request traffic corresponding to the existing service server.

Optionally, the preset parameter includes a flow ratio, and when configured to, in the case of a newly added service server, acquire the sent service request traffic corresponding to the newly added service server by initializing the preset parameter, the second determining module 402 may be specifically configured to: if the value of the flow ratio corresponding to the newly added service server is the first preset value, the sent service request traffic corresponding to the newly added service server is the maximum value of the sent service request traffic of all the service servers; or, if the value of the flow ratio corresponding to the newly added service server is the second preset value, the sent service request traffic corresponding to the newly added service server is the sum of the sent service request traffic corresponding to all the service servers.
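A sketch of this initialization rule for a newly added service server, with the first and second preset values passed in as parameters because their concrete values are not fixed here; the PodState record is the hypothetical one introduced earlier.

package lb

// initSentTraffic returns the initial sent service request traffic of a newly
// added service server: the maximum across all servers when its flow_ratio
// equals the first preset value, and the sum across all servers when it
// equals the second preset value.
func initSentTraffic(flowRatio, firstPreset, secondPreset float64, servers []*PodState) int64 {
	var max, sum int64
	for _, s := range servers {
		if s.FlowSize > max {
			max = s.FlowSize
		}
		sum += s.FlowSize
	}
	switch flowRatio {
	case firstPreset:
		return max
	case secondPreset:
		return sum
	default:
		return 0 // other ratios are not covered by this rule; zero is an assumption
	}
}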

In some embodiments, when configured to determine, according to the sent service request traffic corresponding to each of the plurality of service servers, the service server with the minimum sent service request traffic as the target service server, the second determining module 402 may be specifically configured to: if the sent service request traffic corresponding to each of the plurality of service servers is equal to the third preset value, determine a service server randomly selected from the plurality of service servers as the target service server; or, if the sent service request traffic corresponding to each of the plurality of service servers is not all equal to the third preset value, determine the service server with the minimum sent service request traffic as the target service server.

Optionally, after the second determining module 402 determines, among the multiple service servers, that the corresponding service server with the minimum sent service request traffic is the target service server, the second determining module may further be configured to: storing the identification of the target service server to a forwarding queue; the processing module 403 may be specifically configured to: and acquiring the identifier of the target service server from the forwarding queue, and sending a service request to the target service server according to the identifier of the target service server.

Optionally, the second determining module 402 may further be configured to: if a service server is configured with a flow control rule, determine the sent service request traffic of the corresponding service server according to the flow control rule.

Optionally, the flow control rule is a slow-start service server flow control rule, and when configured to determine, if the flow control rule is configured for the service server, the sent service request traffic of the corresponding service server according to the flow control rule, the second determining module 402 may be specifically configured to: if the service server is configured with the slow-start service server flow control rule, control the service server to realize slow start according to the slow-start service server flow control rule, and determine the sent service request traffic of the corresponding service server.

Optionally, the second determining module 402 is further configured to: if the service server is configured with a slow-start service server flow control rule and the current traffic of the service server represents rate-limited traffic, determine the sent service request traffic of the service server to be the current traffic; or, if the service server is configured with a slow-start service server flow control rule and the value of the flow ratio indicates rate-limited traffic, determine the sent service request traffic of the service server to be the sum of the sent service request traffic corresponding to the service servers that are not rate-limited, or determine the sent service request traffic of the service server to be the sum of the sent service request traffic corresponding to all the service servers.

The apparatus of this embodiment may be configured to implement the technical solution of any one of the above-mentioned method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.

Fig. 5 is a schematic structural diagram of a flow control device according to another embodiment of the present application. As shown in fig. 5, a flow control device 500 of an embodiment of the present application may include: a calculation module 501, a scheduling module 502 and a forwarding module 503. Wherein:

A calculation module 501, configured to acquire the latest pods running in the memory and determine the sent service request traffic corresponding to each pod. If a pod is a newly added pod, the parameters of the newly added pod are initialized; if a pod is an existing pod, the operation parameters are imported into the existing pod; if a pod no longer exists, the information corresponding to that pod in the calculation module 501 is deleted, and the data corresponding to that pod in the forwarding queue and the return queue is deleted. The determined latest pods and the sent service request traffic data corresponding to each pod are imported into the scheduling module 502.

A scheduling module 502, configured to determine, according to the sent service request traffic corresponding to each pod, the target pod stored to the forwarding queue; if the flow_ratio value corresponding to a pod is 0, the pod is stored in the return queue, and the sent service request traffic corresponding to the pod is directly returned to the calculation module 501.

A forwarding module 503, configured to obtain a target pod that can be forwarded from the forwarding queue, and send a service request to the target pod; according to the service request, a target traffic corresponding to the service request is determined, and the target traffic is sent to the calculation module 501 for updating the sent service request traffic corresponding to the target pod.

It is understood that the functions of the calculation module and the scheduling module in the embodiment of the present application are similar to the functions of the second determination module in the above embodiment; the function of the forwarding module in the embodiment of the present application is similar to the functions of the first determining module and the processing module in the above embodiments.
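The division of labor among the three modules can be summarized with the following hypothetical Go interfaces; the method names and signatures are illustrative and do not correspond to a disclosed API.

package lb

// CalculationModule keeps the per-pod sent service request traffic up to date.
type CalculationModule interface {
	// Refresh reconciles the latest pods with the memory and returns the
	// sent service request traffic keyed by pod IP.
	Refresh() map[string]int64
	// Update adds the target traffic of a forwarded request to a pod's total.
	Update(podIP string, targetTraffic int64)
}

// SchedulingModule turns the traffic table into queue entries.
type SchedulingModule interface {
	// Schedule picks the pod with the smallest sent traffic and pushes its
	// identifier onto the forwarding queue; pods with flow_ratio == 0 go to
	// the return queue instead.
	Schedule(traffic map[string]int64)
}

// ForwardingModule consumes the forwarding queue and reports traffic back.
type ForwardingModule interface {
	// Forward pops a target pod, sends it the service request, and returns
	// the pod's IP together with the request's target traffic so the
	// calculation module can update its accounting.
	Forward(request []byte) (podIP string, targetTraffic int64)
}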

Fig. 6 is a schematic view of a flow control system according to an embodiment of the present application, based on the flow control device shown in Fig. 5. As shown in Fig. 6, in the flow control system 600, the calculation module 501 acquires the latest pods 601 running in the memory and determines the sent service request traffic corresponding to each pod 601. If a pod 601 is newly added, the parameters of the newly added pod 601 are initialized; if a pod 601 already exists, the operation parameters are imported into the existing pod 601; if a pod 601 no longer exists, the information corresponding to that pod 601 in the calculation module 501 is deleted, and the data corresponding to that pod 601 in the forwarding queue 602 and the return queue 603 is deleted. The determined latest pods 601 and the sent service request traffic data corresponding to each pod 601 are imported into the scheduling module 502. The scheduling module 502 determines, according to the sent service request traffic corresponding to each pod 601, the target pod 601 stored to the forwarding queue 602; if the flow_ratio value corresponding to a pod 601 is 0, the pod 601 is stored in the return queue 603, and the sent service request traffic corresponding to the pod 601 is directly returned to the calculation module 501. The forwarding module 503 acquires a target pod 601 that can be forwarded from the forwarding queue 602 and sends the service request to the target pod 601; according to the service request, the target traffic corresponding to the service request is determined and sent to the calculation module 501 for updating the sent service request traffic corresponding to the target pod 601.

Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application. Illustratively, the electronic device is, for example, a load balancing device disclosed in the present application, and the electronic device may be provided as a server or a computer. Referring to fig. 7, an electronic device 700 includes a processing component 701 that further includes one or more processors and memory resources, represented by memory 702, for storing instructions, such as applications, that are executable by the processing component 701. The application programs stored in memory 702 may include one or more modules that each correspond to a set of instructions. Furthermore, the processing component 701 is configured to execute instructions to perform any of the above-described method embodiments.

The electronic device 700 may also include a power component 703 configured to perform power management of the electronic device 700, a wired or wireless network interface 704 configured to connect the electronic device 700 to a network, and an input/output (I/O) interface 705. The electronic device 700 may operate based on an operating system stored in the memory 702, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.

The application also provides a computer-readable storage medium, in which computer-executable instructions are stored, and when the processor executes the computer-executable instructions, the scheme of the flow control method is implemented.

The present application also provides a computer program product comprising a computer program which, when executed by a processor, implements aspects of the flow control method as above.

The computer-readable storage medium may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk. Readable storage media can be any available media that can be accessed by a general purpose or special purpose computer.

An exemplary readable storage medium is coupled to the processor such that the processor can read information from, and write information to, the readable storage medium. Of course, the readable storage medium may also be an integral part of the processor. The processor and the readable storage medium may reside in an application-specific integrated circuit (ASIC). Of course, the processor and the readable storage medium may also reside as discrete components in the flow control device.

Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.

Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.
