Token bucket-based throttling method, throttling apparatus, computing device, and medium

Document No.: 1941460    Publication date: 2021-12-07

Reading note: This technique, "Token bucket-based throttling method, throttling apparatus, computing device, and medium", was created by 陶凯 (Tao Kai) and 赵亮 (Zhao Liang) on 2020-10-30. Abstract: The present disclosure provides a token bucket-based throttling method, comprising: receiving a request to be processed; obtaining the number of tokens in a token bucket; when the number of tokens in the token bucket is less than the number of tokens required by the request to be processed, determining a target token number according to the time of the last token production and the current time; adding tokens of the target token number to the token bucket; and determining whether to allow the pending request to pass according to the number of tokens in the token bucket after the adding operation is performed. The present disclosure also provides a token bucket-based throttling apparatus, computing device, and medium.

1. A token bucket based throttling method, comprising:

receiving a request to be processed;

obtaining the number of tokens in a token bucket;

when the number of tokens in the token bucket is less than the number of tokens required by the request to be processed, determining a target token number according to the time of the last token production and the current time;

adding tokens of the target token number to the token bucket; and

determining whether to allow the pending request to pass according to the number of tokens in the token bucket after the adding operation is performed.

2. The method of claim 1, wherein determining a target token count from the time of last token production and the current time comprises:

determining a time difference between the time of the last token production and the current time;

determining the token production times according to the time difference; and

determining the target token number according to the token production times.

3. The method of claim 2, wherein said determining the target token count from the token production times comprises:

acquiring the single-production token number; and

determining the target token number according to the token production times and the single-production token number.

4. The method of claim 2, wherein said determining the target token count from the token production times comprises:

in the case where the token production times are less than 1:

acquiring the accumulated credit token number and the single-production token number;

in the case that a credit condition is satisfied, determining the number of tokens required by the request to be processed as the target token number, and determining the sum of the accumulated credit token number and the number of tokens required by the request to be processed as the new accumulated credit token number; and

in the case where the token production times are greater than or equal to 1:

determining the producible token number according to the token production times and the single-production token number;

acquiring an accumulated credit token number; and

in the case that the producible token number is greater than the accumulated credit token number, determining the difference between the producible token number and the accumulated credit token number as the target token number, and setting the accumulated credit token number to 0.

5. The method of claim 4, wherein the credit condition comprises:

the sum of the accumulated credit token number and the number of tokens required by the request to be processed is less than the single-production token number; and/or

the sum of the time of the last token credit and the credit time interval is less than the time of the last token production.

6. The method of claim 1, wherein the determining whether to allow the pending request to pass through according to the number of tokens in the token bucket after the adding operation is performed comprises:

in the case that the number of tokens in the token bucket is greater than or equal to the number of tokens required by the pending request, deducting the number of tokens required by the pending request from the token bucket to allow the pending request to pass through; and

rejecting the pending request if the number of tokens in the token bucket is less than the number of tokens required by the pending request.

7. A token bucket based throttling apparatus comprising:

a receiving module configured to receive the request to be processed;

an acquisition module configured to acquire the number of tokens in the token bucket;

a first determining module configured to determine a target token number according to the time of the last token production and the current time when the number of tokens in the token bucket is less than the number of tokens required by the request to be processed;

an adding module, configured to add tokens of the target token number to the token bucket; and

a passing module configured to determine whether to allow the request to be processed to pass according to the number of tokens in the token bucket after the adding operation is performed.

8. The apparatus of claim 7, the first determining module comprising:

a second determining submodule configured to determine a time difference between the time of the last token production and the current time;

a third determining submodule configured to determine the token production times according to the time difference; and

a fourth determining submodule configured to determine the target token number according to the token production times.

9. A computing device, comprising:

one or more processors;

a storage device for storing one or more programs,

wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1 to 6.

10. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to carry out the method of any one of claims 1 to 6.

Technical Field

The present disclosure relates to the field of computer technologies, and in particular to a token bucket-based throttling method, apparatus, computing device, and medium.

Background

Token bucket-based throttling schemes in the related art require at least two roles: a token-production node and the rate limiter itself. The token-production node runs a timed task that adds tokens to the token bucket. Because this scheme needs dedicated token-production nodes, the distributed system contains more nodes, which increases the risk of system faults.

Disclosure of Invention

In view of the above, the present disclosure provides a token bucket based throttling method, apparatus, computing device and medium.

One aspect of the present disclosure provides a token bucket-based throttling method, including: receiving a request to be processed; obtaining the number of tokens in a token bucket; when the number of tokens in the token bucket is less than the number of tokens required by the request to be processed, determining a target token number according to the time of the last token production and the current time; adding tokens of the target token number to the token bucket; and determining whether to allow the pending request to pass according to the number of tokens in the token bucket after the adding operation is performed.

According to an embodiment of the present disclosure, the determining the target token number according to the time of the last token production and the current time includes: determining a time difference between the time of the last token production and the current time; determining the token production times according to the time difference; and determining the target token number according to the token production times.

According to an embodiment of the present disclosure, the determining the target token number according to the token production times includes: acquiring the single-production token number; and determining the target token number according to the token production times and the single-production token number.

According to an embodiment of the present disclosure, the determining the target token number according to the token production times includes: in the case where the token production times are less than 1: acquiring the accumulated credit token number and the single-production token number; and, in the case that a credit condition is satisfied, determining the number of tokens required by the request to be processed as the target token number, and determining the sum of the accumulated credit token number and the number of tokens required by the request to be processed as the new accumulated credit token number; and in the case where the token production times are greater than or equal to 1: determining the producible token number according to the token production times and the single-production token number; acquiring the accumulated credit token number; and, in the case that the producible token number is greater than the accumulated credit token number, determining the difference between the producible token number and the accumulated credit token number as the target token number, and setting the accumulated credit token number to 0.

According to an embodiment of the present disclosure, the credit condition includes: the sum of the accumulated credit token number and the number of tokens required by the request to be processed is less than the single-production token number; and/or the sum of the time of the last token credit and the credit time interval is less than the time of the last token production.

According to an embodiment of the present disclosure, the determining whether to allow the pending request to pass according to the number of tokens in the token bucket after the adding operation is performed includes: in the case that the number of tokens in the token bucket is greater than or equal to the number of tokens required by the pending request, deducting the number of tokens required by the pending request from the token bucket to allow the pending request to pass; and rejecting the pending request if the number of tokens in the token bucket is less than the number of tokens required by the pending request.

Another aspect of the present disclosure provides a token bucket-based throttling apparatus, including: a receiving module configured to receive the request to be processed; an acquisition module configured to acquire the number of tokens in the token bucket; a first determining module configured to determine a target token number according to the time of the last token production and the current time when the number of tokens in the token bucket is less than the number of tokens required by the request to be processed; an adding module configured to add tokens of the target token number to the token bucket; and a passing module configured to determine whether to allow the request to be processed to pass according to the number of tokens in the token bucket after the adding operation is performed.

According to an embodiment of the present disclosure, the first determining module includes: a second determining submodule configured to determine a time difference between the time of the last token production and the current time; a third determining submodule configured to determine the token production times according to the time difference; and a fourth determining submodule configured to determine the target token number according to the token production times.

Another aspect of the disclosure provides a computing device comprising: one or more processors; storage means for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method as described above.

Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions for implementing the method as described above when executed.

Another aspect of the disclosure provides a computer program comprising computer executable instructions for implementing the method as described above when executed.

According to the token bucket-based throttling method of the present disclosure, the token-adding operation is performed when the tokens in the token bucket are insufficient, so no separate timed task needs to be set up to add tokens periodically. The method can therefore be executed by the rate limiter itself, without deploying an additional token-production node, which reduces the number of nodes in the system and improves reliability.

Drawings

The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:

fig. 1 schematically illustrates an exemplary application scenario in which a token bucket based throttling method may be applied according to an embodiment of the present disclosure;

fig. 2 schematically illustrates a flow diagram of a token bucket based throttling method according to an embodiment of the present disclosure;

FIG. 3 schematically shows a flow chart for determining a target token count according to an embodiment of the disclosure;

FIG. 4 schematically illustrates a flow chart for determining a target token count according to another embodiment of the present disclosure;

fig. 5 schematically illustrates a token bucket based throttling method according to an embodiment of the present disclosure;

fig. 6 schematically illustrates a token bucket based throttling method according to another embodiment of the present disclosure;

fig. 7 schematically illustrates a block diagram of a token bucket based throttling apparatus according to an embodiment of the present disclosure; and

FIG. 8 schematically illustrates a block diagram of a computer system suitable for implementing the methods described in embodiments of the present disclosure.

Detailed Description

Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.

All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.

Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).

The embodiments of the present disclosure provide a token bucket-based throttling method and an apparatus to which the method can be applied. The method includes: receiving a request to be processed; obtaining the number of tokens in a token bucket; when the number of tokens in the token bucket is less than the number of tokens required by the request to be processed, determining a target token number according to the time of the last token production and the current time; adding tokens of the target token number to the token bucket; and determining whether to allow the pending request to pass according to the number of tokens in the token bucket after the adding operation is performed.

Fig. 1 schematically illustrates an exemplary application scenario 100 in which a token bucket based throttling method may be applied according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of an application scenario in which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, but does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.

As shown in fig. 1, the application scenario 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104 and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.

The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages and the like. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as shopping applications, web browser applications, search applications, instant messaging tools, mailbox clients, and social platform software (by way of example only).

The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.

The server 105 may be a server providing various services, such as a background management server (for example only) providing support for websites browsed by users using the terminal devices 101, 102, 103. The background management server may analyze and perform other processing on the received data such as the user request, and feed back a processing result (e.g., a webpage, information, or data obtained or generated according to the user request) to the terminal device.

It should be noted that the token bucket-based throttling method provided by the embodiment of the present disclosure may be generally executed by the server 105. Accordingly, token bucket based throttling devices provided by embodiments of the present disclosure may be generally disposed in the server 105. The token bucket based throttling method provided by the embodiments of the present disclosure may also be performed by a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the token bucket based throttling device provided by the embodiment of the present disclosure may also be disposed in a server or a server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105.

It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.

Fig. 2 schematically illustrates a flow diagram of a token bucket based throttling method according to an embodiment of the present disclosure.

As shown in fig. 2, the method includes receiving a pending request in operation S210.

Then, in operation S220, the number of tokens in the token bucket is acquired.

In operation S230, in case that the number of tokens in the token bucket is less than the number of tokens required for the request to be processed, a target token number is determined according to the time of last token production and the current time.

According to the embodiments of the present disclosure, pending requests with different data volumes require different numbers of tokens: the larger the data volume, the more tokens required.

According to an embodiment of the present disclosure, operation S230 may include, for example, determining a time difference between the time of last token production and the current time, determining a token production number according to the time difference, and then determining a target token number according to the token production number.

According to the embodiments of the present disclosure, the token production times may be calculated by dividing the time difference by the token-adding interval duration. The token-adding interval duration may be set in advance as needed; its value is not specifically limited in the present disclosure.
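As a minimal sketch of the computation above (in Python rather than the Lua of the embodiment described later, with hypothetical names), the token production times follow from the time difference and the token-adding interval duration:

```python
import math

def production_times(last_add_time: float, now: float, add_interval: float) -> int:
    """Number of complete token-production rounds elapsed since the last
    production; a partial round counts as zero."""
    time_diff = now - last_add_time
    return max(0, math.floor(time_diff / add_interval))
```

For example, with a 2-second interval, 7 seconds of elapsed time yields 3 production rounds.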

Fig. 3 schematically shows a flow chart for determining a target token number according to an embodiment of the present disclosure.

As shown in fig. 3, the target token number may be determined by the following operations S331 to S332.

In operation S331, the single-production token number is acquired.

In operation S332, the target token number is determined according to the token production times and the single-production token number.

According to the embodiments of the present disclosure, the product of the token production times and the single-production token number may be calculated to obtain the replenishable token number, which is used as the target token number. The single-production token number may be set in advance as needed; its value is not specifically limited in the present disclosure.
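The product described above can be sketched as follows (a hypothetical Python helper; the capacity cap corresponds to the overflow check described for operation S508 in the embodiment of fig. 5):

```python
def target_token_number(production_times: int, single_add_size: int,
                        remaining: int, bucket_size: int) -> int:
    """Replenishable tokens = production rounds x tokens per round,
    capped so the bucket never exceeds its capacity."""
    replenishable = production_times * single_add_size
    return min(replenishable, bucket_size - remaining)
```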

Fig. 4 schematically shows a flow chart for determining a target token number according to another embodiment of the present disclosure.

As shown in fig. 4, the target token number may also be determined by the following operations S431 to S437.

In operation S431, it is determined whether the token production times are less than 1. If so, operations S432 to S433 are performed; if the token production times are greater than or equal to 1, operations S434 to S436 are performed.

In operation S432, the accumulated credit token number and the single-production token number are acquired.

In operation S433, in the case that the credit condition is satisfied, the number of tokens required by the request to be processed is determined as the target token number, and the sum of the accumulated credit token number and the number of tokens required by the request is determined as the new accumulated credit token number.

According to the embodiments of the present disclosure, the credit condition prevents over-crediting and ensures that the credit can be repaid at the next token production. The credit condition may include, for example: the sum of the accumulated credit token number and the number of tokens required by the pending request is less than the single-production token number, and the sum of the time of the last token credit and the credit time interval is less than the time of the last token production.
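Both guards of the credit condition can be combined into one predicate, sketched here in hypothetical Python (names assumed, not from the disclosure):

```python
def credit_allowed(accumulated_credit: int, tokens_needed: int,
                   single_add_size: int, last_credit_time: float,
                   credit_interval: float, last_add_time: float) -> bool:
    """True only if the total credit stays below one production round's
    worth of tokens AND a full credit interval has passed since the last
    credit, relative to the last token production."""
    within_quota = accumulated_credit + tokens_needed < single_add_size
    interval_elapsed = last_credit_time + credit_interval < last_add_time
    return within_quota and interval_elapsed
```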

In operation S434, the producible token number is determined according to the token production times and the single-production token number.

In operation S435, an accumulated credit token number is acquired.

In operation S436, in the case that the producible token number is greater than the accumulated credit token number, the difference between the producible token number and the accumulated credit token number is determined as the target token number, and the accumulated credit token number is set to 0.
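The branch of fig. 4 (operations S431 to S436) can be sketched as one function returning the target token number together with the updated accumulated credit (a hypothetical Python rendering, not the disclosure's Lua implementation):

```python
def target_with_credit(production_times: int, tokens_needed: int,
                       accumulated_credit: int, single_add_size: int,
                       credit_ok: bool):
    """Returns (target_token_number, new_accumulated_credit)."""
    if production_times < 1:                    # S431 -> S432/S433
        if credit_ok:
            # Lend the request exactly what it needs and record the debt.
            return tokens_needed, accumulated_credit + tokens_needed
        return 0, accumulated_credit
    producible = production_times * single_add_size   # S434
    if producible > accumulated_credit:               # S436
        # Repay the outstanding credit out of the fresh production.
        return producible - accumulated_credit, 0
    return 0, accumulated_credit
```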

In practical applications, one of the methods for determining the number of target tokens shown in fig. 3 and 4 may be selected and used, or both of them may be used, that is, the method shown in fig. 3 is used for determining the number of target tokens for one part of requests, and the method shown in fig. 4 is used for determining the number of target tokens for the other part of requests. For example, the target token count may be determined using the method shown in fig. 3 for a request with a lower priority, and the target token count may be determined using the method shown in fig. 4 for a request with a higher priority. By allocating tokens in advance in credit form for higher priority requests, it can be guaranteed that the higher priority requests can be processed in time.

In operation S240, tokens of a target token number are added to the token bucket.

In operation S250, it is determined whether to allow the pending request to pass through according to the number of tokens in the token bucket after the adding operation is performed.

According to an embodiment of the present disclosure, after the tokens are added to the token bucket, if the number of tokens in the bucket is greater than or equal to the number of tokens required by the pending request, the required number of tokens may be deducted from the bucket to allow the pending request to pass. If the number of tokens in the bucket is still less than the number required by the pending request, the pending request may be rejected.
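The final pass-or-reject decision can be sketched as (hypothetical Python):

```python
def try_pass(remaining: int, tokens_needed: int):
    """After replenishment: deduct and allow if enough tokens remain,
    otherwise reject without deducting. Returns (allowed, new_remaining)."""
    if remaining >= tokens_needed:
        return True, remaining - tokens_needed
    return False, remaining
```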

According to the token bucket-based throttling method of the present disclosure, the token-adding operation is performed when the tokens in the token bucket are insufficient, so no separate timed task needs to be set up to add tokens periodically. The method can therefore be executed by the rate limiter itself, without deploying an additional token-production node, which reduces the number of nodes in the system and improves reliability.

The method shown in fig. 2-4 is further described with reference to fig. 5-6 in conjunction with specific embodiments. Those skilled in the art will appreciate that the following example embodiments are only for the understanding of the present disclosure, and the present disclosure is not limited thereto.

According to the embodiments of the present disclosure, a distributed rate limiter program (rate limiter for short) may be written in the Lua scripting language, using Redis as the distributed medium, to execute the token bucket-based throttling method of the embodiments of the present disclosure. Because Redis is single-threaded, implementing the rate limiter with Redis and Lua scripts guarantees the atomicity of the throttling operation and avoids errors caused by the individual operations executing asynchronously.

According to an embodiment of the present disclosure, a Hash data structure in Redis may be employed to store defined token bucket objects. For example, in this embodiment, the token bucket object may have the following key variables:

bucketSize, representing the token bucket capacity.

nextAddTime, representing the time of the last token production.

tokenSize, representing the number of tokens remaining in the current token bucket.

addIntervalTime, representing the token-adding interval duration.

singleAddSize, representing the single-production token number.

creditTokenSize, representing the number of credit tokens; the initial value is 0.

nextCreditTime, representing the time of the last token credit. According to the embodiments of the present disclosure, no credit can be granted if nextCreditTime > nextAddTime, which avoids unlimited crediting.

creditIntervalTime, representing the credit token time interval. Note that the credit token interval cannot exceed the token-adding interval duration.

singleCreditSize, representing the single-credit token number. Note that singleCreditSize = singleAddSize / (addIntervalTime / creditIntervalTime).

creditCountSize, representing the accumulated credit token count.
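The Redis Hash above can be pictured as a plain mapping (illustrative values only; in the embodiment these would be Hash fields read and written by the Lua script). Note how singleCreditSize follows from the relation given above:

```python
# Illustrative token bucket object; field names follow the Hash keys above.
bucket = {
    "bucketSize": 100,          # token bucket capacity
    "nextAddTime": 0.0,         # time of the last token production
    "tokenSize": 100,           # tokens remaining in the bucket
    "addIntervalTime": 10.0,    # token-adding interval duration (seconds)
    "singleAddSize": 50,        # tokens produced per round
    "creditTokenSize": 0,       # credit tokens, initially 0
    "nextCreditTime": 0.0,      # time of the last token credit
    "creditIntervalTime": 2.0,  # must not exceed addIntervalTime
    "creditCountSize": 0,       # accumulated credit tokens
}

# singleCreditSize = singleAddSize / (addIntervalTime / creditIntervalTime)
bucket["singleCreditSize"] = bucket["singleAddSize"] / (
    bucket["addIntervalTime"] / bucket["creditIntervalTime"]
)
```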

Fig. 5 schematically illustrates a schematic diagram of a token bucket based throttling method according to an embodiment of the present disclosure.

As shown in fig. 5, the method may include operations S501 to S512.

Specifically, in operation S501, a request enters the rate limiter and applies to it for tokens.

In operation S502, the number of tokens remaining in the token bucket is acquired.

In operation S503, it is determined whether the number of remaining tokens is sufficient. If the number of remaining tokens satisfies the number of tokens required for the request, indicating that the remaining tokens are sufficient, operation S511 is performed. If the number of remaining tokens does not satisfy the number of tokens required for the request, it indicates that the remaining tokens are insufficient, and operation S504 is performed.

In operation S504, the "time of last token production" is acquired. Note that, at this time, the value of "time of last token production" is the time of the last token production closest to the current time.

In operation S505, the "time difference" is calculated by subtracting the "time of last token production" from the "current time". Dividing the time difference by the token-adding interval duration gives the quotient, i.e., the token production times. If the token production times are less than 1, the next production time has not yet arrived, and the replenishable token number is 0. If the token production times are greater than or equal to 1, the replenishable token number is calculated by multiplying the token production times by the single-production token number.

In operation S506, it is determined whether the replenishable token number is greater than 0. If it is less than or equal to 0, operation S507 is performed; if it is greater than 0, operation S508 is performed.

In operation S507, no token is returned.

In operation S508, to prevent the tokens from exceeding the token bucket capacity, the total token number is obtained by adding the "number of replenishable tokens" to the "number of tokens remaining in the token bucket". It is then determined whether the "total token number" exceeds the "token bucket capacity". If it does, the "number of tokens remaining in the token bucket" is set to the "token bucket capacity", and operation S509 is performed. If it does not, the "number of replenishable tokens" is added to the "number of tokens remaining in the token bucket", and operation S510 is performed.

In operation S509, the difference between the "token bucket capacity" and the "number of tokens remaining in the token bucket" before replenishment is calculated to obtain the number of tokens that can actually be replenished.

In operation S510, the tokens are replenished into the token bucket, and the time of the last replenishment is updated to the current time.

In operation S511, the tokens required for the request are deducted from the token bucket.

In operation S512, the tokens required for the request are returned.
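Operations S501 to S512 can be sketched as a lazily replenished token bucket (a non-authoritative, single-threaded sketch; class and field names are assumptions, and a production throttler would additionally need locking or atomic updates):

```python
import time

class LazyTokenBucket:
    """Lazily replenished token bucket, mirroring operations S501-S512."""

    def __init__(self, capacity, tokens_per_batch, interval_s):
        self.capacity = capacity                  # token bucket capacity
        self.tokens_per_batch = tokens_per_batch  # tokens produced per batch
        self.interval_s = interval_s              # production interval (seconds)
        self.tokens = capacity                    # tokens remaining in the bucket
        self.last_production = time.monotonic()   # time of last token production

    def acquire(self, n=1):
        """Apply for n tokens; True means the request may pass (S511-S512)."""
        if self.tokens < n:                       # S503: remaining tokens insufficient
            now = time.monotonic()
            # S504-S505: number of producible batches since the last production
            batches = int((now - self.last_production) // self.interval_s)
            if batches < 1:
                return False                      # S506-S507: nothing replenishable
            replenishable = batches * self.tokens_per_batch
            # S508-S510: replenish, capped at the bucket capacity
            self.tokens = min(self.tokens + replenishable, self.capacity)
            self.last_production = now
        if self.tokens < n:
            return False                          # still insufficient after replenishing
        self.tokens -= n                          # S511: deduct the required tokens
        return True                               # S512: tokens granted
```

Note that replenishment happens only inside `acquire`, which is the "lazy loading" idea the disclosure describes: no separate timer or production node is needed.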

Fig. 6 schematically illustrates a schematic diagram of a token bucket-based throttling method according to another embodiment of the present disclosure. As shown in fig. 6, the method may include operations S601 to S618.

Specifically, in operation S601, a request enters the throttler and applies to the throttler for tokens.

In operation S602, the number of remaining tokens in the token bucket is obtained.

In operation S603, it is determined whether the number of remaining tokens is sufficient. If the number of remaining tokens is sufficient, operation S617 is performed. If the number of remaining tokens is insufficient, operation S604 is performed.

In operation S604, the time of the last token replenishment is acquired. Note that, at this time, the value of the "time of last replenishment" is the most recent time at which tokens were replenished.

In operation S605, the number of replenishable tokens is calculated from the time of the last replenishment. The calculation formula is: number of replenishable tokens = ((current time − time of last replenishment) / token-production interval) × number of tokens produced per batch, where the division takes the integer quotient.

In operation S606, it is determined whether the number of replenishable tokens is greater than 0. If it is greater than 0, operation S613 is performed; otherwise, operation S607 is performed.

In operation S607, the time of the last token credit is obtained. Note that the value of the last credit time is the most recent time at which a credit (token-lending) operation was triggered.

In operation S608, it is determined whether the credit condition is satisfied. If not, operation S612 is performed. If yes, operations S609-S611 are performed. The credit condition may include, for example: the sum of the accumulated number of token credits and the number of tokens required for pending requests is less than the number of single-time production tokens, and the sum of the time of the last token credit and the time interval of token credit is less than the time of the last production token.
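The two clauses of the credit condition above can be expressed as a predicate (a sketch of the condition exactly as stated; all names and units are assumptions):

```python
def credit_allowed(accumulated_credit, tokens_needed, tokens_per_batch,
                   last_credit_time, credit_interval, last_production_time):
    """Both clauses of the credit condition in operation S608 must hold."""
    # Clause 1: total debt after this credit stays below one production batch.
    within_quota = accumulated_credit + tokens_needed < tokens_per_batch
    # Clause 2: the last credit plus the credit interval precedes the last production.
    not_too_frequent = last_credit_time + credit_interval < last_production_time
    return within_quota and not_too_frequent
```

This bounds how many tokens can be "borrowed" ahead of production and how often a credit may be granted.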

In operation S609, the token bucket is replenished with the credited number of tokens.

In operation S610, the credited amount is added to the accumulated credit token count.

In operation S611, the last credit token time is updated to the current time, and then operation S617 is performed.

In operation S612, no token is returned, and the request is rejected.

In operation S613, a difference between the number of replenishable tokens and the accumulated credit token amount is calculated.

In operation S614, it is determined whether the difference is greater than 0, and if the difference is greater than 0, S615 is performed. If the difference is less than or equal to 0, operation S612 is performed.

In operation S615, tokens equal to the difference are replenished into the token bucket, and the accumulated credit token count is cleared to zero, thereby repaying the debt.

In operation S616, the time of the last replenishment is updated to the current time.

In operation S617, the tokens required by the request are deducted from the token bucket.

In operation S618, the required number of tokens is returned to the request.
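Putting operations S601 to S618 together, the creditable variant can be sketched as follows (a simplified, single-threaded illustration; method and field names are assumptions, and the credit condition is transcribed as stated in the disclosure without further interpretation):

```python
import time

class CreditTokenBucket:
    """Token bucket that may 'credit' (lend) tokens before production is due."""

    def __init__(self, capacity, tokens_per_batch, interval_s, credit_interval_s):
        self.capacity = capacity
        self.tokens_per_batch = tokens_per_batch
        self.interval_s = interval_s              # replenishment interval (seconds)
        self.credit_interval_s = credit_interval_s
        self.tokens = capacity
        self.credit = 0                           # accumulated credited (borrowed) tokens
        now = time.monotonic()
        self.last_replenish = now                 # time of last replenishment (S604)
        self.last_credit = now                    # time of last credit operation (S607)

    def acquire(self, n=1):
        if self.tokens >= n:                      # S603: remaining tokens sufficient
            self.tokens -= n                      # S617-S618
            return True
        now = time.monotonic()
        # S605: tokens replenishable since the last replenishment
        batches = int((now - self.last_replenish) // self.interval_s)
        replenishable = batches * self.tokens_per_batch
        if replenishable > 0:                     # S606
            diff = replenishable - self.credit    # S613: repay the debt first
            if diff <= 0:
                return False                      # S614 -> S612
            self.tokens = min(self.tokens + diff, self.capacity)  # S615
            self.credit = 0
            self.last_replenish = now             # S616
        elif (self.credit + n < self.tokens_per_batch
              and self.last_credit + self.credit_interval_s < self.last_replenish):
            # S608-S611: credit condition holds, so lend tokens ahead of production
            self.tokens += n
            self.credit += n
            self.last_credit = now
        else:
            return False                          # S612: reject the request
        if self.tokens < n:
            return False
        self.tokens -= n                          # S617: deduct the required tokens
        return True                               # S618: tokens granted
```

Compared with the generic version, the credit bucket smooths short bursts by borrowing against the next production batch and repaying the debt at the next replenishment.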

As shown in fig. 5 and fig. 6, the embodiments of the present disclosure provide two solutions: one based on a generic token bucket (fig. 5) and one based on a token bucket that supports credit (fig. 6). In implementing the token bucket algorithm, the concept of "lazy loading" is adopted: the replenishment mechanism of the token bucket is triggered only when tokens are insufficient, and the throttler component itself performs the corresponding throttling operation, so no dedicated token-production node needs to be deployed. The number of nodes in the system can therefore be reduced, and reliability is improved.

Fig. 7 schematically illustrates a block diagram of a token bucket based throttling apparatus according to an embodiment of the present disclosure.

As shown in fig. 7, the token bucket based throttling apparatus 700 includes a receiving module 710, an obtaining module 720, a first determining module 730, an adding module 740, and a passing module 750.

The receiving module 710 may be configured to receive a pending request.

An obtaining module 720 may be configured to obtain the number of tokens in the token bucket.

The first determining module 730 may be configured to determine the target token number according to the time of last token production and the current time when the number of tokens in the token bucket is smaller than the number of tokens required for the request to be processed.

An adding module 740 may be configured to add tokens of the target token count to the token bucket.

The pass module 750 may be configured to determine whether to allow the pending request to pass according to the number of tokens in the token bucket after the adding operation is performed.

According to an embodiment of the present disclosure, the first determining module may include, for example: a second determining sub-module, which may be configured to determine a time difference between the time of the last token production and the current time; a third determining sub-module, which may be configured to determine the number of token-production batches according to the time difference; and a fourth determining sub-module, which may be configured to determine the target token number according to the number of token-production batches.

According to the token bucket based throttling apparatus of the present disclosure, when the tokens in the token bucket are insufficient, the replenishment operation on the token bucket can be performed directly, without additionally arranging a timed task to add tokens periodically. The method can therefore be executed by the throttling apparatus (e.g., a throttler) itself, without deploying a dedicated token-production node, so the number of nodes in the system can be reduced and reliability improved.

Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.

For example, any number of the receiving module 710, the obtaining module 720, the first determining module 730, the adding module 740, and the passing module 750 may be combined in one module to be implemented, or any one of the modules may be split into a plurality of modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of the receiving module 710, the obtaining module 720, the first determining module 730, the adding module 740, and the passing module 750 may be at least partially implemented as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or implemented by any one of three implementations of software, hardware, and firmware, or implemented by a suitable combination of any several of them. Alternatively, at least one of the receiving module 710, the obtaining module 720, the first determining module 730, the adding module 740 and the passing module 750 may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.

FIG. 8 schematically illustrates a block diagram of a computer system suitable for implementing the above-described method according to an embodiment of the present disclosure. The computer system illustrated in FIG. 8 is only one example and should not impose any limitations on the scope of use or functionality of embodiments of the disclosure.

As shown in fig. 8, a computer system 800 according to an embodiment of the present disclosure includes a processor 801 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)802 or a program loaded from a storage section 808 into a Random Access Memory (RAM) 803. The processor 801 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 801 may also include onboard memory for caching purposes. The processor 801 may include a single processing unit or multiple processing units for performing different actions of the method flows according to embodiments of the present disclosure.

In the RAM 803, various programs and data necessary for the operation of the system 800 are stored. The processor 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. The processor 801 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM 802 and/or RAM 803. Note that the programs may also be stored in one or more memories other than the ROM 802 and RAM 803. The processor 801 may also perform various operations of method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.

System 800 may also include an input/output (I/O) interface 805, also connected to bus 804, according to an embodiment of the disclosure. The system 800 may also include one or more of the following components connected to the I/O interface 805: an input portion 806 including a keyboard, a mouse, and the like; an output section 807 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), a speaker, and the like; a storage portion 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card, a modem, or the like. The communication section 809 performs communication processing via a network such as the internet. A drive 810 is also connected to the I/O interface 805 as necessary. A removable medium 811 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 810 as necessary, so that a computer program read out therefrom is installed into the storage section 808 as necessary.

According to embodiments of the present disclosure, method flows according to embodiments of the present disclosure may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 809 and/or installed from the removable medium 811. The computer program, when executed by the processor 801, performs the above-described functions defined in the system of the embodiments of the present disclosure. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.

The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.

According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, a computer-readable storage medium may include the ROM 802 and/or RAM 803 described above and/or one or more memories other than the ROM 802 and RAM 803.

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

Those skilled in the art will appreciate that various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or sub-combinations are not expressly recited in the present disclosure. In particular, various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or sub-combinations fall within the scope of the present disclosure.

The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.
