Method and device for caching data

Document No. 378688 · Published 2021-12-10

Abstract: This technique, "a method and device for caching data" (一种缓存数据的方法和装置), was designed and created by 王秋晓 on 2020-09-28. The invention discloses a method and a device for caching data, and relates to the field of computer technology. One embodiment of the method comprises: receiving and parsing a service request to obtain a service code and a request content parameter; acquiring service data from a cache, or by calling an interface logic corresponding to the service code, according to the service code and the request content parameter, and returning the service data; generating a message according to the service request and the processing duration of the service request, and acquiring the request volume and/or the average processing duration of the service request within a preset time window by consuming the message; and determining whether the request volume and/or the average processing duration of the service request within the preset time window satisfies a preset service rule and, if so, asynchronously calling the interface logic corresponding to the service code to acquire the service data and storing the service data in the cache. This embodiment solves the technical problem that the prior art can only implement caching statically.

1. A method of caching data, comprising:

receiving and parsing a service request to obtain a service code and a request content parameter;

acquiring service data from a cache, or by calling an interface logic corresponding to the service code, according to the service code and the request content parameter, and returning the service data;

generating a message according to the service request and the processing duration of the service request, and acquiring the request volume and/or the average processing duration of the service request within a preset time window by consuming the message;

and determining whether the request volume and/or the average processing duration of the service request within the preset time window satisfies a preset service rule, and if so, asynchronously calling the interface logic corresponding to the service code to acquire the service data and storing the service data in the cache.

2. The method of claim 1, wherein acquiring service data from a cache, or by calling an interface logic corresponding to the service code, according to the service code and the request content parameter comprises:

generating a service request identifier according to the service code and the request content parameter;

determining whether the service request identifier exists in the cache;

if so, acquiring the service data corresponding to the service request identifier from the cache;

if not, calling the interface logic corresponding to the service code according to the request content parameter to acquire the service data.

3. The method according to claim 2, wherein generating a service request identifier according to the service code and the request content parameter comprises:

splicing the service code and the request content parameter into service request information;

and calculating a digest value of the service request information by using a message digest algorithm, and taking the digest value as the service request identifier.

4. The method of claim 1, wherein the content of the message comprises the service code, the request content parameter, the service request identifier, a generation timestamp of the message, and a processing duration of the service request.

5. The method of claim 4, wherein acquiring the request volume and/or the average processing duration of the service request within a preset time window by consuming the message comprises:

sending the message to message middleware;

acquiring a message from the message middleware and consuming the message;

according to the generation time stamp of the message, putting the message into a corresponding time window;

when the end time of the time window is reached, calculating the request volume and/or the average processing duration corresponding to each service request identifier in the time window;

and writing the service request identifier of the service request, the service code, the request content parameter, and the request volume and/or the average processing duration corresponding to the service request identifier into an index table.

6. The method according to claim 5, wherein determining whether the request volume and/or the average processing duration of the service request within the preset time window satisfies the preset service rule and, if so, asynchronously calling the interface logic corresponding to the service code to acquire the service data and storing the service data in the cache comprises:

periodically acquiring, from the index table, the service requests that satisfy the preset service rule;

splicing the request content parameters into a calling task, and executing the calling task to call the interface logic corresponding to the service code, so as to acquire the service data;

and storing the service request identifier and the service data corresponding to the service request identifier in the cache.

7. The method of claim 6, wherein storing the service request identifier and the service data corresponding to the service request identifier in the cache comprises:

generating a corresponding cache identifier according to the service request identifier;

storing the cache identifier and the service data corresponding to the service request identifier in the cache, and setting a validity period for the cache identifier, wherein the validity period of the cache identifier is 1 to 5 times the timing period.

8. An apparatus for caching data, comprising:

a receiving module, configured to receive and parse a service request to obtain a service code and a request content parameter;

a processing module, configured to acquire service data from a cache, or by calling an interface logic corresponding to the service code, according to the service code and the request content parameter, and to return the service data;

a calculation module, configured to generate a message according to the service request and the processing duration of the service request, and to acquire the request volume and/or the average processing duration of the service request within a preset time window by consuming the message;

and a cache module, configured to determine whether the request volume and/or the average processing duration of the service request within the preset time window satisfies a preset service rule, and if so, to asynchronously call the interface logic corresponding to the service code to acquire the service data and store the service data in the cache.

9. An electronic device, comprising:

one or more processors;

a storage device for storing one or more programs,

the one or more programs, when executed by the one or more processors, implement the method of any of claims 1-7.

10. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-7.

Technical Field

The present invention relates to the field of computer technologies, and in particular, to a method and an apparatus for caching data.

Background

With the development of services, systems increasingly interact with one another through interfaces. When traffic reaches a certain level, caching is often adopted to improve interface performance and the user experience.

In the process of implementing the invention, the inventor finds that at least the following problems exist in the prior art:

in the prior art, whether an interface, or an interface with fixed query parameters, should have its results written to the cache by asynchronous calls is generally decided from business experience. As a result, the prior art can only implement caching statically, so service requirements cannot be met in high-concurrency scenarios.

Disclosure of Invention

In view of this, embodiments of the present invention provide a method and an apparatus for caching data, so as to solve the technical problem that the prior art can only implement caching statically.

To achieve the above object, according to an aspect of an embodiment of the present invention, there is provided a method for caching data, including:

receiving and parsing a service request to obtain a service code and a request content parameter;

acquiring service data from a cache, or by calling an interface logic corresponding to the service code, according to the service code and the request content parameter, and returning the service data;

generating a message according to the service request and the processing duration of the service request, and acquiring the request volume and/or the average processing duration of the service request within a preset time window by consuming the message;

and determining whether the request volume and/or the average processing duration of the service request within the preset time window satisfies a preset service rule, and if so, asynchronously calling the interface logic corresponding to the service code to acquire the service data and storing the service data in the cache.

Optionally, acquiring service data from a cache, or by calling an interface logic corresponding to the service code, according to the service code and the request content parameter includes:

generating a service request identifier according to the service code and the request content parameter;

determining whether the service request identifier exists in the cache;

if so, acquiring the service data corresponding to the service request identifier from the cache;

if not, calling the interface logic corresponding to the service code according to the request content parameter to acquire the service data.

Optionally, generating a service request identifier according to the service code and the request content parameter includes:

splicing the service code and the request content parameter into service request information;

and calculating a digest value of the service request information by using a message digest algorithm, and taking the digest value as the service request identifier.

Optionally, the content of the message includes the service code, the request content parameter, the service request identifier, the generation timestamp of the message, and the processing duration of the service request.

Optionally, acquiring, by consuming the message, the request volume and/or the average processing duration of the service request within a preset time window includes:

sending the message to message middleware;

acquiring a message from the message middleware and consuming the message;

according to the generation time stamp of the message, putting the message into a corresponding time window;

when the end time of the time window is reached, calculating the request volume and/or the average processing duration corresponding to each service request identifier in the time window;

and writing the service request identifier of the service request, the service code, the request content parameter, and the request volume and/or the average processing duration corresponding to the service request identifier into an index table.

Optionally, determining whether the request volume and/or the average processing duration of the service request within the preset time window satisfies the preset service rule and, if so, asynchronously calling the interface logic corresponding to the service code to acquire the service data and storing the service data in the cache includes:

periodically acquiring, from the index table, the service requests that satisfy the preset service rule;

splicing the request content parameters into a calling task, and executing the calling task to call the interface logic corresponding to the service code, so as to acquire the service data;

and storing the service request identifier and the service data corresponding to the service request identifier in the cache.

Optionally, storing the service request identifier and the service data corresponding to the service request identifier in the cache includes:

generating a corresponding cache identifier according to the service request identifier;

storing the cache identifier and the service data corresponding to the service request identifier in the cache, and setting a validity period for the cache identifier, wherein the validity period of the cache identifier is 1 to 5 times the timing period.

In addition, according to another aspect of the embodiments of the present invention, there is provided an apparatus for caching data, including:

a receiving module, configured to receive and parse a service request to obtain a service code and a request content parameter;

a processing module, configured to acquire service data from a cache, or by calling an interface logic corresponding to the service code, according to the service code and the request content parameter, and to return the service data;

a calculation module, configured to generate a message according to the service request and the processing duration of the service request, and to acquire the request volume and/or the average processing duration of the service request within a preset time window by consuming the message;

and a cache module, configured to determine whether the request volume and/or the average processing duration of the service request within the preset time window satisfies a preset service rule, and if so, to asynchronously call the interface logic corresponding to the service code to acquire the service data and store the service data in the cache.

Optionally, the processing module is further configured to:

generating a service request identifier according to the service code and the request content parameter;

determining whether the service request identifier exists in the cache;

if so, acquiring the service data corresponding to the service request identifier from the cache;

if not, calling the interface logic corresponding to the service code according to the request content parameter to acquire the service data.

Optionally, the processing module is further configured to:

splicing the service code and the request content parameter into service request information;

and calculating a digest value of the service request information by using a message digest algorithm, and taking the digest value as the service request identifier.

Optionally, the content of the message includes the service code, the request content parameter, the service request identifier, the generation timestamp of the message, and the processing duration of the service request.

Optionally, the computing module is further configured to:

sending the message to message middleware;

acquiring a message from the message middleware and consuming the message;

according to the generation time stamp of the message, putting the message into a corresponding time window;

when the end time of the time window is reached, calculating the request volume and/or the average processing duration corresponding to each service request identifier in the time window;

and writing the service request identifier of the service request, the service code, the request content parameter, and the request volume and/or the average processing duration corresponding to the service request identifier into an index table.

Optionally, the cache module is further configured to:

periodically acquiring, from the index table, the service requests that satisfy the preset service rule;

splicing the request content parameters into a calling task, and executing the calling task to call the interface logic corresponding to the service code, so as to acquire the service data;

and storing the service request identifier and the service data corresponding to the service request identifier in the cache.

Optionally, the cache module is further configured to:

generating a corresponding cache identifier according to the service request identifier;

storing the cache identifier and the service data corresponding to the service request identifier in the cache, and setting a validity period for the cache identifier, wherein the validity period of the cache identifier is 1 to 5 times the timing period.

According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including:

one or more processors;

a storage device for storing one or more programs,

when the one or more programs are executed by the one or more processors, the one or more processors implement the method of any of the embodiments described above.

According to another aspect of the embodiments of the present invention, there is also provided a computer readable medium, on which a computer program is stored, which when executed by a processor implements the method of any of the above embodiments.

One embodiment of the above invention has the following advantage or benefit: because the technical means of calling the interface logic corresponding to the service code to acquire the service data, and storing the service data in the cache, whenever the request volume and/or the average processing duration of a service request within the preset time window satisfies the preset service rule is adopted, the technical problem that the prior art can only implement caching statically is solved. The embodiment of the invention decides whether to cache the service data corresponding to each service request by computing its request volume and/or average processing duration within the preset time window, thereby implementing caching dynamically and ensuring that service requirements are met in high-concurrency scenarios.

Further effects of the above optional implementations will be described below in connection with specific embodiments.

Drawings

The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:

FIG. 1 is a schematic diagram of a main flow of a method of caching data according to an embodiment of the invention;

FIG. 2 is a schematic diagram of a main flow of a method for caching data according to a reference embodiment of the present invention;

FIG. 3 is a schematic diagram of a main flow of a method of caching data according to another reference embodiment of the present invention;

FIG. 4 is a diagram illustrating a main flow of a method of caching data according to still another embodiment of the present invention;

FIG. 5 is a schematic diagram of the main blocks of an apparatus for caching data according to an embodiment of the present invention;

FIG. 6 is an exemplary system architecture diagram in which embodiments of the present invention may be employed;

FIG. 7 is a schematic block diagram of a computer system suitable for implementing a terminal device or server of an embodiment of the invention.

Detailed Description

Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.

Fig. 1 is a schematic diagram of a main flow of a method for caching data according to an embodiment of the present invention. As an embodiment of the present invention, as shown in fig. 1, the method for caching data may include:

Step 101: receiving and parsing a service request to obtain a service code and a request content parameter.

The server receives and parses the service request sent by the client, thereby obtaining the service code and the request content parameter. In the embodiment of the present invention, the request content parameter is formed by splicing the query conditions together; for example, in an express-delivery query scenario the request content parameter includes a mobile phone number and a time period, and in a weather query scenario it includes a city name and a time period.
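The specification gives no code, but the parsing step above can be sketched in Python. The request shape, the field names (`busi_code`, `conditions`), and the `&`-separated splicing rule are illustrative assumptions, not prescribed by the text:

```python
def parse_service_request(request: dict) -> tuple:
    """Split a service request into its service code and a request content
    parameter spliced from the query conditions.

    The field names used here are illustrative assumptions."""
    busi_code = request["busi_code"]
    # Splice the query conditions into one parameter string, sorted so that
    # the same conditions always produce the same string.
    conditions = request.get("conditions", {})
    content_param = "&".join(f"{k}={conditions[k]}" for k in sorted(conditions))
    return busi_code, content_param


# Example: an express-delivery query keyed by phone number and time period.
code, param = parse_service_request(
    {"busi_code": "EXPRESS_QUERY",
     "conditions": {"phone": "13800000000", "period": "2020-09"}}
)
```

Sorting the condition keys matters: it guarantees that two requests with the same conditions splice to the same parameter string, which the later identifier step depends on.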

Step 102: acquiring service data from a cache, or by calling an interface logic corresponding to the service code, according to the service code and the request content parameter, and returning the service data.

After the service code and the request content parameter are obtained by parsing, the corresponding service data is acquired from a cache according to the service code and the request content parameter, or acquired by calling the interface logic corresponding to the service code, and the service data is then returned to the client.

Optionally, acquiring service data from a cache, or by calling an interface logic corresponding to the service code, according to the service code and the request content parameter includes: generating a service request identifier according to the service code and the request content parameter; determining whether the service request identifier exists in the cache; if so, acquiring the service data corresponding to the service request identifier from the cache; if not, calling the interface logic corresponding to the service code according to the request content parameter to acquire the service data. To acquire service data quickly, it is first determined whether the service data exists in the cache: if it does, the service data is read directly from the cache; if not, it is acquired by calling the interface logic. In the cache, the service request identifier and the service data are stored as a key-value pair, so whether the corresponding service data exists can be determined by checking whether the service request identifier exists in the cache.
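A minimal sketch of this cache-first lookup, with the cache modeled as a plain dictionary keyed by service request identifier (the actual cache in the text is a key-value store):

```python
def get_service_data(busi_key, cache, call_interface):
    """Return cached data when the busi_key is present; otherwise fall back
    to the interface call. The cache is NOT written here -- per the text it
    is populated asynchronously by the refresh task."""
    if busi_key in cache:
        return cache[busi_key]
    return call_interface()


# Usage: one cache hit, one miss that falls through to the interface.
cache = {"abc": {"status": "delivered"}}
hit = get_service_data("abc", cache, lambda: {"status": "fresh"})
miss = get_service_data("xyz", cache, lambda: {"status": "fresh"})
```

Note the deliberate design choice taken from the text: the read path never writes the cache, so a miss stays a miss until the asynchronous refresh task decides the request is worth caching.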

Optionally, generating a service request identifier according to the service code and the request content parameter includes: splicing the service code and the request content parameter into service request information; and calculating a digest value of the service request information by using a message digest algorithm, and taking the digest value as the service request identifier. To generate a unique service request identifier (busi_key), in the embodiment of the present invention the service code and the request content parameter are first spliced into service request information, and a message digest algorithm is then used to calculate the digest value of the service request information (i.e., the service request identifier). Calculating the service request identifier with a message digest algorithm guarantees its uniqueness: service requests with the same service code and request content yield the same identifier and are therefore recognized as the same request, which makes it convenient to count the request volume and/or the average processing duration of each service request in subsequent steps. Optionally, MD5 (Message-Digest Algorithm 5) may be used to compute the service request identifier.
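The busi_key generation described above can be sketched as follows. MD5 is the optional choice the text itself mentions; the `|` separator used when splicing is an assumption:

```python
import hashlib


def make_busi_key(busi_code: str, content_param: str) -> str:
    """Splice the service code and request content parameter into the
    service request information, then digest it with MD5 so that identical
    requests map to the same identifier."""
    info = f"{busi_code}|{content_param}"  # separator is an assumption
    return hashlib.md5(info.encode("utf-8")).hexdigest()


# Identical requests collapse to one key; a different parameter changes it.
k1 = make_busi_key("EXPRESS_QUERY", "phone=13800000000")
k2 = make_busi_key("EXPRESS_QUERY", "phone=13800000000")
k3 = make_busi_key("EXPRESS_QUERY", "phone=13900000000")
```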

Optionally, splicing the service code and the request content parameter into service request information includes: acquiring the interface name corresponding to the service code, and splicing the interface name and the request content parameter into the service request information. In this embodiment, the interface name and the request content parameter are spliced into the service request information, the digest value of the service request information is then calculated with a message digest algorithm, and the digest value is taken as the service request identifier. Whether the service code or the interface name is used, the embodiment of the present invention is not limited in this respect.

Step 103: generating a message according to the service request and the processing duration of the service request, and acquiring the request volume and/or the average processing duration of the service request within a preset time window by consuming the message.

For each service request, after the service data is returned to the client, the server generates a message from the service request and its processing duration; the request volume and/or the average processing duration of the service request within a preset time window is then obtained by consuming the message.

Optionally, the content of the message includes the service code, the request content parameter, the service request identifier, the generation timestamp of the message, and the processing duration of the service request. It should be noted that the processing duration of the service request covers both the case where the service data is obtained from the cache and the case where it is obtained by calling the interface logic. Optionally, the content of the message may further include a service description indicating whether the service data corresponding to the service request was obtained from the cache or by invoking the interface logic.
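A hedged sketch of the message structure: the five mandatory fields and the optional service description come from the text, while the concrete field names are assumptions:

```python
from dataclasses import dataclass


@dataclass
class RequestMessage:
    """One message per processed service request; field names are
    illustrative, the fields themselves come from the specification."""
    busi_code: str            # service code
    content_param: str        # request content parameter
    busi_key: str             # service request identifier
    created_at: float         # generation timestamp of the message
    duration_ms: float        # processing duration of the service request
    from_cache: bool = False  # optional service description field


msg = RequestMessage("EXPRESS_QUERY", "phone=13800000000",
                     "a1b2", 1601251200.0, 42.0)
```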

Optionally, acquiring the request volume and/or the average processing duration of the service request within a preset time window by consuming the message includes: sending the message to message middleware; acquiring the message from the message middleware and consuming it; placing the message into the corresponding time window according to its generation timestamp; when the end time of the time window is reached, calculating the request volume and/or the average processing duration corresponding to each service request identifier in the time window; and writing the service request identifier, the service code, the request content parameter, and the request volume and/or the average processing duration corresponding to the service request identifier into an index table. In the embodiment of the present invention, a producer may send the message to MQ message middleware, and a stream processing framework (such as Flink) acquires the message from the message middleware, consumes it, and computes the request volume and/or the average processing duration of each service request in each time window in real time.

Specifically, the stream processing framework consumes the message, thereby obtaining the service code, the request content parameter, the service request identifier (busi_key), the generation timestamp of the message, and the processing duration of the service request. The framework then places the message into the corresponding time window according to its generation timestamp. Each time the end of a time window is reached, the request volume and/or the average processing duration corresponding to each busi_key in that window is calculated; that is, with busi_key as the dimension, the request volume and/or the average processing duration of each busi_key in the window is counted. Finally, the stream processing framework writes the busi_key, service code, request content parameter, and the request volume and/or the average processing duration of each service request into an index table of a storage engine. The index table therefore stores, keyed by busi_key, the service code, the request content parameter, and the request volume and/or the average processing duration corresponding to each busi_key. Optionally, the total processing duration corresponding to each busi_key may also be stored.

Taking a 1-minute time window as an example, the service code, the request content parameter, the total processing duration, the request volume, and the average processing duration of each busi_key in the window are counted. The total processing duration is the sum of the processing durations of the service requests sharing the same busi_key, and the average processing duration is that sum divided by the request volume. For example, if the request volume of a busi_key in a window is 50, the processing durations of those 50 service requests are summed to give the total processing duration, and dividing the total by 50 gives the average.
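The per-window statistics can be sketched in plain Python; the text uses a stream processing framework such as Flink for this, so the dictionary-based aggregation below only illustrates the arithmetic of one tumbling window:

```python
from collections import defaultdict


def aggregate_window(messages, window_start, window_len=60.0):
    """Group the messages of one tumbling window by busi_key and compute
    request volume, total and average processing duration per key."""
    stats = defaultdict(lambda: {"count": 0, "total_ms": 0.0})
    for m in messages:
        # A message belongs to the window its generation timestamp falls in.
        if window_start <= m["ts"] < window_start + window_len:
            s = stats[m["busi_key"]]
            s["count"] += 1
            s["total_ms"] += m["duration_ms"]
    return {k: {"request_volume": s["count"],
                "total_ms": s["total_ms"],
                "avg_ms": s["total_ms"] / s["count"]}
            for k, s in stats.items()}


msgs = [{"busi_key": "k1", "ts": 10.0, "duration_ms": 30.0},
        {"busi_key": "k1", "ts": 20.0, "duration_ms": 50.0},
        {"busi_key": "k2", "ts": 30.0, "duration_ms": 100.0},
        {"busi_key": "k1", "ts": 70.0, "duration_ms": 999.0}]  # next window
result = aggregate_window(msgs, window_start=0.0)
```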

Optionally, in the embodiment of the present invention, the busi_key, the service code, the request content parameter, and the request volume and/or the average processing duration corresponding to each busi_key are written into an index table in Elasticsearch in real time. Elasticsearch provides near-real-time data access (second-level latency) and second-level query statistics over billions of records, giving higher performance. Alternatively, a storage engine such as Doris or ClickHouse may be used instead of Elasticsearch.

Step 104: determining whether the request volume and/or the average processing duration of the service request within the preset time window satisfies the preset service rule; if so, executing step 105; if not, the process ends.

In the embodiment of the present invention, the service rule may be preset, for example: the top 100 requests by request volume, or the top 100 by processing duration, or a request rate greater than 10 requests per second, or a processing duration greater than 5 seconds, and so on.
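A sketch of one possible rule check, using two of the example thresholds above (more than 10 requests per second, or an average processing duration over 5 seconds); the top-100 variants would instead sort the index-table entries. The threshold values and entry shape are illustrative:

```python
def satisfies_rule(entry, max_rate=10.0, max_avg_ms=5000.0, window_len=60.0):
    """A request qualifies for asynchronous cache refresh when its request
    rate exceeds max_rate requests/second OR its average processing
    duration exceeds max_avg_ms. Thresholds mirror the text's examples."""
    rate = entry["request_volume"] / window_len
    return rate > max_rate or entry["avg_ms"] > max_avg_ms


hot = {"request_volume": 900, "avg_ms": 120.0}    # 15 req/s: high volume
slow = {"request_volume": 6, "avg_ms": 8000.0}    # rare but slow interface
cold = {"request_volume": 30, "avg_ms": 50.0}     # neither condition met
```

The OR between the two conditions reflects the "and/or" in the rule: a request can earn caching either by being hot or by being expensive to compute.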

Step 105: asynchronously calling the interface logic corresponding to the service code to acquire the service data, and storing the service data in the cache.

To implement caching dynamically, the embodiment of the present invention decides whether to store the service data corresponding to a service request in the cache based on the request volume and/or the average processing duration of that service request within the preset time window. Because the request volume and/or the average processing duration within the window are dynamic, the service data written into the cache is dynamic as well.

After each time window elapses (windows may be divided by hour or by minute), the request volume and/or the average processing duration of each busi_key in the window is counted with the service request identifier (busi_key) as the dimension, and for each busi_key it is determined whether its request volume and/or average processing duration in the window satisfies the preset service rule; if so, the corresponding interface logic is called asynchronously to acquire the service data, and the service data is stored in the cache.

Optionally, judging whether the request amount and/or the average processing time of the service request within a preset time window satisfies a preset service rule, and if so, asynchronously calling the interface logic corresponding to the service code to acquire service data and storing the service data into the cache, includes: periodically acquiring, from the index table, the service requests satisfying the preset service rule; splicing the request content parameters into a call task, and executing the call task to call the interface logic corresponding to the service code, thereby acquiring service data; and storing the service request identifier and the service data corresponding to the service request identifier into the cache. Specifically, a timed task may periodically fetch each service request satisfying the preset service rule from the index table (the timed task is generally executed once every 10 seconds to 1 minute, and the period is configurable). For each fetched service request, its request content parameters are spliced into a call task, and the call task is executed to call the interface logic corresponding to the service code, so that the service data corresponding to each service request identifier is obtained. Finally, each service request identifier and its corresponding service data are serialized and stored into the cache.
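The timed refresh described above can be sketched as follows. `interface_logic` and the in-memory `cache` dict are hypothetical stand-ins for the real interface dispatch and Redis; in the described system the input rows would come from the Elasticsearch index table.

```python
# Minimal sketch of the timed cache-refresh pass: for each rule-satisfying
# request from the index table, splice a call task, execute it, and store the
# serialized result under the service request identifier.
import json

cache = {}  # stand-in for Redis

def interface_logic(service_code, params):
    # Placeholder for the real interface call keyed by service code.
    return {"code": service_code, "data": params}

def refresh_cache(hot_requests):
    for req in hot_requests:  # one row per rule-satisfying service request
        # Splice the request content parameters into a call task, then execute it.
        task = (req["service_code"], req["params"])
        data = interface_logic(*task)
        # Serialize the service data and store it under the busi_key.
        cache[req["busi_key"]] = json.dumps(data)

refresh_cache([{"busi_key": "k1", "service_code": "order.query", "params": {"id": 7}}])
print(cache["k1"])
```

A scheduler (e.g. a cron-style timed task running every 10 seconds to 1 minute, as the text suggests) would invoke `refresh_cache` on each tick.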

Optionally, storing the service request identifier and its corresponding service data into the cache includes: generating a corresponding cache identifier from the service request identifier; storing the cache identifier and the service data corresponding to the service request identifier into the cache; and setting a validity period for the cache identifier, where the validity period of the cache identifier is 1 to 5 times the timing period. Optionally, the cache identifier redis_key corresponding to the service request identifier busi_key may be generated according to a certain rule, for example by adding identifiers before and/or after the busi_key; one busi_key corresponds to one redis_key, in a one-to-one relationship.

Because the service data behind a redis_key may stop being refreshed in real time, the embodiment of the present invention sets a validity period for each cache identifier. If the validity period is too short, the cache is easily penetrated; if it is too long, the service data cannot be guaranteed to be up to date. Optionally, the validity period of the cache identifier is 1.5, 2, 2.5, or 4 times the timing period, which both prevents cache penetration and keeps the cached service data reasonably fresh.
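Deriving the redis_key and its validity period might look like the sketch below. The `rc:`/`:v1` affixes and the default 2x multiplier are illustrative choices within the rules the text describes; with a real Redis client the write would be something like `client.setex(redis_key, ttl_seconds, value)`.

```python
# Hypothetical sketch: one-to-one redis_key derivation plus TTL selection.
def to_redis_key(busi_key):
    # Add identifiers before and/or after busi_key; one busi_key maps to
    # exactly one redis_key.
    return f"rc:{busi_key}:v1"

def cache_ttl(timing_period_seconds, multiplier=2):
    # Validity period = 1-5x the timed task's period, so a key the timed task
    # stops refreshing expires on its own instead of serving stale data.
    assert 1 <= multiplier <= 5
    return timing_period_seconds * multiplier

print(to_redis_key("a1b2c3"))  # rc:a1b2c3:v1
print(cache_ttl(30))           # 60
```

Keeping the multiplier above 1 guarantees a healthy key is always re-written before it expires, so only abandoned keys lapse.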

After the validity period expires, the redis_key is automatically invalidated, so the next service request queries real-time data through the interface, ensuring the user obtains the latest service data. For example, a busi_key may rank in the top 100 by request amount for a while and then drop out; at that point the timed task no longer updates the service data corresponding to its redis_key, yet the redis_key may still be stored in Redis, and the user would otherwise read service data from long ago. With a validity period set, the redis_key is invalidated automatically, so a subsequent request calls the interface again and obtains the latest service data.

It should be noted that, if the redis_key and the corresponding service data are stored in the cache, then when the cache is queried in step 102, the busi_key needs to be converted into the redis_key, and the corresponding service data is looked up by the redis_key.

As can be seen from the various embodiments described above, by adopting the technical means of calling the interface logic corresponding to the service code to acquire service data and storing the service data into the cache when the request amount and/or the average processing time of the service request within a preset time window satisfies a preset service rule, the embodiment of the present invention solves the technical problem that the prior art can only implement caching statically. By calculating the request amount and/or the average processing time of each service request within a preset time window and deciding accordingly whether to cache the corresponding service data, caching is implemented dynamically, ensuring that service requirements are met in high-concurrency scenarios.

Fig. 2 is a schematic diagram of a main flow of a method of caching data according to a reference embodiment of the present invention. As another embodiment of the present invention, as shown in fig. 2, the method of caching data may include:

step 201, receiving and analyzing a service request to obtain a service code and a request content parameter.

Step 202, the service code and the request content parameter are spliced into service request information.

Step 203, calculating a digest value of the service request information by using a message digest algorithm, and using the digest value as the service request identifier.

Computing the service request identifier with a message digest algorithm guarantees its uniqueness: service requests with the same service code and request content yield the same service request identifier and are therefore recognized as the same service request, which makes it convenient to count the request amount and/or the average processing time of the service request in subsequent steps.
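Steps 202-203 can be sketched as below. MD5 is used as one possible message digest algorithm; the `|` separator and parameter sorting are illustrative choices that make the identifier deterministic regardless of parameter order.

```python
# Hypothetical sketch: splice service code + request content into request
# information, then take its digest as the service request identifier.
import hashlib
import json

def make_busi_key(service_code, params):
    info = service_code + "|" + json.dumps(params, sort_keys=True)
    return hashlib.md5(info.encode("utf-8")).hexdigest()

k1 = make_busi_key("order.query", {"id": 7, "user": "u1"})
k2 = make_busi_key("order.query", {"user": "u1", "id": 7})
print(k1 == k2)  # True: same code and content yield the same identifier
```

Identical requests thus collapse onto one busi_key, which is what makes the per-key request counting in later steps possible.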

Step 204, judging whether the service request identification exists in the cache; if yes, go to step 205; if not, go to step 206.

Step 205, obtaining the service data corresponding to the service request identifier from the cache.

Step 206, calling the interface logic corresponding to the service code according to the request content parameter to acquire service data.

In order to acquire service data quickly, it is first judged whether the service data exists in the cache; if so, the service data is acquired directly from the cache; if not, the service data is acquired by calling the interface logic. In the cache, the service request identifier and the service data are stored as a key-value pair, so whether the corresponding service data exists can be judged by checking whether the service request identifier exists in the cache.

Step 207, generating a message according to the service request and the processing duration of the service request.

For each service request, after the service data is returned to the client, the server generates a message according to the service request and its processing time. The content of the message includes the service code, the request content parameter, the service request identifier, the generation timestamp of the message, and the processing time of the service request.
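The message envelope just described might be assembled as follows. The field names are illustrative assumptions; any serializable envelope carrying these five pieces of information matches the description.

```python
# Hypothetical sketch of the message the server emits after answering a request.
import json
import time

def build_message(service_code, params, busi_key, duration_ms):
    return json.dumps({
        "service_code": service_code,   # which interface was hit
        "params": params,               # request content parameters
        "busi_key": busi_key,           # service request identifier
        "ts": int(time.time() * 1000),  # generation timestamp of the message
        "duration_ms": duration_ms,     # processing time of the request
    })

msg = build_message("order.query", {"id": 7}, "a1b2c3", 42)
print(sorted(json.loads(msg).keys()))
```

The generation timestamp is what later lets the stream processor assign the message to the correct time window, independently of when it is consumed.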

Step 208, acquiring the request amount and/or the average processing time of the service request within a preset time window by consuming the message.

A stream processing framework consumes the message to obtain the service code, the request content parameter, the service request identifier (busi_key), the generation timestamp of the message, and the processing time of the service request, and then calculates the request amount and/or the average processing time corresponding to each busi_key within a preset time window.

Step 209, determining whether the request amount and/or the average processing time of the service request in a preset time window meets a preset service rule; if yes, go to step 210; if not, the process is ended.

Step 210, calling an interface logic corresponding to the service code to obtain service data, and storing the service data in the cache.

In addition, in a reference embodiment of the present invention, the detailed implementation of the method for caching data is described in detail above, so that the repeated content will not be described again.

Fig. 3 is a schematic diagram of a main flow of a method of caching data according to another reference embodiment of the present invention. As another embodiment of the present invention, as shown in fig. 3, the method of caching data may include:

step 301, receiving and analyzing the service request to obtain the service code and the request content parameter.

Step 302, generating a service request identifier according to the service code and the request content parameter.

Step 303, judging whether the service request identifier exists in the cache; if yes, go to step 304; if not, go to step 305.

Step 304, obtaining the service data corresponding to the service request identifier from the cache.

Step 305, calling an interface logic corresponding to the service code according to the request content parameter to acquire service data.

Step 306, generating a message according to the service request and the processing duration of the service request, and sending the message to a message middleware.

For each service request, after the service data is returned to the client, the server generates a message according to the service request and its processing time. The content of the message includes the service code, the request content parameter, the service request identifier, the generation timestamp of the message, and the processing time of the service request.

Step 307, obtaining the message from the message middleware and consuming the message.

The stream processing framework acquires the message from the message middleware and consumes it, thereby obtaining the service code, the request content parameter, the service request identifier (busi_key), the generation timestamp of the message, and the processing time of the service request.

And 308, putting the message into a corresponding time window according to the generated time stamp of the message.

And the stream processing framework puts the message into a corresponding time window according to the generation time stamp of the message.

Step 309, when the end time of the time window is reached, calculating the request quantity and/or the average processing time length corresponding to the service request identifier in the time window.

Each time the end of a time window is reached, the stream processing framework calculates the request amount and/or the average processing time corresponding to each busi_key within the window, that is, with busi_key as the dimension, it counts the request amount and/or average processing time of each busi_key in the window. The total processing time is the sum of the processing times of the service requests corresponding to the same busi_key, and the average processing time is this sum divided by the request amount.
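The per-window aggregation above reduces to grouping messages by busi_key and dividing total processing time by request count. In the described system Flink performs this with event-time windows; a plain dictionary suffices to show the arithmetic, and the field names are illustrative.

```python
# Hypothetical sketch of aggregating one closed time window's messages.
from collections import defaultdict

def aggregate(messages):
    totals = defaultdict(lambda: {"count": 0, "total_ms": 0})
    for m in messages:  # each message carries busi_key and duration_ms
        agg = totals[m["busi_key"]]
        agg["count"] += 1
        agg["total_ms"] += m["duration_ms"]
    # average processing time = total processing time / request amount
    return {
        k: {"request_count": v["count"], "avg_ms": v["total_ms"] / v["count"]}
        for k, v in totals.items()
    }

stats = aggregate([
    {"busi_key": "k1", "duration_ms": 100},
    {"busi_key": "k1", "duration_ms": 300},
    {"busi_key": "k2", "duration_ms": 50},
])
print(stats["k1"])  # {'request_count': 2, 'avg_ms': 200.0}
```

Each entry of the resulting dict corresponds to one row written to the index table in the next step.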

Step 310, writing the service request identifier, the service code, the request content parameter, the request amount and/or the average processing duration corresponding to the service request identifier corresponding to the service request into an index table.

Finally, the stream processing framework writes the busi_key, the service code, the request content parameter, and the request amount and/or the average processing time corresponding to each service request into an index table of the storage engine.

Step 311, the service request meeting the preset service rule is obtained from the index table in a timing manner.

The timed task periodically acquires, from the index table, the service requests satisfying the preset service rule.

Step 312, splicing the request content parameters into a call task, and executing the call task to call the interface logic corresponding to the service code, thereby acquiring service data.

For each acquired service request, its request content parameters are spliced into a call task; the call task is then executed to call the interface logic corresponding to the service code, thereby acquiring the service data corresponding to each service request identifier.

Step 313, storing the service request identifier and the service data corresponding to the service request identifier in the cache.

In addition, in another embodiment of the present invention, the detailed implementation of the method of caching data has been described in detail above, so the repeated content is not described here again.

Fig. 4 is a schematic diagram of a main flow of a method of caching data according to still another reference embodiment of the present invention. As still another embodiment of the present invention, as shown in fig. 4, the method of caching data may include:

The interface server receives and analyzes the service request sent by the client to obtain a service code, a request content parameter, and a cache flag (indicating whether caching is required); it then generates a unique service request identifier busi_key from the service code and the request content parameter.

The interface server determines, according to the cache flag carried in the service request, whether to go through the cache. If the cache is to be used, it queries the cache for the corresponding service data by busi_key: if the data exists, the cached service data is returned to the client; if not, the interface logic corresponding to the service code is called to acquire the service data, which is then returned to the client. If the cache is not to be used, the interface logic corresponding to the service code is called directly, and the acquired service data is returned to the client.
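The lookup path just described can be sketched as below. The in-memory `cache` dict and `interface_logic` function are hypothetical stand-ins for Redis and the real service dispatch.

```python
# Hypothetical sketch of the interface server's request-handling decision:
# honor the cache flag, try the cache first, fall back to the interface logic.
cache = {"hot_key": "cached-data"}  # stand-in for Redis

def interface_logic(service_code, params):
    # Placeholder for the real interface call keyed by service code.
    return f"fresh-data:{service_code}"

def handle(service_code, params, busi_key, use_cache):
    if use_cache and busi_key in cache:
        return cache[busi_key]  # cache hit: return cached service data
    # Cache miss, or caching disabled by the request's cache flag:
    # call the interface logic directly for real-time data.
    return interface_logic(service_code, params)

print(handle("order.query", {}, "hot_key", True))   # cached-data
print(handle("order.query", {}, "cold_key", True))  # fresh-data:order.query
```

Note that a miss here does not write the result back to the cache; in this design, only the asynchronous timed task populates the cache, based on the windowed statistics.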

After the service data is returned to the client, the interface server assembles the service code, the request content parameter, the busi_key, the message generation timestamp, and the processing time of the service request into an MQ message, and then sends the message to the MQ message middleware.

The MQ message middleware receives the message sent by the interface server and then delivers it to Flink.

Flink receives and consumes the message to obtain the service code, the request content parameter, the busi_key, the generation timestamp of the message, and the processing time of the service request. Flink puts the message into the corresponding time window according to its generation timestamp; when the end of the time window is reached, it calculates the request amount and/or the average processing time corresponding to each service request identifier within the window, and then writes the service request identifier, the service code, the request content parameter, and the request amount and/or average processing time corresponding to the service request identifier into an index table of Elasticsearch.

The timed task periodically acquires, from the index table, the service requests satisfying the preset service rule; for each, it splices the request content parameters into a call task, executes the call task, and asynchronously calls the interface logic corresponding to the service code, thereby acquiring the service data corresponding to each busi_key.

Each busi_key and its corresponding service data are then stored into the cache.

In addition, in another embodiment of the present invention, the detailed implementation of the method for caching data is described in detail above, and therefore the repeated content is not described herein.

Fig. 5 is a schematic diagram of the main modules of an apparatus for caching data according to an embodiment of the present invention. As shown in fig. 5, the apparatus 500 for caching data includes a receiving module 501, a processing module 502, a calculating module 503, and a caching module 504. The receiving module 501 is configured to receive and analyze a service request to obtain a service code and a request content parameter; the processing module 502 is configured to acquire service data from a cache according to the service code and the request content parameter, or to acquire service data by calling the interface logic corresponding to the service code, and to return the service data; the calculating module 503 is configured to generate a message according to the service request and its processing time, and to obtain the request amount and/or the average processing time of the service request within a preset time window by consuming the message; the caching module 504 is configured to judge whether the request amount and/or the average processing time of the service request within a preset time window satisfies a preset service rule, and if so, to asynchronously call the interface logic corresponding to the service code to acquire service data and store the service data into the cache.

Optionally, the processing module 502 is further configured to:

generating a service request identifier according to the service code and the request content parameter;

judging whether the service request identification exists in the cache;

if yes, acquiring service data corresponding to the service request identification from the cache;

if not, calling an interface logic corresponding to the service code according to the request content parameter to acquire service data.

Optionally, the processing module 502 is further configured to:

splicing the service code and the request content parameter into service request information;

and calculating a digest value of the service request information by using a message digest algorithm, and using the digest value as the service request identifier.

Optionally, the content of the message includes the service code, the request content parameter, the service request identifier, the generation timestamp of the message, and the processing duration of the service request.

Optionally, the calculating module 503 is further configured to:

sending the message to message middleware;

acquiring a message from the message middleware and consuming the message;

according to the generation time stamp of the message, putting the message into a corresponding time window;

when the end time of the time window is reached, calculating the request quantity and/or the average processing time length corresponding to the service request identification in the time window;

and writing the service request identifier corresponding to the service request, the service code, the request content parameter, the request quantity corresponding to the service request identifier and/or the average processing time length into an index table.

Optionally, the cache module 504 is further configured to:

the service request meeting the preset service rule is obtained from the index table in a timing manner;

splicing the request content parameters into a calling task, executing the calling task to call an interface logic corresponding to the service code so as to obtain service data;

and storing the service request identification and the service data corresponding to the service request identification into the cache.

Optionally, the cache module 504 is further configured to:

generating a corresponding cache identifier according to the service request identifier;

storing the cache identifier and the service data corresponding to the service request identifier into the cache, and setting a validity period for the cache identifier; wherein the validity period of the cache identifier is 1 to 5 times the timing period.

As can be seen from the various embodiments described above, by adopting the technical means of calling the interface logic corresponding to the service code to acquire service data and storing the service data into the cache when the request amount and/or the average processing time of the service request within a preset time window satisfies a preset service rule, the embodiment of the present invention solves the technical problem that the prior art can only implement caching statically. By calculating the request amount and/or the average processing time of each service request within a preset time window and deciding accordingly whether to cache the corresponding service data, caching is implemented dynamically, ensuring that service requirements are met in high-concurrency scenarios.

It should be noted that, in the implementation of the apparatus for caching data according to the present invention, the above method for caching data has been described in detail, and therefore, the repeated content is not described again.

Fig. 6 shows an exemplary system architecture 600 to which the method of caching data or the apparatus for caching data of the embodiments of the invention may be applied.

As shown in fig. 6, the system architecture 600 may include terminal devices 601, 602, 603, a network 604, and a server 605. The network 604 serves to provide a medium for communication links between the terminal devices 601, 602, 603 and the server 605. Network 604 may include various types of connections, such as wire, wireless communication links, or fiber optic cables, to name a few.

A user may use the terminal devices 601, 602, 603 to interact with the server 605 via the network 604 to receive or send messages or the like. The terminal devices 601, 602, 603 may have installed thereon various communication client applications, such as shopping applications, web browser applications, search applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only).

The terminal devices 601, 602, 603 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.

The server 605 may be a server providing various services, such as a background management server (for example only) providing support for shopping websites browsed by users using the terminal devices 601, 602, 603. The background management server can analyze and process the received data such as the article information query request and feed back the processing result to the terminal equipment.

It should be noted that the method for caching data provided by the embodiment of the present invention is generally performed by the server 605, and accordingly, the apparatus for caching data is generally disposed in the server 605.

It should be understood that the number of terminal devices, networks, and servers in fig. 6 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.

Referring now to FIG. 7, shown is a block diagram of a computer system 700 suitable for use with a terminal device implementing an embodiment of the present invention. The terminal device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.

As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU) 701, which can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the system 700 are also stored. The CPU 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.

The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.

In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer readable medium, the computer program containing program code for performing the method illustrated in the flowchart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. The computer program performs the above-described functions defined in the system of the present invention when executed by the Central Processing Unit (CPU) 701.

It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer programs according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

The modules described in the embodiments of the present invention may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor includes a receiving module, a processing module, a calculating module, and a caching module, where the names of these modules do not in some cases constitute a limitation on the modules themselves.

As another aspect, the present invention also provides a computer-readable medium that may be contained in the apparatus described in the above embodiments; or may be separate and not incorporated into the device. The computer readable medium carries one or more programs which, when executed by a device, implement the method of: receiving and analyzing a service request to obtain a service code and a request content parameter; acquiring service data from a cache or acquiring the service data by calling an interface logic corresponding to the service code according to the service code and the request content parameter, and returning the service data; generating a message according to the service request and the processing time length of the service request, and acquiring the request amount and/or the average processing time length of the service request in a preset time window by consuming the message; and judging whether the request quantity and/or the average processing time length of the service request in a preset time window meet a preset service rule, if so, asynchronously calling an interface logic corresponding to the service code to acquire service data, and storing the service data into the cache.

According to the technical scheme of the embodiment of the invention, because the technical means of calling the interface logic corresponding to the service code to acquire service data and storing it into the cache when the request amount and/or the average processing time of the service request within a preset time window satisfies a preset service rule is adopted, the technical problem that caching can only be implemented statically in the prior art is solved. By calculating the request amount and/or the average processing time of each service request within a preset time window and deciding accordingly whether to cache the corresponding service data, caching is implemented dynamically, ensuring that service requirements are met in high-concurrency scenarios.

The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
