Cache processing method, electronic equipment and computer storage medium

Document No.: 1952525, published 2021-12-10

Abstract: This technology, "Cache processing method, electronic equipment and computer storage medium", was created by 张中杰 on 2021-09-14. The embodiment of the application discloses a cache processing method comprising: obtaining a CRUD operation for a cache to be operated; obtaining, from a preset cache factory, an access rule generated for the cache data of the cache to be operated; determining, according to the access rule, an operation position of the CRUD operation in the cache to be operated; and processing the cache to be operated according to the CRUD operation and the operation position. The embodiment of the application also provides an electronic device and a computer storage medium.

1. A method for processing a cache, comprising:

obtaining a CRUD operation for the cache to be operated;

obtaining an access rule generated for the cache data of the cache to be operated from a preset cache factory;

determining an operation position of the CRUD operation from the cache to be operated according to the access rule;

and processing the cache to be operated according to the CRUD operation and the operation position.

2. The method of claim 1, wherein obtaining the CRUD operation for the cache to be operated on comprises:

acquiring the CRUD operation for the cache to be operated through an interface of the CRUD operation.

3. The method of claim 1, further comprising:

determining the type of the cache to be operated;

determining an access rule corresponding to the type of the cache to be operated according to the corresponding relation between the type of the preset cache and the access rule;

and determining the access rule corresponding to the type of the cache to be operated as the access rule of the cache data of the cache to be operated.

4. The method of claim 1, further comprising:

determining the type of the cache to be operated;

determining an elimination strategy corresponding to the type of the cache to be operated according to the corresponding relation between the preset type of the cache and the elimination strategy;

and determining the elimination strategy corresponding to the type of the cache to be operated as the elimination strategy of the cache data of the cache to be operated.

5. The method of claim 1, further comprising:

determining the type of the cache to be operated;

determining an exception handling strategy corresponding to the type of the cache to be operated according to the corresponding relation between the type of the preset cache and the exception handling strategy;

and determining the exception handling strategy corresponding to the type of the cache to be operated as the exception handling strategy of the cache data of the cache to be operated.

6. The method according to any of claims 3 to 5, wherein the determining the type of the cache to be operated comprises:

when the cache to be operated is used for storing data in a memory of a CPU, determining the type of the cache to be operated as the CPU cache;

when the cache to be operated is used for storing data in a browser, determining the type of the cache to be operated as the browser cache;

when the cache to be operated is used for storing data in the CDN, determining that the type of the cache to be operated is the CDN cache;

and when the cache to be operated is used for storing data in a database, determining the type of the cache to be operated as the database cache.

7. The method according to claim 1, wherein said processing said cache to be operated according to said CRUD operation and said operation location comprises:

adding the insertion data to the operation location when the CRUD operation is an insertion operation of insertion data;

and when the CRUD operation is a deleting operation, deleting the cache data on the operation position.

8. The method of claim 1, further comprising:

capturing operation information of the cache to be operated by using an aspect (AOP) technique;

when the operation information comprises operation information of a specific operation, generating first prompt information; the first prompt information is used for prompting that the specific operation has occurred in the cache to be operated;

when the operation information contains information indicating that a specific exception has occurred, generating second prompt information; and the second prompt information is used for prompting that the specific exception has occurred in the cache to be operated.

9. An electronic device, comprising:

the first acquisition module is used for acquiring a CRUD operation for the cache to be operated;

the second obtaining module is used for obtaining an access rule generated for the cache data of the cache to be operated from a preset cache factory;

a determining module, configured to determine, according to the access rule, an operation location of the CRUD operation from the cache to be operated;

and the processing module is used for processing the cache to be operated according to the CRUD operation and the operation position.

10. An electronic device, comprising: a processor and a storage medium storing instructions executable by the processor, the storage medium relying on the processor to perform operations via a communication bus, wherein the instructions, when executed by the processor, perform the cache processing method of any one of claims 1 to 8.

11. A computer storage medium having stored thereon executable instructions that, when executed by one or more processors, perform the method of processing a cache of any one of claims 1 to 8.

Technical Field

The present application relates to a cache processing technology, and in particular, to a cache processing method, an electronic device, and a computer storage medium.

Background

At present, the use of complex, heavyweight distributed caches increases the complexity of a system and makes some simple caching scenarios unnecessarily complex; existing local caches have poor expansibility and cannot easily extend the caching strategy according to business requirements; when a local custom cache is used, most developers implement it with a simple key-value (MAP) structure, which cannot support scenarios requiring dynamic eviction; therefore, the existing cache processing methods are single and inflexible.

Disclosure of Invention

The embodiment of the application provides a cache processing method, an electronic device and a computer storage medium, which can improve the flexibility of cache processing.

The technical scheme of the application is realized as follows:

the embodiment of the application provides a cache processing method, which comprises the following steps:

obtaining a CRUD operation for the cache to be operated;

obtaining an access rule generated for the cache data of the cache to be operated from a preset cache factory;

determining an operation position of the CRUD operation from the cache to be operated according to the access rule;

and processing the cache to be operated according to the CRUD operation and the operation position.

An embodiment of the present application provides an electronic device, including:

the first acquisition module is used for acquiring a CRUD operation for the cache to be operated;

the second obtaining module is used for obtaining an access rule generated for the cache data of the cache to be operated from a preset cache factory;

a determining module, configured to determine, according to the access rule, an operation location of the CRUD operation from the cache to be operated;

and the processing module is used for processing the cache to be operated according to the CRUD operation and the operation position.

An embodiment of the present application further provides an electronic device, including: a processor and a storage medium storing instructions executable by the processor, wherein the storage medium relies on the processor to perform operations through a communication bus, and when the instructions are executed by the processor, the cache processing method of one or more of the above embodiments is performed.

The embodiment of the application provides a computer storage medium, which stores executable instructions, and when the executable instructions are executed by one or more processors, the processors execute the cache processing method of one or more embodiments.

The embodiments of the application provide a cache processing method, an electronic device and a computer storage medium. The method comprises: obtaining a CRUD operation for the cache to be operated, obtaining from a preset cache factory an access rule generated for the cache data of the cache to be operated, determining from the cache to be operated an operation position of the CRUD operation according to the access rule, and processing the cache to be operated according to the CRUD operation and the operation position. That is to say, in the embodiment of the present application, the cache factory generates the access rule of the cache data for the cache to be operated; when a CRUD operation is performed on the cache to be operated, the access rule generated for its cache data is obtained from the preset cache factory, so that the operation position of the CRUD operation can be determined in the cache to be operated. Since the cache factory can generate an access rule for the cache data of the cache to be operated according to actual requirements, the limitation of a single, fixed access rule is avoided and access rules are generated more conveniently, which facilitates CRUD operations on the cache to be operated, improves the flexibility of cache processing, and increases its expansibility.

Drawings

Fig. 1 is a schematic flowchart of an optional cache processing method according to an embodiment of the present disclosure;

Fig. 2 is a schematic structural diagram of an optional example of a software toolkit provided by an embodiment of the present application;

fig. 3 is a schematic flowchart of an example of an optional cache processing method according to an embodiment of the present disclosure;

fig. 4 is a code screenshot corresponding to an optional delete operation provided in the embodiment of the present application;

fig. 5 is a first schematic structural diagram of an electronic device according to an embodiment of the present disclosure;

fig. 6 is a second schematic structural diagram of an electronic device according to an embodiment of the present application.

Detailed Description

The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.

Example one

Fig. 1 is a schematic flowchart of an optional cache processing method provided in an embodiment of the present application; referring to fig. 1, the cache processing method may include:

s101: obtaining CRUD operation aiming at the cache to be operated;

at present, caches fall into three major categories: distributed caches such as redis, local caches such as cache, and custom-implemented caches. For distributed caches, the configuration is complex and the Application Programming Interfaces (APIs) are numerous, so a simple cache use scenario becomes very complex. Local caches have poor expansibility, and it is difficult to implement a custom caching strategy on top of them. When a local custom cache is used, most developers implement it with a simple key-value pair structure, which cannot support scenarios requiring dynamic eviction. Therefore, existing custom caches have poor flexibility and expansibility.

In order to improve flexibility and expansibility of cache processing, an embodiment of the application provides a cache processing method, and first, CRUD operation for a cache to be operated is obtained; the CRUD operation may include an insert operation (C), a query operation (R), a modify operation (U), and a delete operation (D), which is not specifically limited in this embodiment.

In order to obtain the CRUD operation for the cache to be operated, in an alternative embodiment, S101 may include:

acquiring the CRUD operation for the cache to be operated through an interface of the CRUD operation.

Specifically, in the embodiment of the present application, a corresponding interface is provided for each of the CRUD operations on the cache to be operated: an insertion interface for the insertion operation, a query interface for the query operation, a modification interface for the modification operation, and a deletion interface for the deletion operation. In this way, the insertion operation for the cache to be operated can be obtained through the insertion interface, the query operation through the query interface, the modification operation through the modification interface, and the deletion operation through the deletion interface.
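As an illustration only, a minimal Java sketch of such a cache interface might look like the following; the interface name and method signatures are assumptions, since the patent does not publish its interface definitions.

```java
// Hypothetical sketch of a cache interface exposing one method per CRUD operation.
// Names and signatures are illustrative; the patent does not define them.
public interface Cache<K, V> {
    void insert(K key, V value);   // C: insertion interface
    V query(K key);                // R: query interface
    void modify(K key, V value);   // U: modification interface
    void remove(K key);            // D: deletion interface
}
```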

It should be noted that, in the embodiment of the present application, other interfaces than the above-mentioned interface are also provided, and the corresponding functions are realized through the interfaces provided in the embodiment of the present application.

S102: obtaining an access rule generated for cache data of a cache to be operated from a preset cache factory;

specifically, for the cache to be operated, the cache factory generates an access rule for the cache data of the cache to be operated in advance, so that the cache to be operated processes the cache data according to the generated access rule.

Then, when receiving the CRUD operation of the to-be-operated cache, in order to implement the CRUD operation of the to-be-operated cache, an access rule of cache data of the to-be-operated cache needs to be obtained from a preset cache factory, so as to implement the CRUD operation of the to-be-operated cache according to the access rule, where the access rule may include: first-in first-out, first-in last-out, random in and out, in and out according to weight, and the like, which is not specifically limited in this embodiment of the present application.

Here, the access rule is generated for the cache data by the cache factory based on the factory pattern, which avoids the limitation that a fixed, single access rule imposes on a caching scheme; an access rule can be conveniently generated for a cache through the cache factory, so as to improve the flexibility and expansibility of the cache.
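A minimal sketch of how such a cache factory could be organized is shown below, assuming the factory-pattern reading described above; the class names, the AccessRule enumeration and the FifoCache implementation (sketched further below) are hypothetical and not taken from the patent.

```java
// Hypothetical cache factory based on the factory pattern: it produces a cache
// whose access (in/out) rule matches the requested rule type.
public final class CacheFactory {
    public enum AccessRule { FIFO, FILO, RANDOM, WEIGHTED }

    public static <K, V> Cache<K, V> create(AccessRule rule, int capacity) {
        switch (rule) {
            case FIFO:
                return new FifoCache<>(capacity);  // first-in-first-out rule (sketched below)
            // FILO, RANDOM and WEIGHTED caches would be produced here in the same way,
            // each implementing its own access rule behind the common Cache interface.
            default:
                throw new IllegalArgumentException("Unsupported rule: " + rule);
        }
    }
}
```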

S103: determining an operation position of CRUD operation from a cache to be operated according to an access rule;

after the access rule of the cache data of the cache to be operated is obtained, an operation position may be determined for each CRUD operation according to the access rule. Specifically, when the access rule is first-in-first-out, taking an insertion operation as an example, in order to insert the data carried by the insertion operation into the cache to be operated, the insertion position of that data, that is, the operation position, may be determined according to the access rule; and if the cache to be operated is full, the earliest-inserted cache data in the cache to be operated may be deleted.

Therefore, after the operation position is determined, the CRUD operation can be executed according to the operation position, namely, the operation position of the CRUD operation can be conveniently known on the basis of knowing the access rule, and the cache processing is facilitated.

S104: and processing the cache to be operated according to the CRUD operation and the operation position.

Specifically, for each of the CRUD operations, the way to process the cache to be operated is different, and in an alternative embodiment, S104 may include:

when the CRUD operation is an insertion operation for the insertion data, adding the insertion data to an operation position;

when CRUD operation is deletion operation, the cache data at the operation position is deleted.

That is, for an insertion operation, the purpose is to insert data into the cache to be operated. In order to do so, the insertion position of the data, that is, the above operation position, must first be determined, and it can be determined according to the access rule. For example, under first-in-first-out, the data currently to be inserted is placed after the last inserted data; the operation position of the insertion operation is thus determined, and the data is inserted at that position in the cache to be operated, implementing the first-in-first-out access rule.

For a deletion operation, the purpose is to delete cache data from the cache to be operated. In order to do so, the position of the cache data to be deleted, that is, the operation position, must first be determined, and it can likewise be determined according to the access rule. For example, under first-in-first-out, the cache data currently to be deleted is the earliest-inserted data; the operation position of the deletion operation is thus determined, and the earliest-inserted cache data is deleted from that position, implementing the first-in-first-out access rule.

The cache processing method can realize the insertion and deletion of the cache data in the cache, and different access rules are generated by the cache factory, thereby being beneficial to realizing the processing modes of different caches.
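For illustration, one possible first-in-first-out implementation of the hypothetical Cache interface above is sketched below; it is only one way to realize the FIFO access rule and is not the patent's implementation.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical FIFO cache: under the first-in-first-out access rule, the operation
// position for an insertion is the tail, and the earliest-inserted entry is evicted
// when the capacity is exceeded. A LinkedHashMap in insertion order gives this behaviour.
public class FifoCache<K, V> implements Cache<K, V> {
    private final int capacity;
    private final Map<K, V> store;

    public FifoCache(int capacity) {
        this.capacity = capacity;
        this.store = new LinkedHashMap<K, V>(16, 0.75f, false) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > FifoCache.this.capacity;  // evict the first-in entry
            }
        };
    }

    @Override public void insert(K key, V value) { store.put(key, value); }
    @Override public V query(K key)              { return store.get(key); }
    @Override public void modify(K key, V value) { store.put(key, value); }
    @Override public void remove(K key)          { store.remove(key); }
}
```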

In order to implement different processing on different caches, in an alternative embodiment, the method may further include:

determining the type of a cache to be operated;

determining an access rule corresponding to the type of the cache to be operated according to the corresponding relation between the type of the preset cache and the access rule;

and determining the access rule corresponding to the type of the cache to be operated as the access rule of the cache data of the cache to be operated.

Specifically, the type of the cache to be operated is determined, and the corresponding relationship between the type of the cache and the access rule is preset, that is, the corresponding access rule is set for different cache types, so that compared with the existing cache which is provided with a single fixed access rule, the flexibility of the cache is improved.

Here, corresponding access rules are set for different types of caches, so that when a CRUD operation is performed, the insertion and deletion operations are processed differently depending on the type of cache. Taking first-in-first-out as an example, an insertion operation places the inserted data after the last inserted data, and a deletion operation deletes the cache data that entered the cache first. Taking an access rule that orders data by weight value as an example, for inserted data, the weight value of the inserted data is sorted together with the weight values of the existing cache data, and the position after the cache data whose weight value is ranked immediately before that of the inserted data is determined as the insertion position; the deletion operation is handled similarly and is not described again here.
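A small sketch of how such a preset correspondence between cache type and access rule might be held is given below; the cache types follow the classification used later in the text, while the specific rule assignments and all class names are assumptions.

```java
import java.util.EnumMap;
import java.util.Map;

// Hypothetical preset correspondence between cache type and access rule.
// The rule assigned to each type here is an example only.
public final class AccessRuleRegistry {
    public enum CacheType { CPU, BROWSER, CDN, DATABASE }

    private static final Map<CacheType, CacheFactory.AccessRule> RULES =
            new EnumMap<>(CacheType.class);
    static {
        RULES.put(CacheType.CPU, CacheFactory.AccessRule.FIFO);          // example only
        RULES.put(CacheType.DATABASE, CacheFactory.AccessRule.WEIGHTED); // example only
    }

    // Returns the access rule corresponding to the type of the cache to be operated.
    public static CacheFactory.AccessRule ruleFor(CacheType type) {
        return RULES.get(type);
    }
}
```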

In addition, in order to implement different processing on different caches, in an optional embodiment, the method may further include:

determining the type of a cache to be operated;

determining an elimination strategy corresponding to the type of the cache to be operated according to the corresponding relation between the preset type of the cache and the elimination strategy;

and determining the elimination strategy corresponding to the type of the cache to be operated as the elimination strategy of the cache data of the cache to be operated.

That is to say, a correspondence between cache types and elimination strategies is preset, that is, corresponding elimination strategies are set for different types of caches. Cache data in a cache can then be deleted according to its elimination strategy, where the elimination strategy may be a timed elimination strategy, a random elimination strategy, or an elimination strategy that migrates cache data; this is not specifically limited in the embodiment of the present application. Compared with an existing cache with a single, fixed elimination strategy, the flexibility of the cache is improved.

Here, corresponding elimination policies are set for different types of caches, so that when a CRUD operation is performed, the elimination policy differs according to the type of cache. For example, if the elimination policy is to empty the cache every 30 minutes, the cache data in the cache is cleared every 30 minutes; if the elimination policy is to delete cache data from a weight-based cache every 30 minutes according to weight value, the cache data whose weight value is lower than a preset threshold is deleted every 30 minutes.
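As one possible illustration of a timed, weight-based elimination policy of the kind described above, the sketch below removes low-weight entries every 30 minutes; the class, its parameters and the weight bookkeeping are hypothetical.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical timed elimination strategy: every 30 minutes, entries whose weight
// falls below a preset threshold are removed from the attached cache map.
public class WeightEvictionPolicy {
    private final Map<String, Integer> weights = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public <V> void attach(Map<String, V> cache, int weightThreshold) {
        scheduler.scheduleAtFixedRate(
                () -> cache.keySet().removeIf(
                        key -> weights.getOrDefault(key, 0) < weightThreshold),
                30, 30, TimeUnit.MINUTES);  // run the eviction sweep every 30 minutes
    }

    public void setWeight(String key, int weight) { weights.put(key, weight); }
}
```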

Further, in order to implement different processing on different caches, in an optional embodiment, the method may further include:

determining the type of a cache to be operated;

determining an exception handling strategy corresponding to the type of the cache to be operated according to the corresponding relation between the type of the preset cache and the exception handling strategy;

and determining the exception handling strategy corresponding to the type of the cache to be operated as the exception handling strategy of the cache data of the cache to be operated.

That is to say, a correspondence between cache types and exception handling policies is preset, that is, corresponding exception handling policies are set for different types of caches. The cache data of a cache can then be handled according to its exception handling policy, where the exception handling policy may be to prompt for a specific operation, and the specific operation may be an insertion operation for specific data, a query operation for specific data, or a modification operation for specific data; this is not specifically limited in the embodiment of the present application. Compared with an existing cache with a single, fixed exception handling strategy, the flexibility of the cache is improved.

Here, corresponding exception handling policies are set for different types of caches, so that when a CRUD operation is performed, the exception handling policy differs according to the type of cache. Taking an exception handling policy targeting the insertion of specific data as an example, when an insertion operation containing the specific data is captured in the operation information by using the aspect technique, information prompting that an exception has occurred is generated.

In order to determine the type of the cache to be operated, in an alternative embodiment, determining the type of the cache to be operated includes:

when the cache to be operated is used for storing data in a memory of a Central Processing Unit (CPU), determining the type of the cache to be operated as the CPU cache;

when the cache to be operated is used for storing data in the browser, determining the type of the cache to be operated as the browser cache;

when the cache to be operated is used for storing data in a Content Delivery Network (CDN), determining the type of the cache to be operated as the CDN cache;

and when the cache to be operated is used for storing the data in the database, determining the type of the cache to be operated as the database cache.

Specifically, the cache may be divided into a CPU cache, a browser cache, a CDN cache, a database cache, and the like, which is not specifically limited in this embodiment of the present application.

Here, the cache to be operated may be classified according to its use. When the cache to be operated is used to store data for the memory of the CPU, its type is determined to be the CPU cache; the capacity of the CPU cache is much smaller than the memory, but its exchange speed is much faster. When the cache to be operated is used to store data in the browser, its type is determined to be the browser cache; the browser cache can cache static resources such as pictures, js and css, which are contents that do not change frequently, so that they do not have to be requested every time. Similarly, when the cache to be operated is used to store data in the CDN, its type is determined to be the CDN cache; the client checks the browser cache first, and if that cache has expired, the client sends a request to the CDN; if the CDN finds that its cached data has not expired, it returns a response directly, so only two steps are needed; but if the CDN cache has expired, a request must be initiated to the application server to obtain a new data response, and this new data is cached in the CDN according to a certain caching policy. When the cache to be operated is used to store data in the database, its type is determined to be the database cache; a cache is added between the application server and the database because the data stored in the database is persistent, so reading and writing involve disk Input/Output (IO) operations, whereas the read/write speed of memory is much faster than that of disk.
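A minimal sketch of this type-determination step might look as follows, assuming an enumeration of storage targets; all names are illustrative, and the CacheType enumeration is the one assumed in the earlier registry sketch.

```java
// Hypothetical classifier for the type-determination step: the cache type follows
// from where the cache to be operated stores its data.
public final class CacheTypeClassifier {
    public enum StorageTarget { CPU_MEMORY, BROWSER, CDN, DATABASE }

    public static AccessRuleRegistry.CacheType classify(StorageTarget target) {
        switch (target) {
            case CPU_MEMORY: return AccessRuleRegistry.CacheType.CPU;      // data for the CPU's memory
            case BROWSER:    return AccessRuleRegistry.CacheType.BROWSER;  // static resources in the browser
            case CDN:        return AccessRuleRegistry.CacheType.CDN;      // data cached on CDN nodes
            case DATABASE:   return AccessRuleRegistry.CacheType.DATABASE; // cache in front of the database
            default: throw new IllegalArgumentException("unknown target: " + target);
        }
    }
}
```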

For a specific operation and a specific exception, in order to capture the specific operation and the specific exception, in an optional embodiment, the method may further include:

capturing operation information of the cache to be operated by using the aspect technique;

when the operation information comprises operation information of a specific operation, generating first prompt information; the first prompt information is used for prompting that the specific operation has occurred in the cache to be operated;

when the operation information contains information indicating that a specific exception has occurred, generating second prompt information; and the second prompt information is used for prompting that the specific exception has occurred in the cache to be operated.

Specifically, all operations occurring in the cache may be listened to by using the aspect technique, and the operation information of the cache to be operated is captured; the operation information may include operation information of an insertion operation, a query operation, a modification operation, a deletion operation, and the like, which is not particularly limited in the embodiment of the present application.

In addition, an abnormal operation may occur while the cache is being operated on. Therefore, after the operation information of the cache to be operated is captured, it is necessary to determine whether it includes operation information of a specific operation or information indicating that a specific exception has occurred. When the operation information includes operation information of a specific operation, information prompting that the specific operation has occurred in the cache to be operated is generated; for example, if the specific operation is an insertion operation of specific data, the first prompt information is generated when the operation information includes the operation information of that insertion operation.

When the operation information contains information indicating that a specific exception has occurred, information prompting that the specific exception has occurred in the cache to be operated is generated; for example, if the specific exception relates to clearing data in the cache, the second prompt information is generated when the operation information contains information indicating that data in the cache is being cleared. In addition, when a specific exception occurs, an exception handling policy corresponding to the specific exception and to the type of the cache to be operated may be adopted to handle it.
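The patent realizes this capture with an aspect (AOP) mechanism; as a rough stand-in that stays within the standard JDK, the sketch below uses a dynamic proxy to intercept calls on the hypothetical Cache interface, emitting the first prompt for a watched operation and the second prompt when an exception occurs. It illustrates the interception idea only and is not the patent's implementation.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Proxy;

// Rough stand-in for the aspect-based listener: a JDK dynamic proxy wraps a cache
// instance, inspects every operation, emits a first prompt when a watched ("specific")
// operation occurs, and emits a second prompt when an exception is thrown.
public final class CacheListener {
    @SuppressWarnings("unchecked")
    public static <K, V> Cache<K, V> watch(Cache<K, V> target, K watchedKey) {
        InvocationHandler handler = (proxy, method, args) -> {
            try {
                if ("insert".equals(method.getName()) && args != null
                        && watchedKey.equals(args[0])) {
                    // first prompt information: a specific operation occurred
                    System.out.println("[prompt-1] specific insert on key " + args[0]);
                }
                return method.invoke(target, args);
            } catch (InvocationTargetException e) {
                // second prompt information: a specific exception occurred
                System.out.println("[prompt-2] exception during " + method.getName()
                        + ": " + e.getCause());
                throw e.getCause();
            }
        };
        return (Cache<K, V>) Proxy.newProxyInstance(
                Cache.class.getClassLoader(), new Class<?>[] {Cache.class}, handler);
    }
}
```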

The following describes a method for processing a cache in one or more embodiments described above by way of example.

Fig. 2 is a schematic structural diagram of an optional example of a software toolkit provided in an embodiment of the present application. As shown in fig. 2, the software toolkit in this example may be embedded in any software system, with configuration completed through annotations in the system. The technical architecture shown in fig. 2 includes: a cache factory, a cache interface, cache implementations, a cache listener, cache exception handling and unified log handling.

The cache factory: based on the factory pattern, all types of caches are produced by the factory, and a user can use any of the caches out of the box simply through injection of the cache factory.

The cache interface: the common behaviors of a cache, such as inserting into the cache, querying the cache and clearing the cache, are abstracted into an interface, while the differentiating characteristics of each cache, such as different elimination strategies and different listening mechanisms, are realized by the specific cache types.

The cache listener: by using the 'aspect' mechanism of the software system, specific behaviors occurring in the cache can be monitored and handled in a personalized way. For example, when specific data is inserted into the cache, an alarm can be raised, a mail can be sent, or a message can be sent to a message queue for a listener processor; the user can freely implement such handling in the cache listener.

Unified exception handling and unified log handling: a unified aspect is configured for exception handling, the exceptions occurring in the cache are captured, and specific exceptions are handled in a personalized way; for example, if clearing the cache fails, the user needs to be prompted. Likewise, a unified aspect is configured for log handling, specific logs generated in the cache are captured, and the specific logs are handled in a personalized way.

Fig. 3 is a schematic flowchart of an example of an optional cache processing method according to an embodiment of the present application, and as shown in fig. 3, the cache processing method may include:

S301: acquiring a cache factory;

S302: obtaining a cache type from a cache factory;

Here, obtaining the cache type is equivalent to obtaining the access rule of the cache data.

S303: performing CRUD operation on the cache;

S304: returning an operation result;

wherein, the above S301 to S304 are a normal cache processing flow, and if no exception occurs in the cache, the operation result is returned normally in the flow.

S305: capturing a specific operation in a cache;

S306: processing a specific operation;

S307: capturing a specific exception in a cache;

S308: handling the specific exception.

Specifically, the cache listener captures specific operations and specific exceptions by using the aspect technique; it can be dynamically extended in the software system without affecting the normal business flow, is decoupled from the other modules of the software system, and is lightweight.

If any flow node generates an exception during operation, the exception is captured in the form of an aspect by the unified exception handling, and no unhandled exception is allowed to escape. The handling proceeds according to the following flow:

when a cache operation is performed, all exceptions during the CRUD operation are captured by using the aspect technique; when an exception is captured, the exception handling method is called and each exception is handled specifically, so that the user encountering the exception can clearly know its cause and solution.

The method in this example may be used in the following business scenarios:

first, basic usage scenarios

i. The back-end developer introduces the cache toolkit into their own software system.

ii. Obtain a cache of a particular type through injection of the cache factory.

iii. Perform caching operations using the generated specific cache.

Second, high-order usage scenarios

i. The back-end developer introduces a caching tool into its own software system.

ii. Customize a cache by implementing the cache interface in the toolkit, customizing the elimination strategy, the listening mode and the like.

iii. Obtain the custom-type cache through injection of the cache factory.

iv. Perform caching using the generated custom cache.

The specific usage of the tool is as follows: inject the cache factory into the class that needs to use caching; implement a cache listening strategy according to the specific business scenario; and perform a series of caching operations using the generated cache.
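Putting the hypothetical sketches above together, usage might look roughly like this; the key name and printed output are illustrative only.

```java
// Hypothetical end-to-end usage of the sketches above: obtain a FIFO cache from the
// factory, wrap it with the listener, and perform CRUD operations on it.
public class Demo {
    public static void main(String[] args) {
        Cache<String, String> cache =
                CacheFactory.create(CacheFactory.AccessRule.FIFO, 3);
        Cache<String, String> watched = CacheListener.watch(cache, "hot-key");

        watched.insert("hot-key", "value");    // triggers the first prompt information
        System.out.println(watched.query("hot-key"));
        watched.remove("hot-key");
    }
}
```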

Fig. 4 is a code screenshot corresponding to an optional delete operation according to an embodiment of the present application. As shown in fig. 4, since the cache capacity is only 3, when a fourth element is added, the first-inserted object is removed according to the First-In First-Out (FIFO) rule.
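The behaviour described for fig. 4 can be reproduced with the hypothetical FifoCache sketched earlier: with a capacity of 3, the fourth insertion evicts the first-in entry.

```java
// Reproduces the fig. 4 scenario with the hypothetical FifoCache: capacity 3,
// so inserting a fourth element evicts the first-in entry.
public class FifoDemo {
    public static void main(String[] args) {
        Cache<Integer, String> fifo = new FifoCache<>(3);
        fifo.insert(1, "a");
        fifo.insert(2, "b");
        fifo.insert(3, "c");
        fifo.insert(4, "d");               // fourth element: entry 1 is evicted
        System.out.println(fifo.query(1)); // null - the first-in object was removed
        System.out.println(fifo.query(4)); // "d"
    }
}
```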

As the above example shows, for a basic cache application scenario, the user can obtain a built-in cache directly from the cache factory and use it immediately; it also has a certain expansibility, since the user can define a listening strategy, an exception handling mechanism and the like, so the threshold for use is low. For a complex cache application scenario, the user can define a custom cache implementation class that implements the cache interface, completing the implementation and use of a specific cache in the system at a low extension cost.

The embodiment of the application provides a cache processing method, which comprises the following steps: obtaining a CRUD operation for the cache to be operated, obtaining from a preset cache factory an access rule generated for the cache data of the cache to be operated, determining from the cache to be operated an operation position of the CRUD operation according to the access rule, and processing the cache to be operated according to the CRUD operation and the operation position. That is to say, in the embodiment of the present application, the cache factory generates the access rule of the cache data for the cache to be operated; when a CRUD operation is performed on the cache to be operated, the access rule generated for its cache data is obtained from the preset cache factory, so that the operation position of the CRUD operation can be determined in the cache to be operated. Since the cache factory can generate an access rule for the cache data of the cache to be operated according to actual requirements, the limitation of a single, fixed access rule is avoided and access rules are generated more conveniently, which facilitates CRUD operations on the cache to be operated, improves the flexibility of cache processing, and increases its expansibility.

Example two

Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application, as shown in fig. 5, including: a first acquisition module 51, a second acquisition module 52, a determination module 53 and a processing module 54; wherein:

a first obtaining module 51, configured to obtain a CRUD operation for the cache to be operated;

a second obtaining module 52, configured to obtain, from a preset cache factory, an access rule generated for the cache data of the cache to be operated;

a determining module 53, configured to determine, according to the access rule, an operation location of CRUD operation from the cache to be operated;

and the processing module 54 is used for processing the cache to be operated according to the CRUD operation and the operation position.

Optionally, the first obtaining module 51 is specifically configured to:

acquiring the CRUD operation for the cache to be operated through an interface of the CRUD operation.

Optionally, the electronic device is further configured to:

determining the type of a cache to be operated;

determining an access rule corresponding to the type of the cache to be operated according to the corresponding relation between the type of the preset cache and the access rule;

and determining the access rule corresponding to the type of the cache to be operated as the access rule of the cache data of the cache to be operated.

Optionally, the electronic device is further configured to:

determining the type of a cache to be operated;

determining an elimination strategy corresponding to the type of the cache to be operated according to the corresponding relation between the preset type of the cache and the elimination strategy;

and determining the elimination strategy corresponding to the type of the cache to be operated as the elimination strategy of the cache data of the cache to be operated.

Optionally, the electronic device is further configured to:

determining the type of a cache to be operated;

determining an exception handling strategy corresponding to the type of the cache to be operated according to the corresponding relation between the type of the preset cache and the exception handling strategy;

and determining the exception handling strategy corresponding to the type of the cache to be operated as the exception handling strategy of the cache data of the cache to be operated.

Optionally, the determining, by the electronic device, the type of the cache to be operated includes:

when the cache to be operated is used for storing data in the memory of the CPU, determining the type of the cache to be operated as the CPU cache;

when the cache to be operated is used for storing data in the browser, determining the type of the cache to be operated as the browser cache;

when the cache to be operated is used for storing data in the CDN, determining the type of the cache to be operated as the CDN cache;

and when the cache to be operated is used for storing the data in the database, determining the type of the cache to be operated as the database cache.

Optionally, the processing module 54 is specifically configured to:

when the CRUD operation is an insertion operation for the insertion data, adding the insertion data to an operation position;

and when the CRUD operation is a deleting operation, deleting the cache data at the operation position.

In practical applications, the first obtaining module 51, the second obtaining module 52, the determining module 53 and the Processing module 54 may be implemented by a processor located on an electronic device, specifically, a Central Processing Unit (CPU), a Microprocessor Unit (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.

Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application, and as shown in fig. 6, an embodiment of the present application provides an electronic device 600, including:

a processor 61 and a storage medium 62 storing instructions executable by the processor 61, wherein the storage medium 62 relies on the processor 61 to perform operations through a communication bus 63, and when the instructions are executed by the processor 61, the cache processing method according to the first embodiment is performed.

It should be noted that, in practical applications, the various components in the terminal are coupled together by a communication bus 63. It will be appreciated that the communication bus 63 is used to enable communications among the components. The communication bus 63 includes a power bus, a control bus, and a status signal bus, in addition to a data bus. But for clarity of illustration the various buses are labeled in figure 6 as communication bus 63.

The embodiment of the present application provides a computer storage medium, which stores executable instructions, and when the executable instructions are executed by one or more processors, the processors execute the cache processing method according to the first embodiment.

The computer-readable storage medium may be a ferroelectric random access memory (FRAM), a Read Only Memory (ROM), a Programmable Read Only Memory (PROM), an Erasable Programmable Read Only Memory (EPROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM), among others.

As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.

The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

The above description is only a preferred embodiment of the present application, and is not intended to limit the scope of the present application.
