Cache design management method, apparatus, device, and computer-readable storage medium


Note: this technique, "Cache design management method, apparatus, device, and computer-readable storage medium", was designed and created by 许晓强 on 2021-01-20. Its main content is as follows. The application relates to the field of computer networks and provides a cache design management method, apparatus, device, and computer-readable storage medium, so that a cache in which a problem occurs can be accurately located and overall system collapse caused by the crash of a central node can be avoided. The method comprises: mapping, based on a selected function, the identifier B_ID of any one of a plurality of caches to the process number of a target application process; acquiring, at a preset period, the online status of each cache created on the target application process according to the process number of the target application process; and managing the caches created on the target application process based on a cache eviction algorithm according to the online status of each cache on the target application process. The technical scheme of the present application can, on the one hand, quickly locate the application process where a problematic cache resides according to the mapping relationship, and, on the other hand, avoid the risk, present in the prior art, that the crash of a single central node brings down the entire system.

1. A cache design management method, the method comprising:

mapping, based on a selected function, the identifier B_ID of any one of a plurality of caches to a process number of a target application process, wherein the target application process is the application process on which the any one cache is to be created;

acquiring, at a preset period, the online status of each cache created on the target application process according to the process number of the target application process;

and managing the caches created on the target application process based on a cache eviction algorithm according to the online status of each cache on the target application process.

2. The cache design management method according to claim 1, wherein the mapping, based on the selected function, of the identifier B_ID of any one of the plurality of caches to the process number of the target application process comprises:

determining an optimal hash function according to a collision minimization principle;

taking the identifier B_ID of the any one cache as a key, mapping the identifier B_ID to a hash value H_bid using the optimal hash function;

and mapping the hash value H_bid to the process number of the target application process using a preset function.

3. The cache design management method of claim 2, wherein the determining an optimal hash function according to a collision minimization principle comprises:

hashing the identifier B_ID using a current candidate hash function;

decoding the hash result of the current candidate hash function;

and if a base overflow of the current bit occurs when the decoded results are accumulated, replacing the current candidate hash function with the next candidate hash function to continue hashing the identifier B_ID and decoding the hashed result, and selecting, as the optimal hash function, the candidate hash function for which no base overflow occurs when the decoded results are accumulated.

4. The cache design management method of claim 2, wherein the determining an optimal hash function according to a collision minimization principle comprises:

randomly selecting a hash function Hs from the candidate hash function set;

hashing the identifier B_ID using the randomly selected hash function Hs to obtain a reduced key value;

if the reduced key value already exists in the hash bucket, selecting another hash function from the candidate hash function set to hash the identifier B_ID, until the resulting reduced key value does not exist in the hash bucket;

and selecting, as the optimal hash function, the hash function whose hash of the identifier B_ID yields a reduced key value that does not exist in the hash bucket.

5. The cache design management method according to claim 1, wherein the acquiring, at a preset period, the online status of each cache created on the target application process according to the process number of the target application process comprises:

counting the number of online caches on the target application process at intervals of the preset period.

6. The cache design management method according to claim 1, wherein the online status of each cache on the target application process includes whether the number of online caches exceeds a first upper limit and a second upper limit tolerable by the target application process, and the managing the caches created on the target application process based on a cache eviction algorithm according to the online status of each cache on the target application process comprises:

determining whether the number of online caches exceeds the first upper limit and the second upper limit tolerable by the target application process;

if the number of online caches exceeds the first upper limit and does not exceed the second upper limit, evicting a first number of caches from the target application process based on the cache eviction algorithm;

and if the number of online caches exceeds the second upper limit, evicting a second number of caches from the target application process based on the cache eviction algorithm.

7. The cache design management method according to claim 6, wherein the evicting of the first or second number of caches from the target application process based on the cache eviction algorithm comprises:

counting access information of each cache on the target application process to obtain access statistics for each cache;

determining, according to the access statistics of each cache, that the access pattern of the caches on the target application process has changed from a first access pattern to a second access pattern;

switching from a first cache eviction algorithm corresponding to the first access pattern to a second cache eviction algorithm corresponding to the second access pattern;

and evicting the first or second number of caches from the target application process using the second cache eviction algorithm.

8. The cache design management method according to claim 6, wherein before the evicting of the first or second number of caches from the target application process based on the cache eviction algorithm, the method further comprises:

classifying the caches on the target application process into a least recently used (LRU) cache and a least frequently used (LFU) cache;

creating an LRU evicted-data list corresponding to the LRU cache and an LFU evicted-data list corresponding to the LFU cache;

when data whose access frequency is greater than a frequency threshold exists in the LRU cache, transferring that data from the LRU cache to the LFU cache;

when the number of hits on the LRU evicted-data list reaches a first hit threshold, increasing the capacity of the LRU cache and decreasing the capacity of the LFU cache; and

when the number of hits on the LFU evicted-data list reaches a second hit threshold, increasing the capacity of the LFU cache and decreasing the capacity of the LRU cache.

9. An apparatus for cache design management, the apparatus comprising:

a mapping module, configured to map, based on a selected function, the identifier B_ID of any one of a plurality of caches to a process number of a target application process, wherein the target application process is the application process on which the any one cache is to be created;

an acquisition module, configured to acquire, at a preset period, the online status of each cache created on the target application process according to the process number of the target application process;

and a cache management module, configured to manage, based on a cache eviction algorithm, the caches created on the target application process according to the online status of each cache on the target application process.

10. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the cache design management method according to any one of claims 1 to 8 when executing the computer program.

11. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the cache design management method according to any one of claims 1 to 8.

Technical Field

The present invention relates to the field of computer networks, and in particular, to a method, an apparatus, a device, and a computer-readable storage medium for managing a cache design.

Background

When designing the server of a business system such as a game or an app, the server needs to be implemented as a multi-process distributed system in order to meet ever-increasing concurrency requirements. The processes of a multi-process server may be deployed on different machines to provide high-performance, highly available, and scalable application services. In a large concurrent server, multiple caches are often used as an intermediate layer between the application processes and the database to relieve the read and write pressure on the database.

For a multi-process server, the existing cache design management approach is mainly to create multiple caches on different application processes at random, and to manage the application processes, or the caches on them, through a central node. When accessing a cache, the central node is accessed first to obtain the cache location, and the cache is then operated on in the corresponding application process.

However, under the existing cache design management approach described above, when a problem occurs in a certain cache it is difficult to locate that cache; moreover, with central-node management, availability is low: once the central node fails, the whole system is affected.

Disclosure of Invention

The application provides a cache design management method, apparatus, device, and computer-readable storage medium, so that a cache in which a problem occurs can be accurately located and overall system collapse caused by the crash of a central node can be avoided.

In one aspect, the present application provides a cache design management method, including:

mapping, based on a selected function, the identifier B_ID of any one of a plurality of caches to a process number of a target application process, wherein the target application process is the application process on which the any one cache is to be created;

acquiring, at a preset period, the online status of each cache created on the target application process according to the process number of the target application process;

and managing the caches created on the target application process based on a cache eviction algorithm according to the online status of each cache on the target application process.

Optionally, the mapping, based on the selected function, of the identifier B_ID of any one of the plurality of caches to the process number of the target application process includes: determining an optimal hash function according to a collision minimization principle; taking the identifier B_ID of the any one cache as a key, mapping the identifier B_ID to a hash value H_bid using the optimal hash function; and mapping the hash value H_bid to the process number of the target application process using a preset function.

Optionally, the determining an optimal hash function according to a collision minimization principle includes: hashing the identifier B_ID using a current candidate hash function; decoding the hash result of the current candidate hash function; and if a base overflow of the current bit occurs when the decoded results are accumulated, replacing the current candidate hash function with the next candidate hash function to continue hashing the identifier B_ID and decoding the hashed result, and selecting, as the optimal hash function, the candidate hash function for which no base overflow occurs when the decoded results are accumulated.

Optionally, the determining an optimal hash function according to a collision minimization principle includes: randomly selecting a hash function Hs from a candidate hash function set; hashing the identifier B_ID using the randomly selected hash function Hs to obtain a reduced key value; if the reduced key value already exists in the hash bucket, selecting another hash function from the candidate hash function set to hash the identifier B_ID, until the resulting reduced key value does not exist in the hash bucket; and selecting, as the optimal hash function, the hash function whose hash of the identifier B_ID yields a reduced key value that does not exist in the hash bucket.

Optionally, the acquiring, at a preset period, the online status of each cache created on the target application process according to the process number of the target application process includes: counting the number of online caches on the target application process at intervals of the preset period.

Optionally, the online status of each cache on the target application process includes whether the number of online caches exceeds a first upper limit and a second upper limit tolerable by the target application process, and the managing, according to the online status of each cache on the target application process, the caches created on the target application process based on a cache eviction algorithm includes: determining whether the number of online caches exceeds the first upper limit and the second upper limit tolerable by the target application process; if the number of online caches exceeds the first upper limit and does not exceed the second upper limit, evicting a first number of caches from the target application process based on the cache eviction algorithm; and if the number of online caches exceeds the second upper limit, evicting a second number of caches from the target application process based on the cache eviction algorithm.

Optionally, the evicting of the first or second number of caches from the target application process based on the cache eviction algorithm includes: counting access information of each cache on the target application process to obtain access statistics for each cache; determining, according to the access statistics of each cache, that the access pattern of the caches on the target application process has changed from a first access pattern to a second access pattern; switching from a first cache eviction algorithm corresponding to the first access pattern to a second cache eviction algorithm corresponding to the second access pattern; and evicting the first or second number of caches from the target application process using the second cache eviction algorithm.

Optionally, before the evicting of the first or second number of caches from the target application process based on the cache eviction algorithm, the method further includes: classifying the caches on the target application process into a least recently used (LRU) cache and a least frequently used (LFU) cache; creating an LRU evicted-data list corresponding to the LRU cache and an LFU evicted-data list corresponding to the LFU cache; when data whose access frequency is greater than a frequency threshold exists in the LRU cache, transferring that data from the LRU cache to the LFU cache; when the number of hits on the LRU evicted-data list reaches a first hit threshold, increasing the capacity of the LRU cache and decreasing the capacity of the LFU cache; and when the number of hits on the LFU evicted-data list reaches a second hit threshold, increasing the capacity of the LFU cache and decreasing the capacity of the LRU cache.

In another aspect, the present application provides a cache design management apparatus, including:

a mapping module, configured to map, based on a selected function, the identifier B_ID of any one of a plurality of caches to a process number of a target application process, wherein the target application process is the application process on which the any one cache is to be created;

an acquisition module, configured to acquire, at a preset period, the online status of each cache created on the target application process according to the process number of the target application process;

and a cache management module, configured to manage, based on a cache eviction algorithm, the caches created on the target application process according to the online status of each cache on the target application process.

In a third aspect, the present application provides a computer device, where the computer device includes a memory and a processor, where the memory stores a computer program, and the processor executes the steps in the cache design management method according to any one of the above embodiments by calling the computer program stored in the memory.

In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, where the computer program is suitable for being loaded by a processor to execute the steps in the cache design management method according to any one of the above embodiments.

As can be seen from the technical solutions provided above, on one hand, since a cache is mapped to the process number of a target application process according to its identifier and is created on the target application process corresponding to that process number, once a problem occurs in the cache, the application process where it resides can be quickly located according to the mapping relationship. On the other hand, a cache can be created on whichever application process the selected function maps it to, and the application processes are independent and equal; compared with the central-node management of the prior art, the technical scheme of the present application is in effect decentralized. Therefore, even if a problem occurs in one cache and/or the application process where it resides, only that cache and/or application process is affected, and other caches and application processes are not, so the risk in the prior art that the crash of a single central node brings down the whole system is avoided.

Drawings

In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.

Fig. 1 is a flowchart of a cache design management method according to an embodiment of the present application;

fig. 2 is a schematic diagram that maps identifiers of 6 caches to 3 process numbers according to an embodiment of the present application;

fig. 3 is a schematic diagram of mapping, based on a selected function, the identifier B_ID of any one of a plurality of caches to a process number of a target application process according to an embodiment of the present application;

fig. 4 is a schematic structural diagram of a cache design management apparatus according to an embodiment of the present application;

fig. 5 is a schematic structural diagram of a cache design management apparatus according to another embodiment of the present application;

fig. 6 is a schematic structural diagram of an apparatus provided in an embodiment of the present application.

Detailed Description

The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

In this specification, adjectives such as first and second may only be used to distinguish one element or action from another, without necessarily requiring or implying any actual such relationship or order. References to an element or component or step (etc.) should not be construed as limited to only one of the element, component, or step, but rather to one or more of the element, component, or step, etc., where the context permits.

In the present specification, the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.

The present application provides a cache design management method, as shown in fig. 1, which mainly includes steps S101 to S103, detailed as follows:

step S101: based on the selected function, the identification B of any one of the plurality of caches is cachedIDAnd mapping the process number to the target application process, wherein the target application process caches the application process to be created for the any one.

In the embodiment of the present application, the identifier of a cache uniquely determines that cache and may be the cache's name, number, or the like; similarly, the process number of the target application process uniquely determines the target application process. The target application process is the application process to which the identifier B_ID of the any one cache is mapped under the mapping relation of the selected function, and on which the any one cache is created. As an embodiment of the present application, the mapping, based on a selected function, of the identifier B_ID of any one of the plurality of caches to the process number of the target application process can be realized through steps S1021 to S1023, detailed as follows:

step S1021: and determining the optimal hash function according to the principle of minimizing hash collision.

A hash function is the function used in a hash algorithm (also referred to as a hashing method, a keyed address calculation method, or the like), and the corresponding table is called a hash table. The basic idea of a hash algorithm is to establish a correspondence h between the key K of an element and the position P of the element, so that P = h(K), where h is the hash function. When the hash table is created, the element with key K is stored directly in the cell at address h(K); when the element with key K is looked up, its storage position P = h(K) is computed with the hash function. When the set of keys is large, different keys may map to the same address of the hash table, i.e., K1 ≠ K2 but h(K1) = h(K2); this phenomenon is called a hash collision. In practice, hash collisions are difficult to avoid entirely, and their probability is generally reduced by improving the hash function. In the embodiment of the present application, the optimal hash function is determined according to the principle of minimizing hash collisions so as to reduce the collision probability.
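As a toy illustration of the correspondence P = h(K) and of a hash collision, consider the following sketch; the stand-in hash function h(K) = K mod 8 and the table size are illustrative choices, not from the source.

```python
def h(key: int) -> int:
    return key % 8                  # storage position P = h(K)

table = [None] * 8
for key in (3, 11):                 # 3 != 11, yet h(3) == h(11) == 3
    pos = h(key)
    if table[pos] is not None:
        print(f"collision: h({key}) = {pos} already holds {table[pos]}")
    else:
        table[pos] = key            # store the element with key K at h(K)
```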

Specifically, in an embodiment of the present application, determining the optimal hash function according to the principle of minimizing hash collisions may proceed as follows: hash the identifier B_ID using the current candidate hash function; decode the hash result of the current candidate hash function; if a base overflow of the current bit occurs when the decoded results are accumulated, replace the current candidate hash function with the next candidate hash function, continue hashing the identifier B_ID and decoding the hashed result, and select as the optimal hash function the candidate hash function for which no base overflow occurs when the decoded results are accumulated.

In the above embodiment, the candidate hash functions are screened: each candidate is used to hash the identifier B_ID of the any one cache, and the hashed result is decoded. If two hash results are the same, a base overflow of the current bit inevitably occurs when the decoded results are accumulated (for binary numbers the base is 2, so adding two 1s in the same bit position overflows that position; for decimal numbers the base is 10, so two digits summing to 10 in the same position overflow it). Therefore, if a base overflow of the current bit occurs while accumulating the decoded results, it can be determined that the candidate hash function collides; conversely, if no base overflow occurs, the candidate hash function does not collide and can be selected for subsequent operations. Hash collisions can thus be avoided by the hash function selection method provided in this embodiment. For example, suppose there are two cache identifiers: identifier 1 is 0x0110 and identifier 2 is 0x0011, and the first candidate hash function is H1 = h1(k) = k AND 0x1000, where AND denotes the bitwise AND, i.e., the key k is ANDed with the binary mask 0x1000. When H1 hashes identifier 1 (0x0110) and identifier 2 (0x0011), both results are 0; after 3-to-8 decoding, both decoded results are 00000001, and accumulating them overflows the current bit, so it can be determined that taking H1 as the final hash function would produce a hash collision, and the next candidate should be tried. If the next candidate is H2 = h2(k) = k AND 0x0001, i.e., the key k is ANDed with the binary mask 0x0001, then hashing identifier 1 (0x0110) and identifier 2 (0x0011) with H2 yields 0 and 1 respectively, whose 3-to-8 decodings are 00000001 and 00000010. Because the 1s fall in different bit positions, no base overflow occurs when 00000001 and 00000010 are accumulated; it can then be determined that H2 causes no collision, and the candidate hash function H2 can be selected as the optimal hash function for subsequent operations.
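The following minimal sketch illustrates the decode-and-accumulate screening described above, treating each candidate hash function as a bitwise-AND mask and the 3-to-8 decoder as a one-hot encoding; if any bit position would receive a second 1 during accumulation (a base overflow), the candidate collides and the next one is tried. All names here are illustrative, not from the source.

```python
def decode_3_to_8(value: int) -> int:
    """3/8 decoder: map a 3-bit value to a one-hot 8-bit word."""
    return 1 << (value & 0x7)

def collides(mask: int, cache_ids: list[int]) -> bool:
    """Return True if hashing with `mask` makes any two identifiers coincide."""
    accumulated = 0
    for bid in cache_ids:
        one_hot = decode_3_to_8(bid & mask)   # hash with the mask, then decode
        if accumulated & one_hot:             # this bit is already set: a carry
            return True
        accumulated |= one_hot
    return False

def pick_optimal_mask(candidate_masks: list[int], cache_ids: list[int]) -> int:
    for mask in candidate_masks:              # try candidates in order
        if not collides(mask, cache_ids):
            return mask
    raise ValueError("no collision-free candidate for these identifiers")

# The worked example from the text: H1 = 0x1000 collides, H2 = 0x0001 does not.
ids = [0x0110, 0x0011]
print(hex(pick_optimal_mask([0x1000, 0x0001], ids)))  # -> 0x1
```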

In another embodiment of the present application, determining the optimal hash function according to the principle of minimizing hash collisions may proceed as follows: randomly select a hash function Hs from a candidate hash function set; hash the identifier B_ID of the any one cache using the randomly selected Hs to obtain a reduced key value; if the reduced key value already exists in the hash bucket, select another hash function from the candidate hash function set to hash the identifier B_ID, until the resulting reduced key value does not exist in the hash bucket; and select, as the optimal hash function, the hash function whose hash of the identifier B_ID yields a reduced key value that does not exist in the hash bucket. In this embodiment, different hash buckets may use different hash functions to generate reduced key values, so the hash function used to generate a reduced key value can be chosen as the best among several candidates, thereby reducing the probability of hash collisions.
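A minimal sketch of this retry-based selection follows, assuming each hash bucket keeps a set of reduced key values already in use and that the candidates are simple callables; the candidate set and all names are illustrative assumptions, not from the source.

```python
import random

def pick_hash_for_bucket(bid: int, candidates, bucket_keys: set) -> tuple:
    """Try candidates in random order until one yields a reduced key value
    not yet present in the bucket; return (hash function, reduced key)."""
    pool = list(candidates)
    random.shuffle(pool)
    for hs in pool:
        reduced_key = hs(bid)
        if reduced_key not in bucket_keys:    # no collision within this bucket
            bucket_keys.add(reduced_key)
            return hs, reduced_key
    raise RuntimeError("all candidates collide for this identifier")

# Example candidate set: AND-masks of different widths.
candidates = [lambda k, m=m: k & m for m in (0x000F, 0x00FF, 0x0FFF)]
bucket: set = set()
print(pick_hash_for_bucket(0x0110, candidates, bucket)[1])
```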

Step S1022: taking the identifier B_ID of the any one cache as the key, map the identifier B_ID to a hash value H_bid using the optimal hash function.

Assuming the optimal hash function determined in step S1021 is denoted Ho, then taking the identifier B_ID of the any one cache as the key, the mapping of B_ID to the hash value H_bid by the optimal hash function is H_bid = Ho(B_ID).

Step S1023: map the hash value H_bid to the process number of the target application process using a preset function.

In this embodiment of the application, using the optimal hash function determined in step S1021, the identifier B_ID of any one cache can be mapped to an integer. The preset function may take the remainder of the hash value H_bid (for example, modulo the number of application processes); the remainder is the process number of the target application process, that is, the cache identified by B_ID is created on the target application process corresponding to that process number.

It should be noted that although the hash value H_bid obtained by hashing the identifier B_ID of any one cache with the optimal hash function is unique, under the preset function different hash values H_bid may map to the same process number, i.e., the remainders of different hash values H_bid may be equal. This means that different caches may be created on the same application process, that is, different caches may correspond to the same target application process. Fig. 2 is a schematic diagram of mapping the identifiers of 6 caches (denoted cache identifier 1, cache identifier 2, ..., cache identifier 6 in the figure) to the process numbers of 3 target application processes, where cache identifiers 1 and 4 map to process number P1, cache identifiers 2 and 5 map to process number P2, and cache identifiers 3 and 6 map to process number P3.

Fig. 3 is a schematic diagram of mapping, based on a selected function, the identifiers of a plurality of caches to process numbers: the identifiers of n caches (denoted cache identifier 1, cache identifier 2, ..., cache identifier n in the figure) are first mapped by the optimal hash function to n corresponding hash values, and the n hash values are then mapped by the preset function, so that the identifiers of the n caches are finally mapped to the process numbers of m target application processes, as in the sketch below.
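A minimal sketch of the two-stage mapping of fig. 3 follows, assuming the optimal hash function Ho has already been selected and that the preset function is a simple modulo over the number of application processes; the stand-in Ho and the name num_processes are illustrative, not from the source.

```python
def map_cache_to_process(bid: int, ho, num_processes: int) -> int:
    h_bid = ho(bid)                 # step S1022: B_ID -> H_bid
    return h_bid % num_processes    # step S1023: H_bid -> process number

ho = lambda k: k & 0x0FFF           # stand-in for the selected Ho
for bid in range(1, 7):             # 6 cache identifiers, 3 processes
    print(bid, "->", map_cache_to_process(bid, ho, 3))
# Different B_IDs can land on the same process number, as in fig. 2.
```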

Step S102: at a preset period, acquire the online status of each cache created on the target application process according to the process number of the target application process.

Specifically, once the application processes have been running for the preset period, the number of online caches on the target application process may be counted at intervals of the preset period.
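One possible shape of this periodic counting is sketched below, assuming each process number indexes a registry of the caches created on it together with a liveness flag; the registry layout and the timer-based loop are illustrative assumptions, not from the source.

```python
import threading

registry: dict[int, dict[str, bool]] = {}   # process number -> {B_ID: online?}

def count_online(pid: int) -> int:
    """Count the caches currently online on process `pid`."""
    return sum(1 for online in registry.get(pid, {}).values() if online)

def start_monitor(pid: int, period_s: float, on_count) -> threading.Timer:
    """Re-arm a timer every `period_s` seconds and report the count."""
    def tick():
        on_count(pid, count_online(pid))     # hand the count to management
        start_monitor(pid, period_s, on_count)
    t = threading.Timer(period_s, tick)
    t.daemon = True
    t.start()
    return t
```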

Step S103: manage the caches created on the target application process based on a cache eviction algorithm according to the online status of each cache on the target application process.

In the embodiment of the present application, the online status of each cache on the target application process includes whether the number of online caches exceeds a first upper limit and a second upper limit tolerable by the target application process. The maximum number of online caches a target application process can tolerate may be estimated by means such as stress testing, and that number is used as the second upper limit; the first upper limit may be smaller than the second upper limit, or some small proportion of it. For example, the tolerable number of online caches of a target application process, i.e., the second upper limit, may be set to 500,000 and the first upper limit to 100,000.

As mentioned above, different cache identifiers may map to the same process number, i.e., multiple different caches may be created on the same target application process. On the other hand, different business scenarios place different demands on caches: on one application process, for example, some caches may have been used less recently than others, or some may be used least frequently while others are used most, and so on. A mechanism is therefore needed to manage these caches, namely managing the caches created on the target application process based on a cache eviction algorithm according to the online status of each cache. Specifically, as an embodiment of the present application, this can be implemented through steps S1031 to S1033, detailed as follows:

step S1031: it is determined whether the amount of cache that the target application process is online exceeds a first upper limit and a second upper limit that the target application process can tolerate.

Intuitively, if the number of online caches on the target application process is large, more of them need to be evicted; conversely, if the number is small, fewer need to be evicted. Therefore, when managing the caches created on the target application process based on the cache eviction algorithm, it must first be determined whether the number of online caches exceeds the first and second upper limits tolerable by the target application process.

Step S1032: if the number of online caches on the target application process exceeds the first upper limit but not the second, evict a first number of caches from the target application process based on the cache eviction algorithm.

Continuing the example above, with the first upper limit set to 100,000 and the second to 500,000: if the number of online caches on the target application process exceeds 100,000 but not 500,000, a first number of caches, for example 1,000, are evicted from the target application process based on the cache eviction algorithm.

Step S1033: if the number of online caches on the target application process exceeds the second upper limit, evict a second number of caches from the target application process based on the cache eviction algorithm.

If the number of online caches on the target application process exceeds the second upper limit, too many caches are online, and a second number of caches may be evicted. The second number is generally larger than the first: for example, it may equal the number of online caches minus the second upper limit, plus an additional margin y. That is, with C denoting the number of online caches on the target application process and L the second upper limit, the second number is C - L + y, as in the sketch below.
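The two-threshold policy can be sketched as follows, using the example figures above (first upper limit 100,000, second 500,000, first number 1,000) and a margin y for the over-second-limit case; the evict callable stands in for the cache eviction algorithm, and all names are illustrative assumptions.

```python
FIRST_LIMIT = 100_000   # first upper limit tolerable by the process
SECOND_LIMIT = 500_000  # second upper limit tolerable by the process

def manage(online_count: int, evict, n1: int = 1_000, y: int = 10_000) -> int:
    """Return how many caches were evicted for this period's count."""
    if online_count > SECOND_LIMIT:
        n2 = online_count - SECOND_LIMIT + y    # second number: C - L + y
        evict(n2)
        return n2
    if online_count > FIRST_LIMIT:
        evict(n1)                               # first number
        return n1
    return 0

evicted = manage(620_000, evict=lambda n: print(f"evicting {n} caches"))
# -> evicting 130000 caches (C - L + y = 620000 - 500000 + 10000)
```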

Existing cache eviction algorithms have certain limitations. For example, the Least Recently Used (LRU) algorithm may evict hot data prematurely under periodic access patterns because of its reliance on temporal locality; the Least Frequently Used (LFU) algorithm needs a long time to adapt when the access pattern changes; and the Most Recently Used (MRU) algorithm suits only sequential-access scenarios. In the embodiment of the present application, therefore, evicting a first or second number of caches from the target application process based on the cache eviction algorithm can be implemented through steps S'1031 to S'1034, detailed as follows:

step S' 1031: and counting access information of each cache on the target application process to obtain an access statistical result of each cache.

In this embodiment, the access information of each cache on the target application process includes the access type, access frequency, access time, and so on, of each cache.

Step S'1032: determine, according to the access statistics of each cache, that the access pattern of the caches on the target application process has changed from a first access pattern to a second access pattern.

Step S'1033: switch from the first cache eviction algorithm corresponding to the first access pattern to a second cache eviction algorithm corresponding to the second access pattern.

Specifically, for example, when the access pattern of the caches on the target application process changes from random access to sequential access, the switch from the first cache eviction algorithm to the second may be: switching from the least recently used (LRU) algorithm to the most recently used (MRU) algorithm. When the pattern changes from clustered access to sequential access, the switch may be: switching from the least frequently used (LFU) algorithm to the MRU algorithm. When some other access pattern changes to sequential access, the switch may be: switching from the adaptive replacement cache (ARC) algorithm to the MRU algorithm, and so on.

Step S'1034: evict the first or second number of caches from the target application process using the second cache eviction algorithm.

In this embodiment, the change of the access pattern from the first to the second is determined from the access statistics of each cache, so that a cache eviction algorithm appropriate to the current access pattern of the caches on the target application process is matched; this removes scenario limitations and preserves cache operating efficiency. A sketch of such pattern-driven switching follows.
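The sketch below shows one way such switching could be wired up, assuming a toy classifier over access statistics and a fixed table from access pattern to eviction algorithm; the patterns, the classifier heuristic, and all names are illustrative assumptions, not from the source.

```python
ALGO_FOR_PATTERN = {
    "random": "LRU",
    "clustered": "LFU",
    "sequential": "MRU",   # MRU suits sequential scans
}

def classify(accesses: list[int]) -> str:
    """Toy classifier: strictly increasing keys look sequential."""
    if all(b > a for a, b in zip(accesses, accesses[1:])):
        return "sequential"
    return "random"

class EvictionManager:
    def __init__(self) -> None:
        self.algorithm = "LRU"

    def on_statistics(self, accesses: list[int]) -> str:
        pattern = classify(accesses)           # S'1032: detect the pattern
        new_algo = ALGO_FOR_PATTERN[pattern]
        if new_algo != self.algorithm:         # S'1033: switch algorithms
            self.algorithm = new_algo
        return self.algorithm

mgr = EvictionManager()
print(mgr.on_statistics([5, 1, 9, 3]))   # random     -> LRU
print(mgr.on_statistics([1, 2, 3, 4]))   # sequential -> MRU
```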

In this embodiment of the present application, before the first or second number of caches are evicted from the target application process based on the cache eviction algorithm, the cache hit rate may be further improved and the caching effect optimized through the following steps S401 to S405, detailed as follows:

step S401: the caches on the target application process are classified as least recently used LRU cache and least frequently used LFU cache.

Since LRU and LFU are the two most common cache eviction algorithms, the caches on the target application process can be classified into an LRU cache and an LFU cache, with a different management strategy applied to each type.

Step S402: create an LRU evicted-data list corresponding to the LRU cache and an LFU evicted-data list corresponding to the LFU cache.

In the embodiment of the present application, the LRU evicted-data list and the LFU evicted-data list record, respectively, the indexes of data evicted from the LRU cache and from the LFU cache, so that the capacities of the two caches can later be adjusted according to how often the evicted data is hit again.

Step S403: when data whose access frequency is greater than a frequency threshold exists in the LRU cache, transfer that data from the LRU cache to the LFU cache.

Even within the LRU cache, some data may have a high access frequency; evicting such data purely by recency would not be the best cache management policy, so it is transferred to the LFU cache instead.

Step S404: when the number of hits on the LRU evicted-data list reaches a first hit threshold, increase the capacity of the LRU cache and decrease the capacity of the LFU cache.

When the number of hits on the LRU evicted-data list reaches the first hit threshold, it indicates that the business scenario's access demand on the LRU cache is greater than its demand on the LFU cache, so the LRU cache capacity should be increased and the LFU cache capacity decreased.

Step S405: when the number of hits on the LFU evicted-data list reaches a second hit threshold, increase the capacity of the LFU cache and decrease the capacity of the LRU cache.

In contrast to step S404, when the number of hits on the LFU evicted-data list reaches the second hit threshold, it indicates that the business scenario's access demand on the LFU cache is greater than its demand on the LRU cache; the LFU cache capacity should then be increased and the LRU cache capacity decreased. A sketch combining steps S401 to S405 follows.
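The sketch below combines steps S401 to S405 in the spirit of an ARC-style design: two ghost ("evicted-data") lists record what each side recently evicted, and a hit on a ghost list shifts capacity toward that side. For brevity it adjusts capacity by one unit per ghost hit rather than batching to a hit threshold, and leaves the ghost lists unbounded; the structure, thresholds, and names are illustrative assumptions, not from the source.

```python
from collections import OrderedDict

class AdaptiveCache:
    def __init__(self, capacity: int, freq_threshold: int = 3):
        self.capacity = capacity
        self.lru_share = capacity // 2          # capacity split between sides
        self.freq_threshold = freq_threshold
        self.lru: OrderedDict = OrderedDict()   # recency-ordered keys (S401)
        self.lfu: dict = {}                     # key -> access count (S401)
        self.lru_ghost: set = set()             # LRU evicted-data list (S402)
        self.lfu_ghost: set = set()             # LFU evicted-data list (S402)
        self.counts: dict = {}

    def access(self, key) -> None:
        self.counts[key] = self.counts.get(key, 0) + 1
        if key in self.lru_ghost:               # S404: grow the LRU side
            self.lru_share = min(self.capacity, self.lru_share + 1)
        elif key in self.lfu_ghost:             # S405: grow the LFU side
            self.lru_share = max(0, self.lru_share - 1)
        if self.counts[key] > self.freq_threshold:
            self.lru.pop(key, None)             # S403: hot data moves to LFU
            self.lfu[key] = self.counts[key]
        elif key not in self.lfu:
            self.lru[key] = True
            self.lru.move_to_end(key)
        self._evict()

    def _evict(self) -> None:
        while len(self.lru) > self.lru_share:
            victim, _ = self.lru.popitem(last=False)   # least recently used
            self.lru_ghost.add(victim)
        while len(self.lfu) > self.capacity - self.lru_share:
            victim = min(self.lfu, key=self.lfu.get)   # least frequently used
            del self.lfu[victim]
            self.lfu_ghost.add(victim)
```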

As can be seen from the cache design management method illustrated in fig. 1, on one hand, since a cache is mapped to the process number of a target application process according to its identifier and is created on the target application process corresponding to that process number, once a problem occurs in the cache, the application process where it resides can be quickly located according to the mapping relationship. On the other hand, a cache can be created on whichever application process the selected function maps it to, and the application processes are independent and equal; compared with the central-node management of the prior art, the technical scheme of the present application is in effect decentralized. Therefore, even if a problem occurs in one cache and/or the application process where it resides, only that cache and/or application process is affected, and other caches and application processes are not, so the risk in the prior art that the crash of a single central node brings down the whole system is avoided.

Referring to fig. 4, a cache design management apparatus provided in an embodiment of the present application may include a mapping module 401, an obtaining module 402, and a cache management module 403, detailed as follows:

a mapping module 401, configured to map, based on a selected function, the identifier B_ID of any one of a plurality of caches to a process number of a target application process, where the target application process is the application process on which the any one cache is to be created;

an obtaining module 402, configured to acquire, at a preset period, the online status of each cache created on the target application process according to the process number of the target application process;

a cache management module 403, configured to manage, based on a cache eviction algorithm, the caches created on the target application process according to the online status of each cache on the target application process.

Optionally, in the apparatus illustrated in fig. 4, the mapping module 401 may include a hash function determining unit, a first mapping unit, and a second mapping unit, where:

the hash function determining unit is configured to determine an optimal hash function according to a collision minimization principle;

a first mapping unit, configured to map, taking the identifier B_ID of the any one cache as a key, the identifier B_ID to a hash value H_bid using the optimal hash function;

and a second mapping unit, configured to map the hash value H_bid to the process number of the target application process using a preset function.

Optionally, the hash function determination unit may include a first hash unit, a decoding unit, and a first selection unit, where:

a first hash unit, configured to hash the identifier B_ID using the current candidate hash function;

a decoding unit, configured to decode the hash result of the current candidate hash function;

and a first selection unit, configured to: if a base overflow of the current bit occurs when the decoded results are accumulated, replace the current candidate hash function with the next candidate hash function to continue hashing the identifier B_ID of the any one cache and decoding the hashed result, and select, as the optimal hash function, the candidate hash function for which no base overflow occurs when the decoded results are accumulated.

Optionally, the hash function determining unit may include a second selecting unit, a second hash unit, a third hash unit, and a third selecting unit, where:

a second selection unit, configured to arbitrarily select one hash function Hs from the candidate hash function set;

a second hash unit, configured to hash the identifier B_ID of the any one cache using the randomly selected hash function Hs to obtain a reduced key value;

a third hash unit, configured to, if the reduced key value already exists in the hash bucket, select another hash function from the candidate hash function set to hash the identifier B_ID of the any one cache, until the resulting reduced key value does not exist in the hash bucket;

and a third selection unit, configured to select, as the optimal hash function, the hash function whose hash of the identifier B_ID of the any one cache yields a reduced key value that does not exist in the hash bucket.

Optionally, in the apparatus illustrated in fig. 4, the obtaining module 402 is specifically configured to count the number of online caches on the target application process at intervals of the preset period.

Optionally, in the apparatus illustrated in fig. 4, the online status of each cache on the target application process includes whether the number of online caches exceeds a first upper limit and a second upper limit tolerable by the target application process, and the cache management module 403 may include an upper-limit determining unit, a first cache eviction unit, and a second cache eviction unit, where:

the upper-limit determining unit is configured to determine whether the number of online caches on the target application process exceeds the first upper limit and the second upper limit tolerable by the target application process;

the first cache eviction unit is configured to evict a first number of caches from the target application process based on a cache eviction algorithm if the number of online caches exceeds the first upper limit but not the second;

and the second cache eviction unit is configured to evict a second number of caches from the target application process based on the cache eviction algorithm if the number of online caches exceeds the second upper limit.

Optionally, the first or second cache eviction unit may include a statistics unit, an access-pattern determining unit, a switching unit, and a third cache eviction unit, where:

the statistics unit is configured to count the access information of each cache on the target application process and obtain access statistics for each cache;

the access-pattern determining unit is configured to determine, according to the access statistics of each cache, that the access pattern of the caches on the target application process has changed from a first access pattern to a second access pattern;

the switching unit is configured to switch from a first cache eviction algorithm corresponding to the first access pattern to a second cache eviction algorithm corresponding to the second access pattern;

and the third cache eviction unit is configured to evict the first or second number of caches from the target application process using the second cache eviction algorithm.

Optionally, the apparatus illustrated in fig. 4 may further include a classification module 501, a list creation module 502, a data transfer module 503, a first capacity increasing module 504, and a second capacity increasing module 505, forming the cache design management apparatus provided in another embodiment of the present application as shown in fig. 5, where:

a classification module 501, configured to classify the caches on the target application process into a least recently used (LRU) cache and a least frequently used (LFU) cache before the first or second cache eviction unit evicts the first or second number of caches from the target application process based on a cache eviction algorithm;

a list creation module 502, configured to create an LRU evicted-data list corresponding to the LRU cache and an LFU evicted-data list corresponding to the LFU cache;

a data transfer module 503, configured to transfer data whose access frequency is greater than a frequency threshold from the LRU cache to the LFU cache when such data exists in the LRU cache;

a first capacity increasing module 504, configured to increase the capacity of the LRU cache and decrease the capacity of the LFU cache when the number of hits on the LRU evicted-data list reaches a first hit threshold;

and a second capacity increasing module 505, configured to increase the capacity of the LFU cache and decrease the capacity of the LRU cache when the number of hits on the LFU evicted-data list reaches a second hit threshold.

As can be seen from the description of the above technical solutions, on one hand, since a cache is mapped to the process number of a target application process according to its identifier and is created on the target application process corresponding to that process number, once a problem occurs in the cache, the application process where it resides can be quickly located according to the mapping relationship. On the other hand, a cache can be created on whichever application process the selected function maps it to, and the application processes are independent and equal; compared with the central-node management of the prior art, the technical scheme of the present application is in effect decentralized. Therefore, even if a problem occurs in one cache and/or the application process where it resides, only that cache and/or application process is affected, and other caches and application processes are not, so the risk in the prior art that the crash of a single central node brings down the whole system is avoided.

Fig. 6 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 6, the computer device 6 of this embodiment mainly includes: a processor 60, a memory 61, and a computer program 62, such as a program for a cache design management method, stored in the memory 61 and operable on the processor 60. The processor 60, when executing the computer program 62, implements the steps in the above-described embodiment of the cache design management method, such as the steps S101 to S103 shown in fig. 1. Alternatively, the processor 60, when executing the computer program 62, implements the functions of the modules/units in the above-described apparatus embodiments, such as the functions of the mapping module 401, the obtaining module 402, and the cache management module 403 shown in fig. 4.

Illustratively, the computer program 62 for the cache design management method comprises the following steps: mapping, based on a selected function, the identifier B_ID of any one of a plurality of caches to a process number of a target application process, where the target application process is the application process on which the any one cache is to be created; acquiring, at a preset period, the online status of each cache created on the target application process according to the process number of the target application process; and managing the caches created on the target application process based on a cache eviction algorithm according to the online status of each cache on the target application process. The computer program 62 may be partitioned into one or more modules/units, which are stored in the memory 61 and executed by the processor 60 to implement the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments describing the execution of the computer program 62 in the computer device 6. For example, the computer program 62 may be divided into the mapping module 401, the obtaining module 402, and the cache management module 403 (modules in a virtual device), whose specific functions are as follows: the mapping module 401 is configured to map, based on a selected function, the identifier B_ID of any one of a plurality of caches to a process number of a target application process, where the target application process is the application process on which the any one cache is to be created; the obtaining module 402 is configured to acquire, at a preset period, the online status of each cache created on the target application process according to the process number of the target application process; and the cache management module 403 is configured to manage, based on a cache eviction algorithm, the caches created on the target application process according to the online status of each cache on the target application process.

The computer device 6 may include, but is not limited to, the processor 60 and the memory 61. Those skilled in the art will appreciate that fig. 6 is merely an example of the computer device 6 and does not limit the computer device 6, which may include more or fewer components than shown, or a combination of certain components, or different components; for example, the computer device may also include input and output devices, network access devices, buses, and the like.

The processor 60 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor.

The memory 61 may be an internal storage unit of the computer device 6, such as a hard disk or a memory of the computer device 6. The memory 61 may also be an external storage device of the computer device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card provided on the computer device 6. Further, the memory 61 may also include both an internal storage unit of the computer device 6 and an external storage device. The memory 61 is used to store the computer program and other programs and data required by the computer device, and may also be used to temporarily store data that has been output or is to be output.

It will be apparent to those skilled in the art that, for convenience and brevity of description, the division of the above functional units and modules is illustrated merely as an example; in practical applications, the above functions may be allocated to different functional units and modules as required, that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from each other, and are not used to limit the protection scope of the present application. For the specific working processes of the units and modules in the above apparatus, reference may be made to the corresponding processes in the foregoing method embodiments, which are not described herein again.

In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.

Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.

In the embodiments provided in the present application, it should be understood that the disclosed apparatus/computer device and method may be implemented in other ways. For example, the apparatus/computer device embodiments described above are merely illustrative: the division of modules or units is only a logical function division, and other division manners may be adopted in actual implementation; for example, a plurality of units or components may be combined or integrated into another apparatus, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.

Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.

The integrated modules/units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a non-transitory computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the above embodiments may also be implemented by a computer program instructing related hardware. The computer program of the cache design management method may be stored in a computer readable storage medium, and when executed by a processor, the computer program implements the steps of the above method embodiments, that is: based on the selected function, mapping the identifier B_ID of any one of the plurality of caches to a process number of a target application process, wherein the target application process is the application process to which the cache is to be created; acquiring, according to a preset period, the online status of each cache created on the target application process according to the process number of the target application process; and managing the caches created on the target application process based on a cache elimination algorithm according to the online status of each cache on the target application process. The computer program includes computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The non-transitory computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the non-transitory computer readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the non-transitory computer readable medium does not include electrical carrier signals and telecommunications signals.

The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications and substitutions do not make the essence of the corresponding technical solutions depart from the spirit and scope of the embodiments of the present application, and shall be included in the protection scope of the present application.

The above embodiments further describe the objects, technical solutions and advantages of the present application in detail. It should be understood that the above are merely exemplary embodiments of the present application and are not intended to limit the protection scope of the present application; any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
