LRU cache implementation method and apparatus, computer-readable storage medium, and device

Document number: 1042647 · Publication date: 2020-10-09

Note: This technique, "LRU cache implementation method and apparatus, computer-readable storage medium and device", was created by Lu Baohong, Deng Dan, and Liu Yang on 2019-03-27. Abstract: The present disclosure relates to the field of data processing, and in particular to a method for implementing an LRU cache, an apparatus for implementing an LRU cache, and a computer-readable storage medium and electronic device implementing the method. The method includes: determining target cache data; in response to a cache instruction, performing cache processing corresponding to the cache instruction on the target cache data based on a first TBB container; and recording the access history of the LRU cache data in the first TBB container based on a second TBB container. By implementing the LRU cache with parallel (TBB) containers, this technical scheme provides rich thread-safe interfaces, supports the various operations on LRU cache data, and helps improve LRU cache performance. At the same time, the lock-free design of the voiceprint LRU cache mechanism improves the concurrency of the caching application and meets the needs of multi-threaded concurrent usage scenarios.

1. A method for implementing an LRU cache, comprising:

determining target cache data;

in response to a cache instruction, performing cache processing corresponding to the cache instruction on the target cache data based on a first TBB container;

and recording an access history of the LRU cache data in the first TBB container based on a second TBB container.

2. The method for implementing an LRU cache according to claim 1, wherein:

each item of LRU cache data stored in the first TBB container corresponds to a node object in a pre-allocation queue;

and a first access record queue containing pointers to a plurality of node objects is stored in the second TBB container.
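The two-container layout of claim 2 can be sketched as follows. This is an illustrative model only: the patent uses TBB parallel containers (e.g. `tbb::concurrent_hash_map` for the cache and `tbb::concurrent_queue` for the access log), but standard containers stand in here so the sketch builds without TBB, and every name is an assumption rather than the patent's implementation.

```cpp
#include <atomic>
#include <cassert>
#include <deque>
#include <string>
#include <unordered_map>
#include <vector>

// Node object held in the pre-allocation queue.
struct Node {
    std::string key;
    int value = 0;
};

struct LruLayout {
    std::unordered_map<std::string, Node*> cache;  // first container: data
    std::vector<Node> pool;                        // pre-allocation queue
    std::atomic<std::size_t> next{0};              // atomic hand-out index
    std::deque<Node*> access_log;                  // second container: history

    explicit LruLayout(std::size_t pool_size) : pool(pool_size) {}

    // Each cached datum owns one node object taken from the pool; the
    // fetch_add models the claim's atomic operation.
    Node* take_node() {
        std::size_t i = next.fetch_add(1);
        return i < pool.size() ? &pool[i] : nullptr;
    }
};
```

Pre-allocating the nodes up front is what lets later claims speak of "pointing a pointer to hole memory" instead of freeing nodes: node storage is never released, only the queue entries that reference it change.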

3. The method for implementing an LRU cache according to claim 2, wherein

performing, in response to a cache instruction, cache processing corresponding to the cache instruction on the target cache data based on the first TBB container comprises:

in response to a write instruction for writing the target cache data into the first TBB container, inserting the target cache data into the first TBB container, and adding the target cache data to a first node object in the pre-allocation queue through an atomic operation;

and recording the access history of the LRU cache data in the first TBB container based on the second TBB container comprises:

adding a pointer of the first node object to the tail of the first access record queue in the second TBB container, so as to record history data of accessing the LRU cache data.
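The write path of claim 3 might look like the following sketch. Standard containers again stand in for the TBB ones, and the class and method names are assumptions for illustration, not the patent's API.

```cpp
#include <atomic>
#include <cassert>
#include <deque>
#include <string>
#include <unordered_map>
#include <vector>

struct Node {
    std::string key;
    int value = 0;
};

class WriteSketch {
public:
    explicit WriteSketch(std::size_t pool_size) : pool_(pool_size) {}

    bool put(const std::string& key, int value) {
        // Take the "first node object" from the pre-allocation queue via
        // an atomic operation, then fill it with the target cache data.
        std::size_t i = next_.fetch_add(1);
        if (i >= pool_.size()) return false;
        pool_[i].key = key;
        pool_[i].value = value;
        // Insert the target cache data into the first container.
        cache_[key] = &pool_[i];
        // Append the node pointer to the tail of the first access record
        // queue in the second container to log the access.
        access_log_.push_back(&pool_[i]);
        return true;
    }

    const Node* find(const std::string& key) const {
        auto it = cache_.find(key);
        return it == cache_.end() ? nullptr : it->second;
    }

    std::size_t log_size() const { return access_log_.size(); }

private:
    std::unordered_map<std::string, Node*> cache_;
    std::vector<Node> pool_;
    std::atomic<std::size_t> next_{0};
    std::deque<Node*> access_log_;
};
```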

4. The method for implementing an LRU cache according to claim 2, wherein

performing, in response to a cache instruction, cache processing corresponding to the cache instruction on the target cache data based on the first TBB container comprises:

in response to an update instruction for updating the first TBB container with the target cache data, pointing a pointer of a second node object corresponding to the LRU cache data to be updated to hole memory;

adding the target cache data to a third node object in the pre-allocation queue through an atomic operation, and updating the LRU cache data to be updated with the target cache data;

and recording the access history of the LRU cache data in the first TBB container based on the second TBB container comprises:

adding a pointer of the third node object to the tail of the first access record queue in the second TBB container, so as to record history data of accessing the LRU cache data.

5. The method for implementing an LRU cache according to claim 2, wherein

performing, in response to a cache instruction, cache processing corresponding to the cache instruction on the target cache data based on the first TBB container comprises:

in response to a read instruction for reading the target cache data from the first TBB container, pointing a pointer of a fourth node object corresponding to the target cache data to hole memory;

adding the target cache data to a fifth node object in the pre-allocation queue through an atomic operation;

and recording the access history of the LRU cache data in the first TBB container based on the second TBB container comprises:

adding a pointer of the fifth node object to the tail of the first access record queue in the second TBB container, so as to record history data of accessing the LRU cache data.
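Claims 4 and 5 share the same recency-refresh mechanics: the old queue slot becomes a hole and a fresh node pointer is appended to the tail. A combined sketch, where `nullptr` entries model pointers to hole memory and all names are illustrative assumptions:

```cpp
#include <atomic>
#include <cassert>
#include <string>
#include <unordered_map>
#include <vector>

struct Node {
    std::string key;
    int value = 0;
    std::size_t pos = 0;  // this node's slot in the access record queue
};

class TouchSketch {
public:
    explicit TouchSketch(std::size_t pool_size) : pool_(pool_size) {}

    void put(const std::string& key, int value) {
        Node* n = take();
        n->key = key;
        n->value = value;
        n->pos = log_.size();
        log_.push_back(n);
        cache_[key] = n;
    }

    // Update (claim 4) passes a new value; read (claim 5) passes nullptr.
    // Both turn the old queue slot into a hole and append a fresh node
    // pointer to the tail of the access record queue.
    int touch(const std::string& key, const int* new_value) {
        Node* old = cache_.at(key);
        log_[old->pos] = nullptr;  // old pointer now points to hole memory
        Node* fresh = take();
        fresh->key = key;
        fresh->value = new_value ? *new_value : old->value;
        fresh->pos = log_.size();
        log_.push_back(fresh);     // tail append records the access
        cache_[key] = fresh;
        return fresh->value;
    }

    std::size_t holes() const {
        std::size_t h = 0;
        for (const Node* p : log_) h += (p == nullptr);
        return h;
    }

private:
    Node* take() { return &pool_[next_.fetch_add(1)]; }  // atomic allocation

    std::unordered_map<std::string, Node*> cache_;
    std::vector<Node> pool_;
    std::atomic<std::size_t> next_{0};
    std::vector<Node*> log_;   // nullptr models a pointer to hole memory
};
```

Leaving holes instead of erasing mid-queue is the design choice that keeps the hot path append-only, which is what makes a lock-free queue practical here.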

6. The method for implementing an LRU cache according to claim 2, wherein

performing, in response to a cache instruction, cache processing corresponding to the cache instruction on the target cache data based on the first TBB container comprises:

in response to a deletion instruction for deleting the target cache data from the first TBB container, pointing a pointer of a sixth node object corresponding to the target cache data to hole memory;

deleting the target cache data from the first TBB container;

and recording the access history of the LRU cache data in the first TBB container based on the second TBB container comprises:

keeping the pointer of the sixth node object unchanged at its position in the first access record queue in the second TBB container.
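The delete path of claim 6 removes data from the first container but deliberately leaves the hole in place in the access record queue. A minimal sketch, with illustrative names and `nullptr` modelling a pointer to hole memory:

```cpp
#include <cassert>
#include <deque>
#include <string>
#include <unordered_map>
#include <vector>

struct Node {
    std::string key;
    std::size_t pos = 0;
};

class DeleteSketch {
public:
    void put(const std::string& key) {
        pool_.push_back(Node{key, log_.size()});
        log_.push_back(&pool_.back());
        cache_[key] = &pool_.back();
    }

    void erase(const std::string& key) {
        Node* n = cache_.at(key);
        // The sixth node's queue pointer now points to hole memory...
        log_[n->pos] = nullptr;
        // ...the data leaves the first container...
        cache_.erase(key);
        // ...and the hole keeps its position in the access record queue
        // until the compression pass of claim 7 reclaims it.
    }

    std::size_t cached() const { return cache_.size(); }
    std::size_t log_size() const { return log_.size(); }
    bool is_hole(std::size_t i) const { return log_[i] == nullptr; }

private:
    std::unordered_map<std::string, Node*> cache_;
    std::deque<Node> pool_;   // deque keeps node addresses stable on growth
    std::vector<Node*> log_;
};
```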

7. The method for implementing an LRU cache according to any one of claims 2 to 6, wherein the method further comprises:

acquiring a current capacity value of the second TBB container, and judging whether the current capacity value of the second TBB container exceeds a second preset threshold;

if the current capacity value of the second TBB container exceeds the second preset threshold:

judging, in a direction from the head of the first access record queue to the tail of the first access record queue, whether the i-th pointer of the first access record queue points to hole memory;

if the i-th pointer points to hole memory, deleting the i-th pointer so as to compress the first access record queue, and judging whether the (i + 1)-th pointer points to hole memory;

and if the (i + 1)-th pointer does not point to hole memory, transferring the (i + 1)-th pointer to the tail of a second access record queue in the second TBB container, and judging whether the (i + 2)-th pointer points to hole memory.
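The compression pass of claim 7 can be sketched as a head-to-tail sweep of the first access record queue: hole pointers are dropped, live pointers migrate to the tail of the second queue. `nullptr` again models a pointer to hole memory, and the function name is an assumption:

```cpp
#include <cassert>
#include <deque>

struct Node { int value = 0; };

// Walk the first access record queue from head to tail: hole pointers
// are deleted (compressing the queue), live pointers are transferred to
// the tail of the second access record queue in recency order.
void compress(std::deque<Node*>& first_log, std::deque<Node*>& second_log) {
    while (!first_log.empty()) {
        Node* p = first_log.front();
        first_log.pop_front();
        if (p == nullptr) continue;   // i-th pointer -> hole: drop it
        second_log.push_back(p);      // live pointer -> second queue tail
    }
}
```

Because the sweep preserves the order of surviving pointers, the second queue stays sorted from least to most recently used, which is what claims 8 to 11 rely on.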

8. The method for implementing an LRU cache according to claim 7, wherein

determining target cache data comprises:

determining that the target cache data is the data corresponding to a j-th pointer in the second access record queue;

performing, in response to a cache instruction, cache processing corresponding to the cache instruction on the target cache data based on the first TBB container comprises:

in response to an update instruction for updating the first TBB container with the target cache data, pointing the j-th pointer to hole memory;

adding the target cache data to a seventh node object in the pre-allocation queue through an atomic operation, and updating the LRU cache data to be updated with the target cache data;

and recording the access history of the LRU cache data in the first TBB container based on the second TBB container comprises:

adding a pointer of the seventh node object to the tail of the first access record queue in the second TBB container, so as to record history data of accessing the LRU cache data.

9. The method for implementing an LRU cache according to claim 7, wherein

determining target cache data comprises:

determining that the target cache data is the data corresponding to a j-th pointer in the second access record queue;

performing, in response to a cache instruction, cache processing corresponding to the cache instruction on the target cache data based on the first TBB container comprises:

in response to a read instruction for reading the target cache data from the first TBB container, pointing the j-th pointer to hole memory;

adding the target cache data to an eighth node object in the pre-allocation queue through an atomic operation;

and recording the access history of the LRU cache data in the first TBB container based on the second TBB container comprises:

adding a pointer of the eighth node object to the tail of the first access record queue in the second TBB container, so as to record history data of accessing the LRU cache data.

10. The method for implementing an LRU cache according to claim 7, wherein

determining target cache data comprises:

determining that the target cache data is the data corresponding to a j-th pointer in the second access record queue;

performing, in response to a cache instruction, cache processing corresponding to the cache instruction on the target cache data based on the first TBB container comprises:

in response to a deletion instruction for deleting the target cache data from the first TBB container, pointing the j-th pointer to hole memory;

deleting the target cache data from the first TBB container;

and recording the access history of the LRU cache data in the first TBB container based on the second TBB container comprises:

keeping the position of the j-th pointer in the second access record queue in the second TBB container unchanged.

11. The method for implementing an LRU cache according to any one of claims 8 to 10, wherein the method further comprises:

acquiring a current capacity value of the first TBB container, and judging whether the current capacity value of the first TBB container exceeds a first preset threshold;

if the current capacity value of the first TBB container exceeds the first preset threshold:

traversing from the head of the second access record queue to the tail of the second access record queue, and judging whether the m-th pointer of the second access record queue points to hole memory;

if the m-th pointer points to hole memory, judging whether the (m + 1)-th pointer points to hole memory;

and if the (m + 1)-th pointer does not point to hole memory, determining and deleting first LRU cache data to be deleted in the first TBB container, wherein the first LRU cache data to be deleted corresponds to the node object pointed to by the (m + 1)-th pointer.
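The eviction of claim 11 amounts to scanning the second access record queue from its head (the least recently used end), skipping hole pointers, and deleting the cache entry behind the first live pointer. A sketch under the same assumptions as the earlier ones (`nullptr` models a hole; names are illustrative):

```cpp
#include <cassert>
#include <deque>
#include <string>
#include <unordered_map>

struct Node { std::string key; };

// When the first container exceeds its capacity threshold, evict the
// least recently used live entry named by the second access record queue.
bool evict_one(std::unordered_map<std::string, Node*>& cache,
               std::deque<Node*>& second_log) {
    for (Node*& p : second_log) {
        if (p == nullptr) continue;  // m-th pointer is a hole: keep scanning
        cache.erase(p->key);         // drop the least recently used entry
        p = nullptr;                 // its queue slot becomes a hole
        return true;
    }
    return false;                    // nothing live left to evict
}
```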

12. The method for implementing an LRU cache according to any one of claims 8 to 10, wherein the method further comprises:

acquiring a current length value of the second access record queue, and judging whether the current length value of the second access record queue exceeds a third preset threshold;

if the current length value of the second access record queue exceeds the third preset threshold:

traversing the second access record queue, and deleting the pointers in the second access record queue that point to hole memory.

13. The method for implementing an LRU cache according to claim 12, wherein the method further comprises:

if none of the pointers in the second access record queue points to hole memory:

sequentially deleting the n-th pointer from the head of the second access record queue toward the tail of the second access record queue until the current length value of the second access record queue is less than or equal to the third preset threshold, where n is a positive integer;

and determining and deleting second LRU cache data to be deleted in the first TBB container, wherein the second LRU cache data to be deleted corresponds to the node object pointed to by the n-th pointer.
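Claims 12 and 13 together bound the length of the second access record queue: purge hole pointers first, and if the queue is still too long (and therefore hole-free), evict head entries, oldest first, until it fits the third threshold. A combined sketch under the same illustrative assumptions:

```cpp
#include <cassert>
#include <deque>
#include <string>
#include <unordered_map>

struct Node { std::string key; };

void trim(std::unordered_map<std::string, Node*>& cache,
          std::deque<Node*>& second_log, std::size_t max_len) {
    // Claim 12: delete every pointer that points to hole memory.
    std::deque<Node*> kept;
    for (Node* p : second_log)
        if (p != nullptr) kept.push_back(p);
    second_log.swap(kept);
    // Claim 13: still over the threshold -> evict from the head and
    // delete the matching cache data from the first container.
    while (second_log.size() > max_len) {
        cache.erase(second_log.front()->key);
        second_log.pop_front();
    }
}
```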

14. An apparatus for implementing an LRU cache, comprising:

a determining module, configured to determine target cache data;

a processing module, configured to perform, in response to a cache instruction, cache processing corresponding to the cache instruction on the target cache data based on a first TBB container;

and a recording module, configured to record the access history of the LRU cache data in the first TBB container based on a second TBB container.

15. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method for implementing an LRU cache according to any one of claims 1 to 13.

16. An electronic device, comprising:

one or more processors;

and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method for implementing an LRU cache according to any one of claims 1 to 13.
