Data block management method, system and storage medium

Document No.: 748839    Publication date: 2021-04-23

Reading note: This technology, "Data block management method, system and storage medium", was designed and created by Xu Jiahong, Li Yin, Li Weiqing, and Liu Bin on 2019-10-23. Its main content is as follows: The application discloses a data block management method, system, and storage medium. The method effectively improves data block lookup efficiency through the coordinated use of a data block ID query queue, an LRU management queue, and management nodes; by traversing or operating on the LRU management queue and/or the data block ID query queue as skip lists, it avoids the data-movement performance problem of an array-based implementation and improves data-processing efficiency.

1. A data block management method, applied to a shared memory comprising a management area and a data area, the method comprising the following steps:

dividing the data area into a plurality of data blocks, each data block having a preset size;

after the shared memory is mapped to the current process, obtaining the start address of the shared memory in the current process, and using the difference between the start address of a data block and the start address of the shared memory as the offset address of that data block;

establishing, for each physical disk, a corresponding LRU management queue and a data block ID query queue, wherein the data block ID query queue contains the IDs of the data blocks arranged in order, and the LRU management queue uses a priority-based eviction algorithm;

establishing a management node for each data block, the management node comprising a plurality of first pointers, each first pointer pointing to an adjacent node in the LRU management queue, pointing to an adjacent node in the data block ID query queue, serving as the address pointer of a data block in the data area, characterizing the ID of a data block on the physical disk, or characterizing a priority in the LRU management queue; the value of each first pointer is an offset address relative to the start address of the shared memory, and the address pointer of a data block represents the offset address of that data block;

establishing a manager for each physical disk, the manager storing a plurality of second pointers, each second pointer pointing to the first or last node of the LRU management queue, pointing to the first or last node of the data block ID query queue, characterizing the number of nodes in the LRU management queue, characterizing the number of nodes in the data block ID query queue, or characterizing the ID of the physical disk;

and traversing or operating on the data block ID query queue in a skip-list manner according to the management nodes and the manager.

2. The method of claim 1, wherein the first pointer is an lru_prev pointer pointing to the previous node in the LRU management queue, an lru_next pointer pointing to the next node in the LRU management queue, a bid_prev pointer pointing to the previous node in the data block ID query queue, a bid_next pointer pointing to the next node in the data block ID query queue, an address pointer characterizing the offset address of a data block in the data area of the shared memory, a blockid field characterizing the ID of a data block on the physical disk, or a priv field characterizing a priority in the LRU management queue;

the address pointer pointing to an address other than the start address of the shared memory.

3. The method of claim 2, wherein the second pointer is an lru_head pointer pointing to the first node of the LRU management queue, an lru_tail pointer pointing to the last node of the LRU management queue, a bid_head pointer pointing to the first node of the data block ID query queue, a bid_tail pointer pointing to the last node of the data block ID query queue, an lru_size field characterizing the number of nodes in the LRU management queue, a bid_size field characterizing the number of nodes in the data block ID query queue, or a did field characterizing the ID of the physical disk.

4. The method of claim 3, wherein traversing the data block ID query queue in a skip-list manner according to the management nodes and the manager comprises:

obtaining the bid_head pointer from the manager, and obtaining, from the bid_head pointer, the offset address of the first node of the data block ID query queue;

adding the start address of the shared memory to the offset address of the first node of the data block ID query queue to obtain the address of that node in the address space of the current process;

sequentially obtaining the bid_next and bid_prev pointers of each node of the data block ID query queue from its management node;

and obtaining the offset addresses of the nodes adjacent to a node from its bid_next and bid_prev pointers, and adding each obtained offset address to the start address of the shared memory to obtain the address of the adjacent node in the address space of the current process.

5. The method of claim 3, wherein using the LRU management queue with the priority-based eviction algorithm comprises:

obtaining the lru_head pointer from the manager, and obtaining, from the lru_head pointer, the offset address of the first node of the LRU management queue;

adding the start address of the shared memory to the offset address of the first node of the LRU management queue to obtain the address of that node in the address space of the current process;

sequentially obtaining the lru_next and lru_prev pointers of each node of the LRU management queue from its management node;

and obtaining the offset addresses of the nodes adjacent to a node from its lru_next and lru_prev pointers, and adding each obtained offset address to the start address of the shared memory to obtain the address of the adjacent node in the address space of the current process.

6. The method of claim 3, wherein, when the order of the nodes in the LRU management queue needs to be adjusted, traversing the data block ID query queue in a skip-list manner according to the management nodes and the manager comprises:

searching, by the data block ID and in a skip-list manner, the data block ID query queue in which the data block ID resides;

determining the target node through the data block ID query queue;

obtaining the lru_next and lru_prev pointers of the target node;

and connecting the node pointed to by the lru_next pointer with the node pointed to by the lru_prev pointer, and then placing the target node at the position of the first node of the LRU management queue, thereby adjusting the order of the nodes in the LRU management queue.

7. The method of claim 3, wherein, when the data block corresponding to a target node needs to be accessed, operating on the data block ID query queue in a skip-list manner according to the management nodes and the manager comprises:

obtaining the address pointer corresponding to the target node in the data block ID query queue;

and adding the offset address represented by the address pointer to the start address of the shared memory, and accessing the data block corresponding to the target node at the resulting address.

8. The method of claim 3, wherein, when data on the physical disk needs to be stored into the shared memory, traversing or operating on the LRU management queue and the data block ID query queue in a skip-list manner according to the management nodes and the manager comprises:

selecting, at the tail of the LRU management queue, a node whose priv field value is 0 as the storage node; if no such node exists at the tail, decrementing the priv field value of the tail node by 1 and judging whether the result is 0; if not, placing the tail node at the position of the first node of the LRU management queue and performing a new round of the eviction process, until a node whose priv field value is 0 is found;

after the storage node is determined, removing it from the LRU management queue and decrementing the value of the lru_size field by 1; if the storage node also exists in the data block ID query queue, removing it from the data block ID query queue as well and decrementing the value of the bid_size field by 1;

determining the start address of the data block pointed to by the storage node through the address pointer of the storage node;

storing the data from the physical disk into the determined data block according to the start address of the determined data block;

after the data has been stored, writing the ID of the data block into the blockid field of the storage node;

inserting the storage node into the data block ID query queue according to the ID of the data block, and incrementing the value of the bid_size field by 1;

and placing the storage node at the position of the first node of the LRU management queue, and incrementing the value of the lru_size field by 1.

9. The method of claim 3, wherein, when target data in the shared memory needs to be read, traversing or operating on the LRU management queue and the data block ID query queue in a skip-list manner according to the management nodes and the manager comprises:

determining, from the target data, the ID of the corresponding data block on the physical disk, the offset of the target data within the data block, and the size of the target data;

obtaining, through the determined data block ID, the corresponding target node in the data block ID query queue;

obtaining the data block corresponding to the target node through the address pointer of the target node, wherein the start address of the data block is the sum of the start address of the shared memory and the offset address represented by the address pointer;

obtaining the start address of the target data within the data block as the sum of the start address of the data block and the in-block offset;

and reading the target data from the start address of the target data.

10. A data block management system, applied to a shared memory comprising a management area and a data area, the system comprising:

a data block dividing module, configured to divide the data area into a plurality of data blocks, each data block having a preset size;

an address determining module, configured to obtain the start address of the shared memory in the current process after the shared memory is mapped to the current process, and to use the difference between the start address of a data block and the start address of the shared memory as the offset address of that data block;

a queue creating module, configured to create, for each physical disk, a corresponding LRU management queue and a data block ID query queue, wherein the data block ID query queue contains the IDs of the data blocks arranged in order, and the LRU management queue uses a priority-based eviction algorithm;

a management node module, configured to establish a management node for each data block, the management node comprising a plurality of first pointers, each first pointer pointing to an adjacent node in the LRU management queue, pointing to an adjacent node in the data block ID query queue, serving as the address pointer of a data block in the data area, characterizing the ID of a data block on the physical disk, or characterizing a priority in the LRU management queue; the value of each first pointer is an offset address relative to the start address of the shared memory, and the address pointer of a data block represents the offset address of that data block;

a manager module, configured to establish a manager for each physical disk, the manager storing a plurality of second pointers, each second pointer pointing to the first or last node of the LRU management queue, pointing to the first or last node of the data block ID query queue, characterizing the number of nodes in the LRU management queue, characterizing the number of nodes in the data block ID query queue, or characterizing the ID of the physical disk;

and a queue using module, configured to traverse or operate on the data block ID query queue in a skip-list manner according to the management nodes and the manager.

11. A storage medium, wherein the storage medium stores a program which, when triggered, executes the data block management method according to any one of claims 1 to 9.

Technical Field

The present application relates to the field of computer application technologies, and in particular, to a method, a system, and a storage medium for managing data blocks.

Background

On a server, all disks need to share one large buffer area used for caching data; this reduces the number of data copies made when a user reads data and thus improves the data-processing efficiency of the server.

The service corresponding to each disk runs in its own management process, so shared memory is needed to manage the cache across processes. Within the shared memory, resources must be allocated dynamically according to how hot each disk is: the hotter the disk, the more shared memory it is allocated as data-cache memory.

When the shared memory is used, it is divided into a management area and a data area; the data area is divided into a plurality of data blocks, and the management area maintains an LRU (Least Recently Used) queue for each disk. The priorities of the data blocks in the data area are recorded in the LRU management queue, and when a data block needs to be evicted, the data block with the lowest priority in the LRU management queue is selected for eviction.

However, in actual use, the existing method for managing data blocks in shared memory suffers from low data-node lookup efficiency and low data-processing efficiency.

Disclosure of Invention

In order to solve the above technical problem, the present application provides a data block management method, system, and storage medium, so as to improve the lookup efficiency and data-processing efficiency of shared-memory data block management.

In order to achieve the technical purpose, the embodiment of the application provides the following technical scheme:

A data block management method, applied to a shared memory comprising a management area and a data area, the method comprising the following steps:

dividing the data area into a plurality of data blocks, wherein the size of each data block is a preset size;

after the shared memory is mapped to the current process, obtaining the start address of the shared memory in the current process, and using the difference between the start address of a data block and the start address of the shared memory as the offset address of that data block;

establishing, for each physical disk, a corresponding LRU management queue and a data block ID query queue, wherein the data block ID query queue contains the IDs of the data blocks arranged in order, and the LRU management queue uses a priority-based eviction algorithm;

establishing a management node for each data block, the management node comprising a plurality of first pointers, each first pointer pointing to an adjacent node in the LRU management queue, pointing to an adjacent node in the data block ID query queue, serving as the address pointer of a data block in the data area, characterizing the ID of a data block on the physical disk, or characterizing a priority in the LRU management queue; the value of each first pointer is an offset address relative to the start address of the shared memory, and the address pointer of a data block represents the offset address of that data block;

establishing a manager for each physical disk, the manager storing a plurality of second pointers, each second pointer pointing to the first or last node of the LRU management queue, pointing to the first or last node of the data block ID query queue, characterizing the number of nodes in the LRU management queue, characterizing the number of nodes in the data block ID query queue, or characterizing the ID of the physical disk;
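The management node and the manager described above can be sketched as plain C structs. This is a minimal illustration, not the patented implementation: the field names follow the text, but the 64-bit offset widths, the layout, and the `shm_resolve` helper are assumptions.

```c
#include <stdint.h>

/* Per-data-block management node (the "first pointers"). Every link is
 * stored as a byte offset from the start address of the shared memory,
 * so the node stays valid in every process regardless of where the
 * segment is mapped. Field widths are assumptions. */
typedef struct mgmt_node {
    uint64_t lru_prev;  /* offset of the previous node in the LRU management queue */
    uint64_t lru_next;  /* offset of the next node in the LRU management queue */
    uint64_t bid_prev;  /* offset of the previous node in the data block ID query queue */
    uint64_t bid_next;  /* offset of the next node in the data block ID query queue */
    uint64_t addr;      /* offset of the data block within the shared memory */
    uint64_t blockid;   /* ID of the data block on the physical disk */
    uint32_t priv;      /* priority within the LRU management queue */
} mgmt_node;

/* Per-physical-disk manager (the "second pointers"). */
typedef struct disk_manager {
    uint64_t lru_head;  /* offset of the first node of the LRU management queue */
    uint64_t lru_tail;  /* offset of the last node of the LRU management queue */
    uint64_t bid_head;  /* offset of the first node of the data block ID query queue */
    uint64_t bid_tail;  /* offset of the last node of the data block ID query queue */
    uint64_t lru_size;  /* number of nodes in the LRU management queue */
    uint64_t bid_size;  /* number of nodes in the data block ID query queue */
    uint32_t did;       /* ID of the physical disk */
} disk_manager;

/* Resolve a stored offset against this process's mapping start address. */
static inline void *shm_resolve(void *shm_base, uint64_t offset) {
    return (char *)shm_base + offset;
}
```

Because every link is an offset rather than a raw pointer, two processes that map the shared memory at different virtual addresses can both walk the same queues by adding their own mapping base.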

and traversing or operating on the data block ID query queue in a skip-list manner according to the management nodes and the manager.

Optionally, the first pointer is an lru_prev pointer pointing to the previous node in the LRU management queue, an lru_next pointer pointing to the next node in the LRU management queue, a bid_prev pointer pointing to the previous node in the data block ID query queue, a bid_next pointer pointing to the next node in the data block ID query queue, an address pointer characterizing the offset address of a data block in the data area of the shared memory, a blockid field characterizing the ID of a data block on the physical disk, or a priv field characterizing a priority in the LRU management queue;

the address pointer pointing to an address other than the start address of the shared memory.

Optionally, the second pointer is an lru_head pointer pointing to the first node of the LRU management queue, an lru_tail pointer pointing to the last node of the LRU management queue, a bid_head pointer pointing to the first node of the data block ID query queue, a bid_tail pointer pointing to the last node of the data block ID query queue, an lru_size field characterizing the number of nodes in the LRU management queue, a bid_size field characterizing the number of nodes in the data block ID query queue, or a did field characterizing the ID of the physical disk.

Optionally, traversing the data block ID query queue in a skip-list manner according to the management nodes and the manager includes:

obtaining the bid_head pointer from the manager, and obtaining, from the bid_head pointer, the offset address of the first node of the data block ID query queue;

adding the start address of the shared memory to the offset address of the first node of the data block ID query queue to obtain the address of that node in the address space of the current process;

sequentially obtaining the bid_next and bid_prev pointers of each node of the data block ID query queue from its management node;

and obtaining the offset addresses of the nodes adjacent to a node from its bid_next and bid_prev pointers, and adding each obtained offset address to the start address of the shared memory to obtain the address of the adjacent node in the address space of the current process.
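The traversal described above can be sketched as follows. This is a minimal illustration under stated assumptions: links are byte offsets relative to the mapping base, and a zero offset is taken as the end-of-queue sentinel (the text does not specify the sentinel); the `bid_node` struct and `bid_queue_length` helper are hypothetical.

```c
#include <stdint.h>
#include <string.h>

/* Minimal node: every link is an offset from the shared-memory base,
 * so each hop is "load offset, add base". */
typedef struct {
    uint64_t bid_next;  /* offset of the next node, 0 = end of queue (assumption) */
    uint64_t bid_prev;  /* offset of the previous node */
    uint64_t blockid;   /* ID of the data block */
} bid_node;

/* Count the nodes reachable from the bid_head offset by repeatedly
 * adding the base address to each stored bid_next offset. */
static size_t bid_queue_length(void *shm_base, uint64_t bid_head) {
    size_t n = 0;
    for (uint64_t off = bid_head; off != 0; ) {
        bid_node *node = (bid_node *)((char *)shm_base + off);
        n++;
        off = node->bid_next;
    }
    return n;
}
```

The same pattern walks the queue backwards via `bid_prev`, and the LRU management queue via `lru_next`/`lru_prev`.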

Optionally, using the LRU management queue with the priority-based eviction algorithm includes:

obtaining the lru_head pointer from the manager, and obtaining, from the lru_head pointer, the offset address of the first node of the LRU management queue;

adding the start address of the shared memory to the offset address of the first node of the LRU management queue to obtain the address of that node in the address space of the current process;

sequentially obtaining the lru_next and lru_prev pointers of each node of the LRU management queue from its management node;

and obtaining the offset addresses of the nodes adjacent to a node from its lru_next and lru_prev pointers, and adding each obtained offset address to the start address of the shared memory to obtain the address of the adjacent node in the address space of the current process.

Optionally, when the order of the nodes in the LRU management queue needs to be adjusted, traversing the data block ID query queue in a skip-list manner according to the management nodes and the manager includes:

searching, by the data block ID and in a skip-list manner, the data block ID query queue in which the data block ID resides;

determining the target node through the data block ID query queue;

obtaining the lru_next and lru_prev pointers of the target node;

and connecting the node pointed to by the lru_next pointer with the node pointed to by the lru_prev pointer, and then placing the target node at the position of the first node of the LRU management queue, thereby adjusting the order of the nodes in the LRU management queue.
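The reordering step above can be illustrated with ordinary in-process pointers (in the real shared-memory layout each link would instead be an offset added to the mapping base). The `lnode` struct and `lru_move_to_front` helper are hypothetical names for this sketch.

```c
#include <stddef.h>

typedef struct lnode {
    struct lnode *lru_prev, *lru_next;
    int blockid;
} lnode;

/* Move target node `t` to the front of the list whose first node is *head:
 * first join the nodes pointed to by lru_prev and lru_next, then place
 * the target at the position of the first node. */
static void lru_move_to_front(lnode **head, lnode *t) {
    if (*head == t) return;                      /* already the first node */
    if (t->lru_prev) t->lru_prev->lru_next = t->lru_next;
    if (t->lru_next) t->lru_next->lru_prev = t->lru_prev;
    t->lru_prev = NULL;
    t->lru_next = *head;
    if (*head) (*head)->lru_prev = t;
    *head = t;
}
```

Because only a handful of links are rewritten, the adjustment is O(1), in contrast to the O(n) element shifting of an array-based queue.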

Optionally, when the data block corresponding to a target node needs to be accessed, operating on the data block ID query queue in a skip-list manner according to the management nodes and the manager includes:

obtaining the address pointer corresponding to the target node in the data block ID query queue;

and adding the offset address represented by the address pointer to the start address of the shared memory, and accessing the data block corresponding to the target node at the resulting address.

Optionally, when data on the physical disk needs to be stored into the shared memory, traversing or operating on the LRU management queue and the data block ID query queue in a skip-list manner according to the management nodes and the manager includes:

selecting, at the tail of the LRU management queue, a node whose priv field value is 0 as the storage node; if no such node exists at the tail, decrementing the priv field value of the tail node by 1 and judging whether the result is 0; if not, placing the tail node at the position of the first node of the LRU management queue and performing a new round of the eviction process, until a node whose priv field value is 0 is found;

after the storage node is determined, removing it from the LRU management queue and decrementing the value of the lru_size field by 1; if the storage node also exists in the data block ID query queue, removing it from the data block ID query queue as well and decrementing the value of the bid_size field by 1;

determining the start address of the data block pointed to by the storage node through the address pointer of the storage node;

storing the data from the physical disk into the determined data block according to the start address of the determined data block;

after the data has been stored, writing the ID of the data block into the blockid field of the storage node;

inserting the storage node into the data block ID query queue according to the ID of the data block, and incrementing the value of the bid_size field by 1;

and placing the storage node at the position of the first node of the LRU management queue, and incrementing the value of the lru_size field by 1.
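The eviction scan above can be sketched as follows, using ordinary in-process pointers (offsets in the real shared-memory layout). The tail node is inspected; a node whose priv field is 0 is chosen, otherwise priv is decremented and, if still non-zero, the node is rotated to the head so the next round can examine a new tail. The `pnode` struct and `lru_pick_victim` helper are hypothetical; removing the chosen node and updating the size fields are the separate follow-up steps described in the text.

```c
#include <stddef.h>

typedef struct pnode {
    struct pnode *prev, *next;
    unsigned priv;   /* priority within the LRU management queue */
    int blockid;
} pnode;

/* Scan from the tail until a node with priv == 0 is found. */
static pnode *lru_pick_victim(pnode **head, pnode **tail) {
    for (;;) {
        pnode *t = *tail;
        if (t == NULL) return NULL;     /* empty queue */
        if (t->priv == 0) return t;     /* tail already has priority 0 */
        t->priv--;                      /* decrement the priv field by 1 */
        if (t->priv == 0) return t;     /* reached 0 after the decrement */
        if (t->prev == NULL) continue;  /* single-node queue: keep decrementing */
        /* otherwise move the tail node to the head and start a new round */
        *tail = t->prev;
        (*tail)->next = NULL;
        t->prev = NULL;
        t->next = *head;
        (*head)->prev = t;
        *head = t;
    }
}
```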

Optionally, when target data in the shared memory needs to be read, traversing or operating on the LRU management queue and the data block ID query queue in a skip-list manner according to the management nodes and the manager includes:

determining, from the target data, the ID of the corresponding data block on the physical disk, the offset of the target data within the data block, and the size of the target data;

obtaining, through the determined data block ID, the corresponding target node in the data block ID query queue;

obtaining the data block corresponding to the target node through the address pointer of the target node, wherein the start address of the data block is the sum of the start address of the shared memory and the offset address represented by the address pointer;

obtaining the start address of the target data within the data block as the sum of the start address of the data block and the in-block offset;

and reading the target data from the start address of the target data.
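The address arithmetic of this read path can be sketched in a few lines: the start address of the data block is the shared-memory base plus the offset held in the node's address pointer, and the target data starts at that address plus the in-block offset. The `read_target` helper is a hypothetical name for this sketch.

```c
#include <stdint.h>
#include <string.h>

/* Copy `size` bytes of target data out of the shared memory:
 * block start = base + block offset; data start = block start + in-block offset. */
static void read_target(const void *shm_base, uint64_t block_off,
                        uint64_t in_block_off, void *out, size_t size) {
    const char *block = (const char *)shm_base + block_off;  /* block start address */
    memcpy(out, block + in_block_off, size);                 /* read the target data */
}
```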

A data block management system, applied to a shared memory comprising a management area and a data area, the system comprising:

a data block dividing module, configured to divide the data area into a plurality of data blocks, each data block having a preset size;

an address determining module, configured to obtain the start address of the shared memory in the current process after the shared memory is mapped to the current process, and to use the difference between the start address of a data block and the start address of the shared memory as the offset address of that data block;

a queue creating module, configured to create, for each physical disk, a corresponding LRU management queue and a data block ID query queue, wherein the data block ID query queue contains the IDs of the data blocks arranged in order, and the LRU management queue uses a priority-based eviction algorithm;

a management node module, configured to establish a management node for each data block, the management node comprising a plurality of first pointers, each first pointer pointing to an adjacent node in the LRU management queue, pointing to an adjacent node in the data block ID query queue, serving as the address pointer of a data block in the data area, characterizing the ID of a data block on the physical disk, or characterizing a priority in the LRU management queue; the value of each first pointer is an offset address relative to the start address of the shared memory, and the address pointer of a data block represents the offset address of that data block;

a manager module, configured to establish a manager for each physical disk, the manager storing a plurality of second pointers, each second pointer pointing to the first or last node of the LRU management queue, pointing to the first or last node of the data block ID query queue, characterizing the number of nodes in the LRU management queue, characterizing the number of nodes in the data block ID query queue, or characterizing the ID of the physical disk;

and a queue using module, configured to traverse or operate on the data block ID query queue in a skip-list manner according to the management nodes and the manager.

A storage medium stores a program which, when triggered, executes any one of the data block management methods described above.

It can be seen from the foregoing technical solutions that the embodiments of the present application provide a data block management method, system, and storage medium, where the method effectively improves data block lookup efficiency through the coordinated use of a data block ID query queue, an LRU management queue, and management nodes; by traversing or operating on the LRU management queue and/or the data block ID query queue as skip lists, it avoids the data-movement performance problem of an array-based implementation and improves data-processing efficiency.

Drawings

In order to illustrate the embodiments of the present application or the prior-art technical solutions more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the following drawings depict only embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.

Fig. 1 is a flowchart illustrating a method for managing a data block according to an embodiment of the present application.

Detailed Description

As described in the Background, the prior-art method for managing data blocks in shared memory suffers from low data-node lookup efficiency and low data-processing efficiency.

Specifically, in the prior art the shared memory is divided into a management area and a data area, and the management area maintains an LRU management queue for each disk, implemented as an array. Inserting data into the middle of an array requires first moving every element after the insertion point backwards and only then writing the new data into the freed position, so the operation is slow because node data in the array is constantly being moved. If the number of nodes is large, the number of array elements touched by each operation is correspondingly large, and excessive data movement degrades performance. For example, if the ID being looked up is at the tail of the queue and its node must be placed at the front of the array, the entire array takes part in the move. When the array is large and such operations are frequent, data-processing efficiency drops.
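The array behaviour criticised above can be shown in a few lines: inserting at position `pos` must first shift every later element back by one slot, which costs O(n) per insertion. The `array_insert` helper is a hypothetical illustration, not the prior-art code.

```c
#include <string.h>

/* Insert `value` at index `pos` of `arr` (current length *len, pos <= *len):
 * every element from pos onwards is moved back one slot first. */
static void array_insert(int *arr, size_t *len, size_t pos, int value) {
    memmove(&arr[pos + 1], &arr[pos], (*len - pos) * sizeof arr[0]);
    arr[pos] = value;
    (*len)++;
}
```

A linked queue avoids this entirely: moving a node only rewrites a constant number of links, which is the motivation for the node-and-offset design below.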

In view of this, an embodiment of the present application provides a method for managing a data block, which is applied to a shared memory, where the shared memory includes a management area and a data area, and the method for managing the data block includes:

dividing the data area into a plurality of data blocks, wherein the size of each data block is a preset size;

after the shared memory is mapped to the current process, the first address of the shared memory in the current process is obtained, and the difference value between the first address of the data block and the first address of the shared memory is used as the offset address of the data block;

establishing a corresponding LRU management queue and a data block ID query queue for each physical disk, wherein the data block ID query queue comprises IDs of data blocks which are arranged in order, and the LRU management queue uses a priority elimination algorithm;

establishing a management node for the data block, the management node comprising a plurality of first pointers, the first pointers pointing to adjacent nodes in the LRU management queue or to adjacent nodes in the data block ID query queue or to address pointers for data blocks in the data area or to IDs characterizing data blocks in the physical disk or to characterize priorities in the LRU management queue; the address of the first pointer is an offset address based on a first address of the shared memory, and the address pointer of the data block is used for representing the offset address of the data block;

establishing a manager for each of the physical disks, the manager having stored therein a plurality of second pointers, the second pointers pointing to a first or last node of the LRU management queue or pointing to a first or last node of the data block ID query queue or characterizing a number of nodes in the LRU management queue or characterizing a number of nodes in the data block ID query queue or characterizing an ID of the physical disk;

and traversing or using the data block ID query queue in a skip-list manner according to the management node and the manager.

The data block management method effectively improves data-block lookup efficiency through the combined use of the data block ID query queue, the LRU management queue, and the management node; and by traversing or using the LRU management queue and/or the data block ID query queue in a skip-list manner, it avoids the data-movement performance problem caused by an array-based approach and improves data processing efficiency.

The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

An embodiment of the present application provides a method for managing a data block, which is applied to a shared memory as shown in fig. 1, where the shared memory includes a management area and a data area, and the method for managing the data block includes:

S101: dividing the data area into a plurality of data blocks, wherein the size of each data block is a preset size;

For example, assuming a preset size of 4 MB (megabytes), the data area is divided into a plurality of 4 MB data blocks.

S102: after the shared memory is mapped to the current process, obtaining the first address of the shared memory in the current process, and using the difference between the first address of the data block and the first address of the shared memory as the offset address of the data block;

In this embodiment, the offset address of a data block in the shared memory is the first address of the data block minus the first address of the shared memory. This offset address is used as the address at which the data block is accessed in the shared memory. If a certain data block lies at offset 10000 of the shared memory, then in every process the block lies 10000 past that process's first address of the mapped shared memory, and the address space range [10000, 10000 + 4 × 1024 × 1024) is the space of that data block. On this basis, the related data structures can be built in the shared memory by this method. The address space of each data block immediately follows that of the previous one: if the previous block starts at 10000, the next block starts at 10000 + 4 × 1024 × 1024.
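The offset arithmetic above can be sketched as follows; this is a toy model, with `block_offset` and `block_address` as illustrative names rather than anything defined in this application:

```python
# Minimal sketch of the offset-address scheme described above (assumed names).
BLOCK_SIZE = 4 * 1024 * 1024  # preset data-block size of 4 MB

def block_offset(block_index, first_block_offset):
    # Blocks are laid out back to back, so the n-th block starts
    # n * 4 MB past the first block's offset.
    return first_block_offset + block_index * BLOCK_SIZE

def block_address(process_base, offset):
    # Per-process address = this process's mapping base + stored offset.
    return process_base + offset

off0 = block_offset(0, 10000)  # the block at offset 10000 in the text
off1 = block_offset(1, 10000)  # next block starts at 10000 + 4 * 1024 * 1024
```

Because only offsets are stored, the same arithmetic works no matter where each process maps the shared memory.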

S103: establishing a corresponding LRU management queue and a data block ID query queue for each physical disk, wherein the data block ID query queue comprises the IDs of the data blocks arranged in order, and the LRU management queue uses a priority elimination algorithm;

S104: establishing a management node for the data block, the management node comprising a plurality of first pointers, wherein each first pointer points to an adjacent node in the LRU management queue, points to an adjacent node in the data block ID query queue, serves as the address pointer of a data block in the data area, characterizes the ID of a data block in the physical disk, or characterizes a priority in the LRU management queue; the address held by a first pointer is an offset address based on the first address of the shared memory, and the address pointer of the data block represents the offset address of the data block;

S105: establishing a manager for each physical disk, the manager storing a plurality of second pointers, wherein each second pointer points to the first or last node of the LRU management queue, points to the first or last node of the data block ID query queue, characterizes the number of nodes in the LRU management queue, characterizes the number of nodes in the data block ID query queue, or characterizes the ID of the physical disk;

S106: traversing or using the data block ID query queue in a skip-list manner according to the management node and the manager.

A skip list is a linked list augmented with forward pointers. It is a kind of linked list, but it looks up data in an ordered set faster than an ordinary linked list, combining the strengths of arrays and linked lists, and is therefore highly efficient to use. The skip list, as its name suggests, is a randomized data structure: an ordered linked list that supports binary-search-style lookup. It adds multiple levels of indexes on top of the original ordered linked list and achieves fast lookup through those indexes. A skip list improves not only search performance but also the performance of insert and delete operations.

Each node in a linked list includes next and prev pointers. Here, however, the next and prev pointers do not point to virtual addresses in the actual current process context; instead, they hold offsets (offset addresses) within the shared memory, relative to the starting address of the shared memory in the current process (i.e., the first address of the shared memory).

In actual linked-list operations, each process's starting address plus the offset gives the address pointer of the actual node in the current process, and the node's data can be accessed through this actual address pointer.
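As a sketch of this rule, the same stored offset resolves to a valid node address under any process's mapping base; the base values below are made up for illustration:

```python
def node_address(process_base, stored_offset):
    # Actual node address in the current process = mapping base + offset.
    return process_base + stored_offset

NODE_OFFSET = 10000            # offset held in an lru_next / bid_next field
base_a = 0x7F0000000000        # hypothetical mapping base in process A
base_b = 0x560000000000        # hypothetical mapping base in process B

addr_a = node_address(base_a, NODE_OFFSET)
addr_b = node_address(base_b, NODE_OFFSET)
# Different virtual addresses, but both refer to the same shared node.
```

This is why raw virtual-address pointers cannot be stored in the shared region: they would only be valid in the process that wrote them.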

Based on this, the management method for the data block provided by this embodiment effectively improves data-block lookup efficiency through the combined use of the data block ID query queue, the LRU management queue, and the management node; and by traversing or using the LRU management queue and/or the data block ID query queue in a skip-list manner, it avoids the data-movement performance problem caused by an array-based approach and improves data processing efficiency.

Optionally, the first pointer is an lru_prev pointer pointing to the previous node of the LRU management queue, or an lru_next pointer pointing to the next node of the LRU management queue, or a bid_prev pointer pointing to the previous node of the data block ID query queue, or a bid_next pointer pointing to the next node of the data block ID query queue, or an address pointer (block_ptr) representing the offset address of a data block of the data area in the shared memory, or a blockid field representing the ID of a data block in the physical disk, or a priv field representing the priority in the LRU management queue;

the address pointer points to an address other than the first address of the shared memory.

Optionally, the second pointer is an lru_head pointer pointing to the first node of the LRU management queue, or an lru_tail pointer pointing to the last node of the LRU management queue, or a bid_head pointer pointing to the first node of the data block ID query queue, or a bid_tail pointer pointing to the last node of the data block ID query queue, or an lru_size field representing the number of nodes in the LRU management queue, or a bid_size field representing the number of nodes in the data block ID query queue, or a did field representing the ID of the physical disk.

Refer to Tables 1 and 2 below. Table 1 shows the data structure fields of the management node, i.e., the names of the first pointers and their descriptions; Table 2 shows the manager fields and their descriptions.

TABLE 1

Data structure field    Description
lru_prev                Points to the previous node of the LRU management queue
lru_next                Points to the next node of the LRU management queue
bid_prev                Previous node of the disk's data block ID query queue
bid_next                Next node of the disk's data block ID query queue
block_ptr               Address pointer of the data block in the data area of the shared memory
blockid                 ID of the corresponding data block on the hard disk
priv                    Priority in the LRU management queue

For the lru_prev pointer, which points to the previous node of the LRU management queue: its value is 0 at the first node, and the address it holds is an offset address based on the first address of the shared memory;

For the lru_next pointer, which points to the next node of the LRU management queue: its value is 0 at the last node, and the address it holds is an offset address based on the first address of the shared memory;

For the bid_prev pointer, which points to the previous node of the disk's data block ID query queue: its value is 0 at the first node, and the address it holds is an offset address based on the first address of the shared memory;

For the bid_next pointer, which points to the next node of the disk's data block ID query queue: its value is 0 at the last node, and the address it holds is an offset address based on the first address of the shared memory.

For the address pointer (block_ptr), which characterizes the data block of the data area in the shared memory: the address it holds is an offset address based on the first address of the shared memory;

For the blockid field, it corresponds to the data block ID on the hard disk. The value is initialized to -1; when the value is greater than or equal to 0, it indicates the ID of the corresponding data block on the disk, and the data block ID query queue is kept as an ordered set based on this value.

For the priv field, it represents the priority in the LRU management queue and serves as the elimination criterion. When the priv value is 0, the node can be eliminated from the queue, and such a node must be at the tail of the LRU management queue. Each time the user accesses the node, this value is incremented by 1 and the node is placed at the head of the LRU management queue. If the value at the tail is greater than 0, it is first decremented by 1; if it is still greater than 0, the node is placed back at the head of the queue for a new round of elimination; if it is 0, the node is eliminated and can be deleted from the LRU management queue, and it is also deleted from the data block ID query queue at the same time.

TABLE 2

Manager field    Description
lru_head         Points to the first node of the LRU management queue
lru_tail         Points to the last node of the LRU management queue
bid_head         Points to the first node of the data block ID query queue
bid_tail         Points to the last node of the data block ID query queue
lru_size         Number of nodes in the LRU management queue
bid_size         Number of nodes in the data block ID query queue
did              ID of the physical disk
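The two field sets can be summarized as a pair of record types. This is a Python sketch of the assumed layout, not the actual in-memory structures (which would be fixed-size structs in the shared region, with every link field holding an offset and 0 meaning "no node"):

```python
from dataclasses import dataclass

@dataclass
class ManagementNode:
    """Per-data-block management node (fields of Table 1)."""
    lru_prev: int = 0    # offset of previous node in the LRU management queue
    lru_next: int = 0    # offset of next node in the LRU management queue
    bid_prev: int = 0    # offset of previous node in the block-ID query queue
    bid_next: int = 0    # offset of next node in the block-ID query queue
    block_ptr: int = 0   # offset of this node's data block in the data area
    blockid: int = -1    # data-block ID on disk; -1 until a block is cached
    priv: int = 0        # LRU priority; 0 means the node is evictable

@dataclass
class Manager:
    """Per-physical-disk manager (fields of Table 2)."""
    lru_head: int = 0    # offset of the first node of the LRU management queue
    lru_tail: int = 0    # offset of the last node of the LRU management queue
    bid_head: int = 0    # offset of the first node of the block-ID query queue
    bid_tail: int = 0    # offset of the last node of the block-ID query queue
    lru_size: int = 0    # number of nodes in the LRU management queue
    bid_size: int = 0    # number of nodes in the block-ID query queue
    did: int = 0         # ID of the physical disk this manager belongs to
```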

The following describes specific possible cases of "traversing or using the data block ID query queue in a skip-list manner according to the management node and the manager".

Optionally, traversing the data block ID query queue in a skip list manner according to the management node and the manager includes:

acquiring the bid_head pointer through the manager, and obtaining the offset address of the first node of the data block ID query queue through the bid_head pointer;

calculating the sum of the first address of the shared memory and the offset address of the first node of the data block ID query queue to obtain the address of the first node of the data block ID query queue based on the address space of the current process;

sequentially acquiring the bid_next and bid_prev pointers of the nodes of the data block ID query queue according to the management nodes;

and acquiring the offset addresses of the nodes adjacent to each node according to the acquired bid_next and bid_prev pointers, and calculating the sum of each acquired offset address and the first address of the shared memory to obtain the addresses of the adjacent nodes of the data block ID query queue based on the address space of the current process.
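The traversal steps above can be sketched as follows, modeling the shared region as a dict keyed by offset (in the real system each stored offset would be added to the process's mapping base instead of looked up in a dict):

```python
def traverse_bid_queue(nodes, manager):
    """Yield block IDs in queue order by following bid_next offsets
    from bid_head; offset 0 marks the end of the queue."""
    offset = manager["bid_head"]          # offset of the first node
    while offset != 0:
        node = nodes[offset]              # base + offset in the real system
        yield node["blockid"]
        offset = node["bid_next"]

# Toy queue: three nodes at made-up offsets 100, 200, 300.
nodes = {
    100: {"blockid": 3, "bid_next": 200},
    200: {"blockid": 7, "bid_next": 300},
    300: {"blockid": 9, "bid_next": 0},
}
manager = {"bid_head": 100, "bid_tail": 300}
ids = list(traverse_bid_queue(nodes, manager))
```

Traversing backwards via bid_prev from bid_tail works symmetrically.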

Optionally, using the LRU management queue, which adopts the priority elimination algorithm, includes:

obtaining the lru_head pointer through the manager, and obtaining the offset address of the first node of the LRU management queue through the lru_head pointer;

calculating the sum of the first address of the shared memory and the offset address of the first node of the LRU management queue to obtain the address of the first node of the LRU management queue based on the address space of the current process, and accessing the first node of the LRU management queue through this address;

sequentially acquiring the lru_next and lru_prev pointers of the nodes of the LRU management queue according to the management nodes;

acquiring the offset addresses of the nodes adjacent to each node according to the acquired lru_next and lru_prev pointers, calculating the sum of each acquired offset address and the first address of the shared memory to obtain the addresses of the adjacent nodes of the LRU management queue based on the address space of the current process, and accessing those adjacent nodes through the obtained addresses.

Optionally, when the order of the nodes in the LRU management queue needs to be adjusted, traversing the data block ID query queue in a skip-list manner according to the management node and the manager includes:

querying, in a skip-list manner and according to the data block ID, the data block ID query queue in which the data block ID is located;

determining a target node through the data block ID query queue;

obtaining the lru_next and lru_prev pointers of the target node;

after connecting the node pointed to by the lru_next pointer and the node pointed to by the lru_prev pointer to each other, placing the target node at the position of the first node of the LRU management queue, thereby adjusting the order of the nodes in the LRU management queue.
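A sketch of this reorder operation, again with nodes keyed by made-up offsets in a toy dict and 0 standing for "no node":

```python
def move_to_head(nodes, manager, target_off):
    """Unlink the node at target_off and splice it in at the LRU head."""
    node = nodes[target_off]
    prev_off, next_off = node["lru_prev"], node["lru_next"]
    # Connect the target's neighbours to each other.
    if prev_off:
        nodes[prev_off]["lru_next"] = next_off
    else:
        manager["lru_head"] = next_off
    if next_off:
        nodes[next_off]["lru_prev"] = prev_off
    else:
        manager["lru_tail"] = prev_off
    # Place the target at the position of the first node of the queue.
    old_head = manager["lru_head"]
    node["lru_prev"], node["lru_next"] = 0, old_head
    if old_head:
        nodes[old_head]["lru_prev"] = target_off
    else:
        manager["lru_tail"] = target_off
    manager["lru_head"] = target_off

# Example: queue A(10) -> B(20) -> C(30); moving C to the head gives C, A, B.
nodes = {
    10: {"lru_prev": 0,  "lru_next": 20},
    20: {"lru_prev": 10, "lru_next": 30},
    30: {"lru_prev": 20, "lru_next": 0},
}
manager = {"lru_head": 10, "lru_tail": 30}
move_to_head(nodes, manager, 30)
```

Because only neighbour links are rewritten, no node data is moved, which is the advantage over an array-based queue noted earlier.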

Optionally, when the data block corresponding to a target node needs to be accessed, using the data block ID query queue in a skip-list manner according to the management node and the manager includes:

acquiring a data address pointer corresponding to the target node in the data block ID query queue;

and calculating the sum of the offset address represented by the data address pointer and the first address of the shared memory, and accessing the data block corresponding to the target node according to the calculation result.
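A sketch of this access step, with a bytearray standing in for the mapped shared memory; the offsets and block contents are made up for illustration:

```python
shm = bytearray(64)              # toy stand-in for the mapped shared region
node = {"block_ptr": 16}         # address pointer = offset of the data block
shm[16:21] = b"hello"            # pretend the data block holds this content

# Sum of the offset represented by the address pointer and the region's
# first address; with a bytearray the "first address" is simply index 0.
start = node["block_ptr"]
data = bytes(shm[start:start + 5])
```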

Optionally, when data in the physical disk needs to be stored in the shared memory, traversing or using the LRU management queue and the data block ID query queue in a skip-list manner according to the management node and the manager includes:

selecting, at the tail of the LRU management queue, a node whose priv field value is 0 as the storage node; if no node with a priv field value of 0 exists at the tail of the LRU management queue, judging whether the priv field value of the tail node minus 1 is 0; if not, placing the tail node at the position of the first node of the LRU management queue and performing a new round of elimination until a node whose priv field value is 0 is found;

after the storage node is determined, taking the storage node out of the LRU management queue and decrementing the value of the lru_size field by 1; if the storage node also exists in the data block ID query queue, taking it out of the data block ID query queue and decrementing the value of the bid_size field by 1;

determining the first address of the data block pointed to by the storage node through the address pointer of the storage node;

storing the data in the physical disk into the determined data block according to the first address of the determined data block;

after the data storage is finished, putting the ID of the data block into the blockid field of the storage node;

inserting the storage node into the data block ID query queue according to the ID of the data block, and incrementing the value of the bid_size field by 1;

placing the storage node at the position of the first node of the LRU management queue, and incrementing the value of the lru_size field by 1.
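The victim-selection loop in the first step can be sketched as follows, with the LRU queue modeled as a plain list (head at index 0, tail at the end) rather than the offset-linked structure:

```python
def select_victim(lru):
    """Scan from the tail: evict the first node whose priv reaches 0,
    rotating still-hot nodes back to the head for a new elimination round."""
    while True:
        tail = lru[-1]
        if tail["priv"] == 0:
            return lru.pop()          # already evictable
        tail["priv"] -= 1             # first decrement by 1
        if tail["priv"] > 0:
            lru.insert(0, lru.pop())  # still > 0: back to the head
        else:
            return lru.pop()          # reached 0: evict this node

# Head A (priv 2), tail B (priv 1): B's priv drops to 0, so B is evicted first.
lru = [{"id": "A", "priv": 2}, {"id": "B", "priv": 1}]
first = select_victim(lru)
second = select_victim(lru)  # A rotates once (priv 2 -> 1), then is evicted
```

The loop is guaranteed to terminate because every pass decrements some node's priv.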

Optionally, when target data in the shared memory needs to be read, traversing or using the LRU management queue and the data block ID query queue in a skip-list manner according to the management node and the manager includes:

determining, according to the target data, the data block ID in the physical disk corresponding to the target data, the read offset of the data within the data block, and the size of the target data;

acquiring a target node corresponding to the determined data block ID in the data block ID query queue through the determined data block ID;

acquiring the data block corresponding to the target node through the address pointer of the target node, wherein the first address of the data block is the sum of the first address of the shared memory and the offset address represented by the address pointer;

obtaining the first address of the target data in the determined data block as the sum of the first address of the data block and the offset inside the data block;

and reading the target data according to the first address of the target data.
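These read steps can be sketched with the same bytearray model; the block offset, in-block offset, and contents are made up for illustration:

```python
shm = bytearray(128)                      # toy stand-in for the shared region
node = {"block_ptr": 32, "blockid": 5}    # target node found via the bid queue
inner_offset, size = 8, 4                 # read offset inside the block + size
shm[32 + 8:32 + 8 + 4] = b"DATA"          # pretend this is the cached content

# First address of the block = region base + block_ptr (base is index 0 here);
# first address of the target data = block address + in-block offset.
start = node["block_ptr"] + inner_offset
target = bytes(shm[start:start + size])
```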

The following describes a management system of data blocks provided in an embodiment of the present application, and the management system of data blocks described below may be referred to in correspondence with the management method of data blocks described above.

Correspondingly, an embodiment of the present application provides a management system for a data block, which is applied to a shared memory, where the shared memory includes a management area and a data area, and the management system for the data block includes:

the data block dividing module is used for dividing the data area into a plurality of data blocks, and the size of each data block is a preset size;

an address determining module, configured to obtain a first address of the shared memory in a current process after the shared memory is mapped to the current process, and use a difference between the first address of the data block and the first address of the shared memory as an offset address of the data block;

a queue creating module, configured to create a corresponding LRU management queue and a data block ID query queue for each physical disk, wherein the data block ID query queue comprises the IDs of the data blocks arranged in order, and the LRU management queue uses a priority elimination algorithm;

a management node module, configured to establish a management node for the data block, the management node comprising a plurality of first pointers, wherein each first pointer points to an adjacent node in the LRU management queue, points to an adjacent node in the data block ID query queue, serves as the address pointer of a data block in the data area, characterizes the ID of a data block in the physical disk, or characterizes a priority in the LRU management queue; the address held by a first pointer is an offset address based on the first address of the shared memory, and the address pointer of the data block represents the offset address of the data block;

a manager module, configured to establish a manager for each physical disk, the manager storing a plurality of second pointers, wherein each second pointer points to the first or last node of the LRU management queue, points to the first or last node of the data block ID query queue, characterizes the number of nodes in the LRU management queue, characterizes the number of nodes in the data block ID query queue, or characterizes the ID of the physical disk;

and a queue using module, configured to traverse or use the data block ID query queue in a skip-list manner according to the management node and the manager.

Correspondingly, an embodiment of the present application further provides a storage medium, where a program is stored in the storage medium, and when the program is executed, the method for managing a data block according to any of the above embodiments is performed.

To sum up, the embodiments of the present application provide a method, a system, and a storage medium for managing a data block, wherein the management method effectively improves data-block lookup efficiency through the combined use of the data block ID query queue, the LRU management queue, and the management node; and by traversing or using the LRU management queue and/or the data block ID query queue in a skip-list manner, it avoids the data-movement performance problem caused by an array-based approach and improves data processing efficiency.

The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.

The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
