Data caching method, device, chip and storage medium

Document No.: 1831073  Publication date: 2021-11-12

Note: This technology, "Data caching method, device, chip and storage medium" (一种数据缓存方法、装置、芯片和存储介质), was designed and created by 顾泓, 刘衡祁, 仲建锋, 胡达 and 周峰 on 2020-04-27. Its main content is as follows: The embodiment of the application discloses a data caching method, a data caching apparatus, a chip and a storage medium. The method comprises: acquiring a data caching request; determining a target cache unit according to the current remaining cache space of each cache unit; and caching the data to be cached in the data caching request to the target cache unit. Compared with the prior art, this scheme determines the target cache unit according to the current remaining cache space of each cache unit, achieving maximum utilization of the cache space.

1. A method for caching data, comprising:

acquiring a data caching request;

determining a target cache unit according to the current remaining cache space of each cache unit;

and caching the data to be cached in the data caching request to the target cache unit.

2. The method of claim 1, wherein determining the target cache unit according to the current remaining cache space of each cache unit comprises:

determining the size of the current remaining cache space of each cache unit, and sorting the cache units accordingly;

and determining a target cache unit according to the sorting result.

3. The method of claim 1, further comprising:

creating an address index, wherein the address index comprises a message information maintenance table and a message linked list;

the message information maintenance table is used for storing message packet information corresponding to the data to be cached;

the message linked list is used for storing message fragment information corresponding to the message packet.

4. The method according to claim 3, further comprising, after caching the data to be cached in the data caching request to the target cache unit:

and updating the message information maintenance table and the message linked list so as to read the data to be cached according to the updated message information maintenance table and the updated message linked list.

5. The method according to any one of claims 1-4, further comprising, before determining the target cache unit according to the current remaining cache space of each cache unit:

and dividing the cache space for storing the data to obtain cache units.

6. A data caching apparatus, comprising:

the acquisition module is used for acquiring a data cache request;

the determining module is used for determining a target cache unit according to the current remaining cache space of each cache unit;

and the caching module is used for caching the data to be cached in the data caching request to the target cache unit.

7. The apparatus of claim 6, wherein the determining module is specifically configured to:

determining the size of the current remaining cache space of each cache unit, and sorting the cache units accordingly;

and determining a target cache unit according to the sorting result.

8. The apparatus of claim 6, further comprising:

the system comprises a creating module, a sending module and a processing module, wherein the creating module is used for creating an address index, and the address index comprises a message information maintenance table and a message linked list;

the message information maintenance table is used for storing message packet information corresponding to the data to be cached;

the message linked list is used for storing message fragment information corresponding to the message packet.

9. A chip, comprising:

one or more processors;

a memory for storing one or more programs;

the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the data caching method of any one of claims 1-5.

10. A storage medium on which a computer program is stored, which program, when being executed by a processor, carries out the data caching method as claimed in any one of claims 1 to 5.

Technical Field

The embodiments of the application relate to the technical field of data caching, and in particular to a data caching method, a data caching apparatus, a chip and a storage medium.

Background

In a digital communication system, the allocation of buffer space among multiple channels is a common problem. If too much cache space is allocated to a channel, resources are wasted; if too little is allocated, the performance of the channel suffers. The quality of buffer space allocation therefore directly determines the resource utilization of the buffer and the overall performance of the system.

For multi-channel data caching, two approaches are mainly adopted at present. One is to allocate the same buffer space to each channel, which leads to resource waste and loss of overall system performance when channel bandwidths are unbalanced. The other is to pre-configure the buffer space of each channel, which requires estimating and configuring the buffer space in advance according to bandwidth; such differentiated configuration reduces resource waste to a certain extent, but resources are still wasted when a channel has no data to transmit.

Summary of the Application

The embodiments of the application provide a data caching method, a data caching apparatus, a chip and a storage medium, so as to achieve maximum utilization of cache resources.

In a first aspect, an embodiment of the present application provides a data caching method, including:

acquiring a data caching request;

determining a target cache unit according to the current remaining cache space of each cache unit;

and caching the data to be cached in the data caching request to the target cache unit.

In a second aspect, an embodiment of the present application further provides a data caching apparatus, including:

the acquisition module is used for acquiring a data cache request;

the determining module is used for determining a target cache unit according to the current remaining cache space of each cache unit;

and the caching module is used for caching the data to be cached in the data caching request to the target cache unit.

In a third aspect, an embodiment of the present application further provides a chip, including:

one or more processors;

a memory for storing one or more programs;

the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the data caching method as described in the first aspect.

In a fourth aspect, an embodiment of the present application further provides a storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the data caching method according to the first aspect.

The embodiments of the application provide a data caching method, a data caching apparatus, a chip and a storage medium. The method acquires a data caching request, determines a target cache unit according to the current remaining cache space of each cache unit, and caches the data to be cached in the data caching request to the target cache unit. Compared with the prior art, this scheme determines the target cache unit according to the current remaining cache space of each cache unit, achieving maximum utilization of the cache space.

Drawings

Fig. 1 is a flowchart of a data caching method according to an embodiment of the present application;

fig. 2 is a schematic diagram of cache partitioning according to an embodiment of the present application;

fig. 3 is a schematic diagram of an address index according to an embodiment of the present application;

fig. 4 is a flowchart of another data caching method according to an embodiment of the present application;

fig. 5 is a schematic diagram of reading cache data according to an embodiment of the present application;

fig. 6 is a structural diagram of a data caching apparatus according to an embodiment of the present application;

fig. 7 is a structural diagram of a chip according to an embodiment of the present application.

Detailed Description

The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some of the structures related to the present application are shown in the drawings, not all of the structures. In addition, the embodiments and features of the embodiments in the present application may be combined with each other without conflict.

Fig. 1 is a flowchart of a data caching method according to an embodiment of the present application. This embodiment is applicable to multi-channel data caching in scenarios such as data exchange or processing and forwarding. The method may be executed by a data caching apparatus, which may be implemented in software and/or hardware and may be integrated in a chip; the chip may be a network processor chip or another chip having the data caching function. Referring to Fig. 1, the method comprises the following steps:

and S110, acquiring a data caching request.

Optionally, the data caching request includes the type and size of the data to be cached. Each type, or each piece, of data to be cached may be treated as one way, and a channel may be allocated to each way of data when caching is performed. For example, if there are 32 pieces of data to be cached, they may be recorded as 32 ways, 32 channels are allocated to them, and the data caching is completed through the 32 channels.
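For illustration only, the following C++ sketch models the per-way channel allocation described above; the structure name CacheRequest and its fields are assumptions introduced here for readability and do not come from the original design. Each piece of data to be cached is treated as one way and assigned its own channel number.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Hypothetical descriptor for one piece of data to be cached (one "way").
    struct CacheRequest {
        uint32_t data_type = 0;  // type of the data to be cached
        uint32_t data_size = 0;  // size of the data to be cached, in bytes
        int      channel   = -1; // channel allocated to this way, -1 until assigned
    };

    // Treat each piece of data as one way and allocate one channel per way,
    // e.g. 32 pieces of data -> 32 ways -> channels 0..31.
    void allocate_channels(std::vector<CacheRequest>& ways) {
        for (std::size_t i = 0; i < ways.size(); ++i) {
            ways[i].channel = static_cast<int>(i);
        }
    }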

S120, determining a target cache unit according to the current remaining cache space of each cache unit.

The cache unit is a part of a cache, and each cache includes a cache space for storing data. This embodiment does not limit the type of the cache; the cache may be, for example, a USB flash drive, a mobile hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk. The number of channels corresponding to the same cache may be set as needed; for example, a cache may correspond to 4 channels, which means the cache is used for storing the data of those 4 channels. Optionally, the cache unit may be obtained by dividing a cache space used for storing data. The embodiment does not specifically limit the dividing manner; for example, the cache space may be divided at coarse granularity or at fine granularity, where coarse granularity performs only horizontal division and keeps the cache space unchanged in the vertical direction, and fine granularity performs both horizontal and vertical division of the cache space. In this embodiment, each part obtained by horizontal division is referred to as a cache unit; for example, dividing the cache space horizontally into 8 parts means that the cache space includes 8 cache units.

Fig. 2 is a schematic diagram of cache partitioning according to an embodiment of the present application.

Fig. 2 shows a fine-grained division as an example: the cache includes 8 banks in the horizontal direction, bank0-bank7, and 2 blocks in the vertical direction, block0 and block1, where each bank represents a cache unit and each block represents a minimum access unit of the cache. The size of each bank and block can be determined from the size of the cache space and the numbers of banks and blocks. For example, if the depth of the cache space is 16k, the width is 384 bytes, the number of banks is 8 and the number of blocks is 2, then the depth of each bank is 2k and the width of each block is 192 bytes. Of course, the division is not limited to this; for example, the cache may instead be divided into 16 or 32 banks in the horizontal direction and 2 blocks in the vertical direction. As can be seen from Fig. 2, each channel corresponding to the cache can access any cache unit of the cache, so one channel can use the cache resources of another channel, and the maximum utilization of the cache resources is realized.
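As a minimal illustration of the fine-grained division in Fig. 2, the following C++ sketch derives the per-bank depth and per-block width from the example figures given above (16k depth, 384-byte width, 8 banks, 2 blocks); the constant names are assumptions used only for this sketch.

    #include <cstddef>
    #include <iostream>

    // Illustrative parameters taken from the example in the text.
    constexpr std::size_t kCacheDepth = 16 * 1024; // rows in the whole cache space
    constexpr std::size_t kCacheWidth = 384;       // bytes per row
    constexpr std::size_t kNumBanks   = 8;         // horizontal division -> cache units
    constexpr std::size_t kNumBlocks  = 2;         // vertical division -> minimum access units

    int main() {
        std::size_t bank_depth  = kCacheDepth / kNumBanks;  // 2k rows per bank
        std::size_t block_width = kCacheWidth / kNumBlocks; // 192 bytes per block
        std::cout << "bank depth: " << bank_depth
                  << " rows, block width: " << block_width << " bytes\n";
        return 0;
    }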

The current remaining cache space is the space of a cache unit that is not currently used for caching data. The target cache unit is the cache unit selected to store the data to be cached; for example, if the data to be cached is stored to bank0, bank0 is called the target cache unit of that data. To achieve balanced allocation and prevent some cache units from being over-used, which would keep multiple channels from writing simultaneously, that is, from caching data at the same time, and thus hurt overall performance, this embodiment determines the target cache unit for the data to be cached according to the size of the current remaining cache space of each cache unit. For example, a cache unit with a large current remaining cache space may be allocated to the corresponding channel as the target cache unit.

S130, caching the data to be cached in the data caching request to the target cache unit.

The cache units corresponding to different channels of the same cache may be the same or different. Optionally, when a data caching request corresponding to a single channel is received, the data of that channel may be stored directly in the target cache unit. When data caching requests corresponding to multiple channels are received, a target cache unit may be allocated to each channel according to the size of the data to be cached on each channel; for example, a target cache unit with a larger remaining cache space is allocated to a channel with a larger data amount, and a target cache unit with a smaller remaining cache space is allocated to a channel with a smaller data amount, so that the cache units remain balanced, maximum utilization of resources is achieved, and overall performance is improved.

The embodiment of the application provides a data caching method, which includes acquiring a data caching request, determining a target cache unit according to the current remaining cache space of each cache unit, and caching the data to be cached in the data caching request to the target cache unit. Compared with the prior art, this scheme determines the target cache unit according to the current remaining cache space of each cache unit, achieving maximum utilization of the cache space.

On the basis of the foregoing embodiment, optionally, the method further includes:

creating an address index, wherein the address index comprises a message information maintenance table and a message linked list;

the message information maintenance table is used for storing message packet information corresponding to the data to be cached;

the message linked list is used for storing message fragment information corresponding to the message packet.

The data to be cached in a data caching request is usually sent in the form of message packets, and each channel may include one packet or multiple packets. The message packet information is the information of the packets included in each channel, for example the number, length, head address and corresponding channel number of the packets. Each packet may include multiple message fragments; a message fragment can be regarded as a data fragment, and different fragments may be stored in blocks of different banks, or in different blocks of the same bank. The message fragment information is the information of the fragments belonging to the same packet, for example the address information of each fragment. The head address in the message packet information is the same as the head address in the message fragment information, so the addresses of all fragments of a packet can be obtained from the packet's head address.

Fig. 3 is a schematic diagram of an address index according to an embodiment of the present application.

The message information maintenance table in Fig. 3 exemplarily shows two channels, Channel0 and Channel1, each with four message packets pkt0, pkt1, pkt2 and pkt3, where 1st addr is the head address of each packet. The message linked list exemplarily gives the addresses of the fragments of one packet: the head address of a packet is obtained from the message information maintenance table, the addresses of the remaining fragments of that packet are obtained by searching the message linked list, and the complete packet is obtained by splicing the fragments together. Optionally, the fragment addresses on the same chain of the message linked list may be arranged from front to back according to the order of the fragments in the packet, which reduces the splicing time when the data needs to be read.
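A minimal data-structure sketch of the address index in Fig. 3 is given below; the type and field names are assumptions made for illustration, not part of the original design. The maintenance table keeps one entry per packet with its head address, and the linked list maps each fragment address to the address of the next fragment, so a packet can be reassembled by walking the chain from its head address.

    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    using Addr = uint32_t;                 // address of one message fragment
    constexpr Addr kNullAddr = 0xFFFFFFFF; // end-of-chain marker (illustrative)

    // One entry of the message information maintenance table (one message packet).
    struct PacketInfo {
        uint32_t packet_id;  // e.g. pkt0, pkt1, ...
        uint32_t length;     // total packet length
        Addr     head_addr;  // 1st addr: address of the first fragment
        uint32_t channel_id; // channel the packet belongs to
    };

    // Message linked list: fragment address -> address of the next fragment.
    using FragmentChain = std::unordered_map<Addr, Addr>;

    // Walk the chain from the packet's head address and collect all fragment
    // addresses in order, so the fragments can be read and spliced back into
    // the complete packet.
    std::vector<Addr> fragment_addresses(const PacketInfo& pkt, const FragmentChain& chain) {
        std::vector<Addr> addrs;
        for (Addr a = pkt.head_addr; a != kNullAddr;) {
            addrs.push_back(a);
            auto it = chain.find(a);
            a = (it == chain.end()) ? kNullAddr : it->second;
        }
        return addrs;
    }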

Fig. 4 is a flowchart of another data caching method according to an embodiment of the present application.

S210, obtaining a data caching request.

And S220, dividing the cache space for storing the data to obtain cache units.

By dividing the cache space, each channel can access individual cache units, which enables fragmented utilization of the cache space and improves the space utilization rate. The division process can refer to the foregoing and is not described again here.

S230, determining the size of the current remaining cache space of each cache unit, and sorting the cache units accordingly.

Each bank corresponds to a First-In First-Out (FIFO) memory, which is used for storing and managing the addresses of that bank. The depth of the FIFO memory is the same as the depth of the corresponding bank, and the FIFO memory is initially full, meaning that the corresponding bank stores no data. When data needs to be stored into a bank, an address is requested from the FIFO memory and the number of addresses in the FIFO memory decreases; when data is read out of the bank, the address is recycled back into the FIFO memory and the number of addresses increases. The fewer addresses remain in a FIFO memory, the smaller the remaining cache space of the corresponding bank, so the size of each bank's remaining cache space can be determined by monitoring the number of remaining addresses in each FIFO memory. When the cache space occupied by a channel is found to reach a set threshold, caching of further data through that channel is stopped to prevent overflow. The banks are then arranged from largest to smallest, or from smallest to largest, according to their current remaining cache space to obtain the sorted banks.
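A simplified software model of the per-bank free-address FIFO described above is sketched below; the real design is hardware, and the class and method names are assumptions for illustration. The FIFO starts full with every row address of the bank, an address is popped when data is written, pushed back when the data is read out, and the number of addresses left in the FIFO measures the bank's remaining cache space.

    #include <cstddef>
    #include <cstdint>
    #include <optional>
    #include <queue>

    // Software model of the free-address FIFO attached to one bank.
    class BankAddressFifo {
    public:
        explicit BankAddressFifo(uint32_t bank_depth) {
            // Initially full: every row address of the bank is free.
            for (uint32_t addr = 0; addr < bank_depth; ++addr) free_.push(addr);
        }

        // Apply for an address when data is stored into the bank.
        std::optional<uint32_t> allocate() {
            if (free_.empty()) return std::nullopt; // bank has no remaining space
            uint32_t addr = free_.front();
            free_.pop();
            return addr;
        }

        // Recycle an address after its data has been read out of the bank.
        void release(uint32_t addr) { free_.push(addr); }

        // Remaining cache space of the bank, measured in free addresses.
        std::size_t remaining() const { return free_.size(); }

    private:
        std::queue<uint32_t> free_;
    };

In such a model, a per-channel occupancy counter compared against the set threshold mentioned above would gate further allocations for that channel.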

And S240, determining a target cache unit according to the sorting result.

Take a cache corresponding to 4 channels as an example. For each channel, the data of the channel is cached to a certain block in a certain bank according to a destination address, which is the address corresponding to the target cache unit. Assume that the first four banks sorted from largest to smallest remaining cache space are bank_flag3, bank_flag2, bank_flag1 and bank_flag0. When a data caching request of one channel is received, the cache unit corresponding to bank_flag3 is taken as the target cache unit and allocated to that channel; when requests of two channels are received, the cache units corresponding to bank_flag3 and bank_flag2 are allocated to the two channels; when requests of three channels are received, the cache units corresponding to bank_flag3, bank_flag2 and bank_flag1 are allocated to the three channels; and when requests of all four channels are received, the cache units corresponding to bank_flag3, bank_flag2, bank_flag1 and bank_flag0 are allocated to the four channels. By preferentially scheduling write access to the bank with the largest remaining cache space, the cache space of the banks stays balanced, which facilitates maximum utilization of resources and improves the overall performance of the system.
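The scheduling just described can be sketched as follows; this is a simplified software model, and the names BankState and allocate_target_banks are assumptions for illustration. The banks are sorted by remaining space in descending order, and when several channels request writes in the same cycle, the banks with the most remaining space are handed out as target cache units.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct BankState {
        int         bank_id;   // which bank
        std::size_t remaining; // free addresses reported by the bank's FIFO memory
    };

    // Sort banks by remaining cache space, largest first, and allocate the
    // top num_requests banks as target cache units for the simultaneously
    // requesting channels (bank_flag3, bank_flag2, ... in the text).
    std::vector<int> allocate_target_banks(std::vector<BankState> banks,
                                           std::size_t num_requests) {
        std::sort(banks.begin(), banks.end(),
                  [](const BankState& a, const BankState& b) {
                      return a.remaining > b.remaining;
                  });
        std::vector<int> targets;
        for (std::size_t i = 0; i < num_requests && i < banks.size(); ++i) {
            targets.push_back(banks[i].bank_id);
        }
        return targets;
    }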

S250, caching the data to be cached in the data caching request to the target cache unit.

And S260, updating the message information maintenance table and the message linked list so as to read the data to be cached according to the updated message information maintenance table and the updated message linked list.

When data is cached, the address corresponding to the target cache unit is synchronized to the message linked list, and after the message packet corresponding to the data has been completely stored, the message information maintenance table of the corresponding channel is updated, thereby keeping the message linked list and the message information maintenance table up to date. When a data reading request is received, the addresses of the message fragments are obtained by searching the message linked list and the message information maintenance table, and the message packet is then obtained by reading the cache. It should be noted that the principle of no address conflict should be followed when reading cached data. An address conflict is the situation in which two or more access sources access the same address at the same time; an access source is the hardware or software entitled to read a given cache.
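A hedged sketch of this bookkeeping is given below, reusing the illustrative types from the earlier address-index sketch (redeclared so the fragment stands alone); the function names are assumptions, not the original implementation. Each fragment address is linked behind the previous fragment of the same packet as it is written, and once the last fragment has been stored the packet's entry is added to the maintenance table of its channel, after which the packet can be read back.

    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    using Addr = uint32_t;
    constexpr Addr kNullAddr = 0xFFFFFFFF;

    struct PacketInfo {   // one row of the message information maintenance table
        uint32_t packet_id;
        uint32_t length;
        Addr     head_addr;
        uint32_t channel_id;
    };

    struct AddressIndex {
        std::unordered_map<Addr, Addr>                        chain; // fragment -> next fragment
        std::unordered_map<uint32_t, std::vector<PacketInfo>> table; // channel id -> packets
    };

    // Called for every fragment as it is written to its target cache unit;
    // prev_addr is kNullAddr for the first fragment of a packet.
    void on_fragment_written(AddressIndex& idx, Addr addr, Addr prev_addr) {
        idx.chain[addr] = kNullAddr;                       // new tail of the chain
        if (prev_addr != kNullAddr) idx.chain[prev_addr] = addr;
    }

    // Called once the last fragment of a packet has been stored: the packet
    // becomes visible in the maintenance table of its channel and can be read.
    void on_packet_complete(AddressIndex& idx, const PacketInfo& pkt) {
        idx.table[pkt.channel_id].push_back(pkt);
    }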

Fig. 5 is a schematic diagram of reading cache data according to an embodiment of the present application.

Assume there are three read ports that can read the data of a given cache: p0, p1 and p2, where read port p0 is used for reading the data of block0, read port p1 for reading the data of block1, and read port p2 for reading the data of block0 and block1. The access sources that may read the cached data are, for example, an Output Direct Memory Access (ODMA) engine, a Search Engine (SE), a Central Processing Unit (CPU), and a Programmable Processing Unit (PPU). The ODMA may read one block, such as block0 or block1, or may read block0 and block1 at the same time. The SE and the CPU can each read only one block, such as block0 or block1. The PPU may read block0 and block1 simultaneously. The ODMA, SE and CPU share read port p0 and read port p1, with priority decreasing in the order ODMA, PPU, SE, CPU, and the PPU exclusively uses read port p2. When multiple access sources initiate read access to the same read port, the port should preferentially be allocated to the access source with the highest priority, provided the addresses do not conflict.

Referring to Fig. 5, for read port p0, the access sources that may participate in allocation are the ODMA, the SE and the CPU. The ODMA has the highest priority, so when the ODMA initiates a read of block0, read port p0 is preferentially allocated to the ODMA. The SE obtains the right to the read port when no ODMA initiates a read of block0, and either no PPU initiates a read or the block0 read by the PPU has no address conflict. The CPU obtains the right to the read port when no ODMA initiates a read of block0; when either no SE initiates a read of block0, or an SE initiates a read of block0 whose address conflicts with the block0 read by the PPU; and when either no PPU initiates a read or the block0 read by the PPU has no address conflict.

For read port p1, the access sources that may participate in allocation are the ODMA, the SE and the CPU. The ODMA has the highest priority, so when the ODMA initiates a read of block1, read port p1 is preferentially allocated to the ODMA. The SE obtains the right to the read port when no ODMA initiates a read of block1, and either no PPU initiates a read or the block1 read by the PPU has no address conflict. The CPU obtains the right to the read port when no ODMA initiates a read of block1; when either no SE initiates a read of block1, or an SE initiates a read of block1 whose address conflicts with the block1 read by the PPU; and when either no PPU initiates a read or the block1 read by the PPU has no address conflict.

For read port p2, there is no priority issue because the PPU uses it exclusively; however, when the ODMA initiates a read, block0 and block1 must have no address conflict.
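The arbitration for read port p0 can be sketched as follows; this is a simplified model under the assumptions stated above (among the sources sharing p0, ODMA has the highest priority, followed by SE and then CPU, and a lower-priority source is granted the port only when the higher-priority sources do not claim it and its block0 address does not collide with the PPU's concurrent block0 read on port p2). The enum and function names are illustrative only; the arbitration for p1 is symmetric with block1.

    #include <cstdint>

    enum class Source { ODMA, SE, CPU, None };

    struct ReadRequest {
        bool     valid = false; // does this source want to read block0 this cycle?
        uint32_t addr  = 0;     // block0 address it wants to read
    };

    // Simplified arbitration for read port p0 (block0). The ppu argument is the
    // PPU's concurrent block0 request on its own port p2; an address conflict
    // means two sources touch the same block0 address in the same cycle.
    Source arbitrate_p0(const ReadRequest& odma, const ReadRequest& se,
                        const ReadRequest& cpu, const ReadRequest& ppu) {
        auto conflicts_with_ppu = [&](const ReadRequest& r) {
            return ppu.valid && ppu.addr == r.addr;
        };

        if (odma.valid) return Source::ODMA;                        // highest priority
        if (se.valid && !conflicts_with_ppu(se)) return Source::SE; // next priority
        if (cpu.valid && !conflicts_with_ppu(cpu)) return Source::CPU;
        return Source::None;
    }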

Fig. 6 is a structural diagram of a data caching apparatus according to an embodiment of the present application. The apparatus may execute the data caching method of the foregoing embodiments. Referring to Fig. 6, the apparatus includes:

an obtaining module 31, configured to obtain a data caching request;

a determining module 32, configured to determine a target cache unit according to a current remaining cache space of each cache unit;

the caching module 33 is configured to cache the data to be cached in the data caching request to the target cache unit.

The embodiment of the application provides a data caching apparatus, which acquires a data caching request, determines a target cache unit according to the current remaining cache space of each cache unit, and caches the data to be cached in the data caching request to the target cache unit. Compared with the prior art, this scheme determines the target cache unit according to the current remaining cache space of each cache unit, achieving maximum utilization of the cache space.

On the basis of the foregoing embodiment, the determining module 32 is specifically configured to:

determining the size of the current remaining cache space of each cache unit, and sorting the cache units accordingly;

and determining a target cache unit according to the sorting result.

On the basis of the above embodiment, the apparatus further includes:

the system comprises a creating module, a sending module and a processing module, wherein the creating module is used for creating an address index, and the address index comprises a message information maintenance table and a message linked list;

the message information maintenance table is used for storing message packet information corresponding to the data to be cached;

the message linked list is used for storing message fragment information corresponding to the message packet.

On the basis of the above embodiment, the apparatus further includes:

and the updating module is used for updating the message information maintenance table and the message linked list after the data to be cached in the data caching request is cached to the target cache unit, so as to read the data to be cached according to the updated message information maintenance table and the updated message linked list.

On the basis of the above embodiment, the apparatus further includes:

and the dividing module is used for dividing the cache space for storing the data to obtain the cache units before determining the target cache unit according to the current remaining cache space of each cache unit.

The data caching device provided by the embodiment of the application can execute the data caching method in the embodiment, and has corresponding functional modules and beneficial effects of the execution method.

Fig. 7 is a structural diagram of a chip according to an embodiment of the present application.

Referring to Fig. 7, the chip includes a processor 41 and a memory 42. The number of processors 41 in the chip may be one or more; one processor 41 is taken as an example in Fig. 7. The processor 41 and the memory 42 may be connected by a bus or in other ways; connection by a bus is taken as an example in Fig. 7.

The memory 42, as a computer-readable storage medium, can be used for storing software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the data caching method in the embodiments of the present application. The processor 41 runs the software programs, instructions and modules stored in the memory 42 to execute the various functional applications and data processing of the chip, that is, to implement the data caching method of the above-described embodiments.

The memory 42 mainly includes a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the terminal, and the like. Further, the memory 42 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory 42 may further include memory located remotely from the processor 41, which may be connected to the chip over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.

The chip provided by the embodiment of the present application and the data caching method provided by the above embodiment belong to the same concept, and the technical details that are not described in detail in the embodiment can be referred to the above embodiment, and the embodiment has the same beneficial effects as the data caching method.

Embodiments of the present application further provide a storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the data caching method according to the above embodiments of the present application is implemented.

Of course, the storage medium provided in the embodiments of the present application and containing computer-executable instructions is not limited to the operations in the data caching method described above, and may also perform related operations in the data caching method provided in any embodiment of the present application, and has corresponding functions and advantages.

From the above description of the embodiments, it will be clear to those skilled in the art that the present application can be implemented by software plus necessary general-purpose hardware, and certainly can also be implemented by hardware, but the former is the better implementation in many cases. Based on this understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a flash memory (FLASH), a hard disk or an optical disk of a computer, and includes several instructions to enable a computer device (which may be a robot, a personal computer, a server, or a network device) to execute the data caching method of the foregoing embodiments of the present application.

It should be noted that the foregoing describes only the preferred embodiments of the present application and the technical principles employed. Those skilled in the art will understand that the present application is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements and substitutions can be made without departing from the scope of protection of the application. Therefore, although the present application has been described in detail through the above embodiments, it is not limited to those embodiments and may include other equivalent embodiments without departing from the concept of the application; the scope of the application is determined by the scope of the appended claims.
