Method for accelerating reading of storage medium, reading acceleration hardware module and memory

Document No.: 809171    Publication date: 2021-03-26

Note: This technique, "Method for accelerating reading of storage medium, reading acceleration hardware module and memory", was designed and created by Chen Xiang, Cao Xueming, Yang Ying, Huang Peng and Yang Zhou on 2020-12-23. Abstract: The invention discloses a method for accelerating reading of a storage medium, a read acceleration hardware module and a memory. LBA information issued by the FE of the memory is received; a table look-up operation is performed based on a table look-up algorithm solidified in hardware to acquire the effective PMA information corresponding to the LBA information; based on an address conversion algorithm solidified in hardware, the effective PMA information is converted into NPA information, and the corresponding data is read out from the storage medium of the memory according to the NPA information. The processing mode of the FTL is thus abandoned and an algorithm solidified in hardware is adopted to process the LBA information to obtain the NPA information; experiments show that the read bandwidth of the host can be greatly improved, so that the data read in unit time increases significantly and the read performance is greatly improved.

1. A method of accelerating reading of a storage medium, comprising:

receiving LBA information issued by FE of a memory;

performing table look-up operation based on a table look-up algorithm solidified in hardware to acquire effective PMA information corresponding to the LBA information;

and converting the effective PMA information into NPA information based on an address conversion algorithm solidified in hardware, and reading corresponding data from a storage medium of the memory according to the NPA information.

2. A method of accelerating reading of a storage medium according to claim 1, where the step of performing a table lookup operation based on a table lookup algorithm implemented in hardware to obtain valid PMA information corresponding to the LBA information comprises:

acquiring PMA information corresponding to the LBA information by searching an L2P table for representing the mapping relation between the LBA information and the PMA information;

judging whether the PMA information exists in invalid PMA information or not by searching a trim table for representing the invalid PMA information corresponding to the erased data;

if yes, determining that the PMA information is invalid;

and if not, determining that the PMA information is valid for the first time.

3. A method for accelerating the reading of a storage medium according to claim 2, wherein the step of performing a table lookup operation based on a table lookup algorithm implemented in hardware to obtain valid PMA information corresponding to the LBA information further comprises:

after the PMA information is determined to be valid for the first time by searching the trim table, judging whether the PMA information exists in invalid PMA information corresponding to a corrupted data block by searching a remap table for representing the corrupted data block;

if yes, determining that the PMA information is invalid;

and if not, determining that the PMA information is valid for the second time.

4. A method for accelerating the reading of a storage medium according to claim 3, wherein the step of performing a table lookup operation based on a table lookup algorithm implemented in hardware to obtain the valid PMA information corresponding to the LBA information further comprises:

after the PMA information is determined to be valid for the second time by searching the remap table, judging whether the value of the PMA information is smaller than a preset maximum PMA value;

if yes, determining that the PMA information is finally valid;

and if not, determining that the PMA information is invalid.

5. A method for accelerating the reading of a storage medium according to claim 1, where the process of converting valid PMA information into NPA information based on an address conversion algorithm that is fixed in hardware, comprises:

and converting the effective PMA information into the NPA information according to the bit corresponding relation between the PMA information and the NPA information.

6. A method for accelerated reading of a storage medium according to claim 5, characterised in that said PMA information consists of, in order, SuperBlock information, SuperPage information, mau information; the NPA information sequentially comprises block information, page information, lun information, ce information, chan information and mauoff information;

correspondingly, the process of converting the effective PMA information into the NPA information according to the corresponding relationship between the PMA information and each bit of the NPA information comprises the following steps:

disassembling the PMA information according to bits to obtain SuperBlock information, superPage information and mau information;

multiplying the SuperBlock information by a preset coefficient value to obtain block information of the NPA information;

taking the superPage information as the page information of the NPA information;

and according to the bit corresponding relation between the mau information and lun information, ce information, chan information and mauoff information of the NPA information, correspondingly using the bit information of mau information as lun information, ce information, chan information and mauoff information of the NPA information.

7. A read acceleration hardware module, comprising:

the DB processing hardware module is used for triggering the algorithm processing hardware module to process the LBA information to obtain NPA information after receiving the LBA information issued by the FE of the memory;

an algorithm processing hardware module having a table lookup algorithm and an address translation algorithm embodied thereon for performing the steps of the method for accelerated reading of a storage medium according to any of claims 1-6 when the table lookup algorithm and the address translation algorithm are executed in sequence.

8. The read acceleration hardware module of claim 7, wherein the DB processing hardware module and the algorithm processing hardware module are integrated within a BE of the memory;

and the BE includes:

the FPH is respectively connected with the algorithm processing hardware module and a storage medium of the memory and is used for reading corresponding data from the storage medium according to the NPA information and transmitting the corresponding data to the algorithm processing hardware module;

an ADM connected to said algorithm processing hardware module and said FE, respectively, for sending back to said FE corresponding data read from said storage medium.

9. The read acceleration hardware module of claim 7 wherein the L2P table, trim table, and remap table required by the table lookup algorithm are stored in the DDR.

10. A memory comprising FE, BE, storage medium, DDR and a read acceleration hardware module as claimed in any one of claims 7 to 9.

Technical Field

The present invention relates to the field of data reading, and in particular, to a method for accelerating reading of a storage medium, a read acceleration hardware module, and a memory.

Background

With the development of the big data era, the requirement on data processing speed is higher and higher. The data processing process includes a series of operations such as data reading, data scanning and data analysis. For the data reading link, the conventional data reading flow in the prior art is shown in fig. 1: a host issues a read command to the FE (Front End) of a memory; the FE receives and parses the read command, and transmits the parsed information (including LBA (Logical Block Address) information) to an FTL (Flash Translation Layer); the FTL converts the LBA information into NPA (Nand Physical Address) information and sends the NPA information to the BE (Back End); after receiving the NPA information, the BE reads the data corresponding to the NPA information from the storage medium and sends the data back to the FE, so that the FE returns the data to the host, thereby completing the data reading flow. However, the conventional FTL has problems of high latency and low efficiency due to its processing method, resulting in reduced read performance.

Therefore, how to provide a solution to the above technical problem is a problem that needs to be solved by those skilled in the art.

Disclosure of Invention

The invention aims to provide a method for accelerating reading of a storage medium, a read acceleration hardware module and a memory, which abandon the processing mode of the FTL and process the LBA information with an algorithm solidified in hardware to obtain the NPA information; experiments show that this can greatly improve the read bandwidth of the host, so that the data read in unit time increases significantly and the read performance is greatly improved.

In order to solve the above technical problem, the present invention provides a method for accelerating reading of a storage medium, comprising:

receiving LBA information issued by FE of a memory;

performing table look-up operation based on a table look-up algorithm solidified in hardware to acquire effective PMA information corresponding to the LBA information;

and converting the effective PMA information into NPA information based on an address conversion algorithm solidified in hardware, and reading corresponding data from a storage medium of the memory according to the NPA information.

Preferably, the process of performing a table lookup operation based on a table lookup algorithm solidified in hardware to obtain valid PMA information corresponding to the LBA information includes:

acquiring PMA information corresponding to the LBA information by searching an L2P table for representing the mapping relation between the LBA information and the PMA information;

judging whether the PMA information exists in invalid PMA information or not by searching a trim table for representing the invalid PMA information corresponding to the erased data;

if yes, determining that the PMA information is invalid;

and if not, determining that the PMA information is valid for the first time.

Preferably, the process of performing table lookup operation based on a table lookup algorithm solidified in hardware to obtain valid PMA information corresponding to the LBA information further includes:

after the PMA information is determined to be valid for the first time by searching the trim table, judging whether the PMA information exists in invalid PMA information corresponding to a corrupted data block by searching a remap table for representing the corrupted data block;

if yes, determining that the PMA information is invalid;

and if not, determining that the PMA information is valid for the second time.

Preferably, the process of performing table lookup operation based on a table lookup algorithm solidified in hardware to obtain valid PMA information corresponding to the LBA information further includes:

after the PMA information is determined to be valid for the second time by searching the remap table, judging whether the value of the PMA information is smaller than a preset maximum PMA value;

if yes, determining that the PMA information is finally valid;

and if not, determining that the PMA information is invalid.

Preferably, the process of converting the effective PMA information into NPA information based on an address conversion algorithm solidified in hardware comprises:

and converting the effective PMA information into the NPA information according to the bit corresponding relation between the PMA information and the NPA information.

Preferably, the PMA information sequentially consists of SuperBlock information, superPage information and mau information; the NPA information sequentially comprises block information, page information, lun information, ce information, chan information and mauoff information;

correspondingly, the process of converting the effective PMA information into the NPA information according to the corresponding relationship between the PMA information and each bit of the NPA information comprises the following steps:

disassembling the PMA information according to bits to obtain SuperBlock information, superPage information and mau information;

multiplying the SuperBlock information by a preset coefficient value to obtain block information of the NPA information;

taking the superPage information as the page information of the NPA information;

and according to the bit corresponding relation between the mau information and lun information, ce information, chan information and mauoff information of the NPA information, correspondingly using the bit information of mau information as lun information, ce information, chan information and mauoff information of the NPA information.

In order to solve the above technical problem, the present invention further provides a read acceleration hardware module, including:

the DB processing hardware module is used for triggering the algorithm processing hardware module to process the LBA information to obtain NPA information after receiving the LBA information issued by the FE of the memory;

and the algorithm processing hardware module solidified with a table look-up algorithm and an address conversion algorithm is used for realizing the steps of any method for accelerating the reading of the storage medium when the table look-up algorithm and the address conversion algorithm are executed in sequence.

Preferably, the DB processing hardware module and the algorithm processing hardware module are integrated within the BE of the memory;

and the BE includes:

the FPH is respectively connected with the algorithm processing hardware module and a storage medium of the memory and is used for reading corresponding data from the storage medium according to the NPA information and transmitting the corresponding data to the algorithm processing hardware module;

an ADM connected to said algorithm processing hardware module and said FE, respectively, for sending back to said FE corresponding data read from said storage medium.

Preferably, the L2P table, trim table and remap table required by the table lookup algorithm are stored in the DDR.

In order to solve the above technical problem, the present invention further provides a memory, which includes FE, BE, storage medium, DDR and any of the above read acceleration hardware modules.

The invention provides a method for accelerating the reading of a storage medium, which comprises: receiving the LBA information issued by the FE (Front End) of a memory; performing a table lookup operation based on a table lookup algorithm solidified in hardware, so as to acquire by table lookup the effective PMA information corresponding to the LBA information; and converting the effective PMA information into NPA information based on an address conversion algorithm solidified in hardware, so that the BE of the memory reads the corresponding data from the storage medium of the memory according to the NPA information and sends the corresponding data back to the FE. Therefore, the processing mode of the FTL is abandoned and the LBA information is processed by algorithms solidified in hardware to obtain the NPA information; experiments show that the read bandwidth of the host can be greatly improved, so that the data read in unit time increases significantly and the read performance is greatly improved.

The invention also provides a read acceleration hardware module and a memory, which have the same beneficial effects as the acceleration method.

Drawings

In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the prior art and the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.

FIG. 1 is a schematic diagram of a data reading process in the prior art;

FIG. 2 is a flowchart of a method for accelerating reading of a storage medium according to an embodiment of the present invention;

FIG. 3 is a flowchart illustrating a method for accelerating reading of a storage medium according to an embodiment of the present invention;

fig. 4 is a diagram illustrating a bit mapping relationship between PMA information and NPA information according to an embodiment of the present invention;

fig. 5 is a schematic structural diagram of a read acceleration hardware module according to an embodiment of the present invention.

Detailed Description

The core of the invention is to provide a method for accelerating reading of a storage medium, a read acceleration hardware module and a memory, which abandon the processing mode of the FTL and process the LBA information with an algorithm solidified in hardware to obtain the NPA information; experiments show that the read bandwidth of the host can be greatly improved, so that the data read in unit time increases significantly and the read performance is greatly improved.

In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

Referring to fig. 2, fig. 2 is a flowchart illustrating a method for accelerating reading of a storage medium according to an embodiment of the present invention.

The method for accelerating the reading of the storage medium comprises the following steps:

step S1: and receiving the LBA information issued by the FE of the memory.

It should be noted that the method for accelerating reading of the storage medium of the present application is implemented by a read acceleration hardware module (referred to as RACC).

Specifically, the host issues a read command (containing LBA information) to the FE of the memory; the FE of the memory analyzes the read command issued by the host to obtain the LBA information, and issues the LBA information to the RACC; and the RACC receives the LBA information issued by the FE of the memory and starts to enter the processing flow of the LBA information.

Step S2: and performing table lookup operation based on a table lookup algorithm solidified in hardware to acquire effective PMA information corresponding to the LBA information.

Specifically, a table lookup algorithm for acquiring the effective PMA (Physical Media Address) information corresponding to the LBA information is solidified in advance in the hardware of the RACC, so that after receiving the LBA information issued by the FE of the memory, the RACC performs a table lookup operation based on the table lookup algorithm solidified in the hardware, the aim being to acquire, by table lookup, the effective PMA information corresponding to the LBA information and then to enter the subsequent NPA address conversion process.

Step S3: converting the effective PMA information into NPA information based on an address conversion algorithm solidified in hardware, and reading the corresponding data from the storage medium of the memory according to the NPA information.

Specifically, the hardware of the RACC also has an address conversion algorithm for converting PMA information into NPA information solidified in it in advance. Therefore, after acquiring the effective PMA information, the RACC converts the effective PMA information into NPA information based on the address conversion algorithm solidified in the hardware, and then sends the NPA information to the BE of the memory. The BE of the memory reads the corresponding data from the storage medium of the memory according to the NPA information and sends it back to the FE of the memory, so that the FE of the memory returns the data read from the storage medium to the host, and the data reading flow is completed.
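To make the division of work in steps S1 to S3 concrete, the following minimal C sketch shows the control flow only. The helper names (racc_lookup_valid_pma, racc_pma_to_npa, be_read_npa) and types are illustrative assumptions and do not come from the patent; the helpers are merely declared here, their logic being the table lookup and address conversion algorithms described in the embodiments below.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t lba_t;   /* logical block address from the host/FE */
typedef uint32_t pma_t;   /* physical media address                 */
typedef uint64_t npa_t;   /* NAND physical address                  */

/* Hypothetical helpers, declared only so the sketch is complete. */
bool  racc_lookup_valid_pma(lba_t lba, pma_t *pma);   /* step S2: table look-up   */
npa_t racc_pma_to_npa(pma_t pma);                     /* step S3: address convert */
int   be_read_npa(npa_t npa, void *buf, size_t len);  /* BE reads the medium      */

/* Steps S1-S3 in sequence: LBA from the FE -> valid PMA -> NPA -> media read. */
int racc_handle_read(lba_t lba, void *buf, size_t len)
{
    pma_t pma;
    if (!racc_lookup_valid_pma(lba, &pma))
        return -1;                      /* trimmed, remapped or out-of-range PMA */
    return be_read_npa(racc_pma_to_npa(pma), buf, len);
}
```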

In addition, the read command issued by the host to the FE of the memory may further include information such as namespaceId (namespace ID), portId (port ID) and dataFormat (format of the read data), and the FE of the memory issues the LBA information together with the namespaceId, portId and dataFormat information to the RACC. After processing the LBA information into NPA information, the RACC sends the NPA information together with the namespaceId, portId and dataFormat information to the BE of the memory, and the BE of the memory reads the corresponding data from the storage medium of the memory and sends it back to the FE of the memory according to the NPA information and the namespaceId, portId and dataFormat information, so that the reading speed is accelerated while multiple namespaces, multiple data formats and multiple ports are supported at the same time.

The invention provides a method for accelerating the reading of a storage medium, which comprises: receiving the LBA information issued by the FE (Front End) of a memory; performing a table lookup operation based on a table lookup algorithm solidified in hardware, so as to acquire by table lookup the effective PMA information corresponding to the LBA information; and converting the effective PMA information into NPA information based on an address conversion algorithm solidified in hardware, so that the BE of the memory reads the corresponding data from the storage medium of the memory according to the NPA information and sends the corresponding data back to the FE. Therefore, the processing mode of the FTL is abandoned and the LBA information is processed by algorithms solidified in hardware to obtain the NPA information; experiments show that the read bandwidth of the host can be greatly improved, so that the data read in unit time increases significantly and the read performance is greatly improved.

On the basis of the above-described embodiment:

referring to fig. 3, fig. 3 is a flowchart illustrating a method for accelerating reading of a storage medium according to an embodiment of the present invention.

As an alternative embodiment, the process of performing table lookup operation based on a table lookup algorithm solidified in hardware to obtain valid PMA information corresponding to LBA information includes:

acquiring PMA information corresponding to the LBA information by searching an L2P table for representing the mapping relation between the LBA information and the PMA information;

judging whether the PMA information exists in the invalid PMA information by searching a trim table for representing the invalid PMA information corresponding to the erased data;

if yes, determining that the PMA information is invalid;

if not, determining that the PMA information is valid for the first time.

Specifically, the present application provides an L2P (Logical to Physical) table for indicating the mapping relation between LBA information and PMA information, that is, the RACC can find the PMA information corresponding to the LBA information by looking up the L2P table.

Meanwhile, considering that data stored in the memory may be erased by the user, the PMA information corresponding to the erased data is invalid, should be filtered out, and should not enter the subsequent NPA address conversion process. The present application therefore provides a trim table for indicating the invalid PMA information corresponding to erased data, which may specifically be as follows: each bit of the trim table indicates whether one piece of PMA information is valid, for example 0 indicates that the corresponding PMA information is invalid and 1 indicates that it is valid. Based on this, after the RACC finds the PMA information corresponding to the LBA information by searching the L2P table, it judges whether the found PMA information exists in the invalid PMA information contained in the trim table by searching the trim table. If it does, the found PMA information is invalid, is filtered out, and does not enter the subsequent NPA address conversion process; if it does not, the found PMA information is valid for the first time and, if no other problem exists, can enter the subsequent NPA address conversion process.
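As a concrete illustration of this first look-up stage, the following C sketch performs the L2P look-up and the trim-table check. It assumes the trim table is a bitmap with one bit per PMA (1 = valid, 0 = erased), as described above; the function names, the sentinel value and the idea of passing the DDR-resident tables in as plain arrays are assumptions made for the sketch, not details fixed by the patent.

```c
#include <stdbool.h>
#include <stdint.h>

#define PMA_FILTERED 0xFFFFFFFFu   /* assumed sentinel for "invalid, filter out" */

/* L2P look-up: the table maps each LBA to its PMA. */
static uint32_t l2p_lookup(const uint32_t *l2p_table, uint64_t lba)
{
    return l2p_table[lba];
}

/* Trim check: one bit per PMA, 1 = valid, 0 = data erased by the host. */
static bool trim_is_valid(const uint8_t *trim_bitmap, uint32_t pma)
{
    return (trim_bitmap[pma >> 3] >> (pma & 7u)) & 1u;
}

/* First stage of the table look-up algorithm (claim 2): L2P, then trim. */
uint32_t lookup_pma_first_stage(const uint32_t *l2p_table,
                                const uint8_t *trim_bitmap,
                                uint64_t lba)
{
    uint32_t pma = l2p_lookup(l2p_table, lba);
    if (!trim_is_valid(trim_bitmap, pma))
        return PMA_FILTERED;            /* erased: do not enter NPA conversion */
    return pma;                         /* "valid for the first time" */
}
```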

As an alternative embodiment, the process of performing table lookup operation based on a table lookup algorithm solidified in hardware to obtain valid PMA information corresponding to LBA information further includes:

after the PMA information is determined to be valid for the first time by searching the trim table, judging whether the PMA information exists in invalid PMA information corresponding to a corrupted data block by searching a remap table for representing the corrupted data block;

if yes, determining that the PMA information is invalid;

if not, determining that the PMA information is valid for the second time.

Further, considering that a data block used for storing data in the memory may be corrupted, the PMA information corresponding to the corrupted data block is invalid, should be filtered out, and should not enter the subsequent NPA address conversion process; the present application therefore provides a remap table for indicating the corrupted data blocks. Based on this, after the RACC determines that the PMA information is valid for the first time by searching the trim table, it judges, by searching the remap table, whether the first-time-valid PMA information exists in the invalid PMA information corresponding to the corrupted data blocks. If it does, the first-time-valid PMA information is invalid, is filtered out, and does not enter the subsequent NPA address conversion process; if it does not, the first-time-valid PMA information is still valid and, if no other problem exists, can enter the subsequent NPA address conversion process.
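A short C sketch of this second check follows. It treats the remap table as a bitmap with one bit per SuperBlock (1 = corrupted data block); indexing it by the SuperBlock field of the PMA, and the assumed position of that field, are illustrative choices, since the patent only states that the remap table represents the corrupted data blocks.

```c
#include <stdbool.h>
#include <stdint.h>

#define SUPERBLOCK_SHIFT 18u   /* assumed bit offset of the SuperBlock field in the PMA */

/* Remap check: is the SuperBlock containing this PMA marked as corrupted? */
static bool remap_is_bad(const uint8_t *remap_bitmap, uint32_t pma)
{
    uint32_t sb = pma >> SUPERBLOCK_SHIFT;
    return (remap_bitmap[sb >> 3] >> (sb & 7u)) & 1u;
}

/* Second stage of the table look-up algorithm (claim 3). */
bool pma_valid_second_time(const uint8_t *remap_bitmap, uint32_t pma)
{
    return !remap_is_bad(remap_bitmap, pma);   /* corrupted block: filter out */
}
```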

As an alternative embodiment, the process of performing table lookup operation based on a table lookup algorithm solidified in hardware to obtain valid PMA information corresponding to LBA information further includes:

after the PMA information is determined to be valid for the second time by searching the remap table, judging whether the value of the PMA information is smaller than a preset maximum PMA value or not;

if yes, determining that the PMA information is finally effective;

if not, determining that the PMA information is invalid.

Further, considering that the value of the PMA information has a maximum value, if the value of the PMA information found through the L2P table is greater than this maximum value, the found PMA information is abnormal, i.e. invalid, should be filtered out, and should not enter the subsequent NPA address conversion process. The present application therefore sets the maximum PMA value in advance and reasonably according to the actual situation; for example, when the PMA information is 32 bits wide, the maximum PMA value is set to maxU32 - 5 (maxU32 is the decimal value of the 32-bit binary maximum; the result of subtracting 5 from it is taken as the maximum PMA value, the 5 values above it being reserved for special PMAs such as UNMAP, UNC, DEBUG, INVALID and TRIM; the number of reserved values can be adjusted according to the actual situation). Based on this, after the RACC determines that the PMA information is valid for the second time by searching the remap table, it judges whether the value of the second-time-valid PMA information is smaller than the preset maximum PMA value. If it is not, the second-time-valid PMA information is invalid, is filtered out, and does not enter the subsequent NPA address conversion process; if it is, the second-time-valid PMA information is finally valid, and the subsequent NPA address conversion process can be entered directly.
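The final range check is simple enough to state directly in C; following the 32-bit example above, the preset maximum PMA value is taken here as maxU32 - 5, the five values above it being reserved for the special PMAs (UNMAP, UNC, DEBUG, INVALID, TRIM).

```c
#include <stdbool.h>
#include <stdint.h>

/* Preset maximum PMA value for 32-bit PMAs: maxU32 minus 5 reserved codes. */
#define PMA_MAX_VALUE (UINT32_MAX - 5u)

/* Third stage of the table look-up algorithm (claim 4): range check. */
bool pma_valid_final(uint32_t pma)
{
    return pma < PMA_MAX_VALUE;   /* at or above the maximum: abnormal, filter out */
}
```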

As an alternative embodiment, the process of converting the effective PMA information into NPA information based on the address conversion algorithm solidified in hardware includes:

and converting the effective PMA information into the NPA information according to the bit corresponding relation between the PMA information and the NPA information.

Specifically, considering that there is a certain correspondence between the PMA information and each bit of the NPA information, the RACC may convert the effective PMA information into the NPA information according to the correspondence between the PMA information and each bit of the NPA information.

As an alternative example, the PMA information is composed, in sequence, of SuperBlock information, SuperPage information and mau information (mau refers to the media AU, i.e. the minimum unit of the medium); the NPA information sequentially comprises block information, page information, lun information (lun refers to a logical unit number), ce information (ce refers to chip selection information), chan information (chan refers to a channel) and mauoff information;

correspondingly, the process of converting the effective PMA information into the NPA information according to the corresponding relationship between the PMA information and each bit of the NPA information comprises the following steps:

disassembling the PMA information according to bits to obtain SuperBlock information, superPage information and mau information;

multiplying the SuperBlock information by a preset coefficient value to obtain block information of the NPA information;

using the superPage information as the page information of the NPA information;

and according to the bit corresponding relation between the mau information and the lun information, the ce information, the chan information and the mauoff information of the NPA information, corresponding the bit information of mau information as the lun information, the ce information, the chan information and the mauoff information of the NPA information.

Specifically, as shown in fig. 4, the PMA information sequentially comprises SuperBlock information, SuperPage information and mau information, the NPA information sequentially comprises block information, page information, lun information, ce information, chan information and mauoff information, and the bits of the two have a certain correspondence. The block information value of the NPA information equals the SuperBlock information value of the PMA information multiplied by a preset coefficient value (the preset coefficient value is related to the NAND grain: multiplying by it converts the SuperBlock id in the PMA into the block id of the actual physical position in the NAND, and the value differs for different grains). The page information value of the NPA information equals the SuperPage information value of the PMA information. The bits of the mau information of the PMA information map to the lun information, ce information, chan information and mauoff information of the NPA information; in the example of fig. 4 the mauoff information occupies 3 bits, the ce information and the chan information each occupy 2 bits, and the lun information occupies 1 bit (the number of bits occupied by these pieces of information is not limited thereto and may be configured according to the actual grain requirements), so that the mauoff information value is the value composed of bits 2, 1 and 0 of the mau information, the chan information value is the value composed of bits 4 and 3, the ce information value is the value composed of bits 6 and 5, and the lun information value is the value of bit 7.

Based on this, the flow for converting the PMA information into the NPA information is as follows. First, disassemble the PMA information by bits to obtain the SuperBlock information, SuperPage information and mau information. Multiply the SuperBlock information by the preset coefficient value to obtain the block information of the NPA information, and take the SuperPage information as the page information of the NPA information. Then, according to the bit correspondence between the mau information and the lun information, ce information, chan information and mauoff information of the NPA information, obtain those fields from the mau information by shift processing: take the lowest bits of the mau information, equal in number to the bits occupied by the mauoff information, as the mauoff information value (in fig. 4 the mauoff information occupies 3 bits, so bits 2, 1 and 0 of the mau information form the mauoff information value); shift the mau information right by that number of bits and take the new lowest bits, equal in number to the bits occupied by the chan information, as the chan information value (in fig. 4 the chan information occupies 2 bits, i.e. original bits 4 and 3 of the mau information); shift right again and take the next lowest bits as the ce information value (original bits 6 and 5); and shift right once more and take the remaining bits as the lun information value (original bit 7). The NPA information is thus obtained.
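The conversion flow just described maps naturally onto a few shifts and masks. The C sketch below follows the bit layout of the worked example around fig. 4 (mauoff 3 bits, chan 2 bits, ce 2 bits, lun 1 bit); the SuperPage width and the preset coefficient value are assumptions for illustration and, as the text notes, would in practice be configured per NAND grain.

```c
#include <stdint.h>

#define MAUOFF_BITS     3u
#define CHAN_BITS       2u
#define CE_BITS         2u
#define MAU_BITS        8u    /* lun(1) + ce(2) + chan(2) + mauoff(3)               */
#define SUPERPAGE_BITS  10u   /* assumed width of the SuperPage field               */
#define BLOCK_COEFF     4u    /* assumed preset coefficient value (grain-dependent) */

/* NPA fields: block, page, lun, ce, chan, mauoff. */
typedef struct {
    uint32_t block, page, lun, ce, chan, mauoff;
} npa_info_t;

/* PMA (low to high bits: mau | SuperPage | SuperBlock) -> NPA. */
npa_info_t pma_to_npa(uint32_t pma)
{
    npa_info_t npa;

    /* Disassemble the PMA by bits into SuperBlock, SuperPage and mau. */
    uint32_t mau         = pma & ((1u << MAU_BITS) - 1u);
    uint32_t super_page  = (pma >> MAU_BITS) & ((1u << SUPERPAGE_BITS) - 1u);
    uint32_t super_block = pma >> (MAU_BITS + SUPERPAGE_BITS);

    npa.block = super_block * BLOCK_COEFF;   /* block = SuperBlock x coefficient */
    npa.page  = super_page;                  /* page  = SuperPage                */

    /* Peel lun/ce/chan/mauoff off the mau field by successive right shifts. */
    npa.mauoff = mau & ((1u << MAUOFF_BITS) - 1u);  mau >>= MAUOFF_BITS;
    npa.chan   = mau & ((1u << CHAN_BITS) - 1u);    mau >>= CHAN_BITS;
    npa.ce     = mau & ((1u << CE_BITS) - 1u);      mau >>= CE_BITS;
    npa.lun    = mau;                               /* remaining bit(s): lun     */

    return npa;
}
```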

In summary, compared with the conventional method, the method for accelerating the reading of the storage medium of the present application significantly increases the read bandwidth of the host. With the CPU running at 5M, the measured data are as follows: with the traditional processing of host reads, the measured bandwidth is 2000 KiB/s; with the method for accelerating the reading of the storage medium of the present application, the measured bandwidth is 3999 KiB/s.

Referring to fig. 5, fig. 5 is a schematic structural diagram of a read acceleration hardware module according to an embodiment of the present invention.

The read acceleration hardware module includes:

the DB processing hardware module 1 is used for triggering the algorithm processing hardware module 2 to process the LBA information to obtain the NPA information after receiving the LBA information issued by the FE of the memory;

and the algorithm processing hardware module 2 solidified with the table look-up algorithm and the address conversion algorithm is used for realizing any one of the steps of the method for accelerating the reading of the storage medium when the table look-up algorithm and the address conversion algorithm are executed in sequence.

Specifically, the read acceleration hardware module (referred to as RACC) of the present application includes a DB (DoorBell) processing hardware module 1 and an algorithm processing hardware module 2. After receiving the LBA information issued by the FE of the memory, the DB processing hardware module 1 triggers the algorithm processing hardware module 2 to perform the address processing work, and the algorithm processing hardware module 2 processes the LBA information to obtain the NPA information; for the specific processing principle, refer to the above embodiments of the method for accelerating the reading of the storage medium, which are not repeated here.

As an alternative embodiment, the DB processing hardware module 1 and the algorithm processing hardware module 2 are integrated within the BE of the memory;

and the BE includes:

the FPH is respectively connected with the algorithm processing hardware module 2 and the storage medium of the memory and is used for reading corresponding data from the storage medium according to the NPA information and transmitting the corresponding data to the algorithm processing hardware module 2;

an ADM, connected to the algorithm processing hardware module 2 and to the FE respectively, is used to send back to the FE the corresponding data read from the storage medium.

Specifically, as shown in fig. 5, the RACC of the present application may be integrated into the BE of the memory, where the BE of the memory includes an ADM (Advanced Data Management) and an FPH (Flash Protocol Handler). The BE of the memory reads the corresponding data from the storage medium according to the NPA information through the FPH and transfers the data to the algorithm processing hardware module 2, and the ADM sends the corresponding data read from the storage medium back to the FE of the memory, so that the FE of the memory returns the corresponding data read from the storage medium to the host.

If the FPH finds, when reading data from the storage medium, that a UNC (uncorrectable error) exists in the read data, the data is not returned to the algorithm processing hardware module 2 and a data read exception is confirmed. In addition, an unmap error may also exist, i.e. no data has been written at the position of the storage medium corresponding to the converted NPA information; in this case the data reading flow cannot be performed and a data read exception is likewise confirmed.

As an alternative embodiment, the L2P table, the trim table, and the remap table required by the table lookup algorithm are stored in the DDR.

Specifically, the L2P table, trim table and remap table required by the table lookup algorithm of the present application may be stored in a DDR (Double Data Rate) memory, and the algorithm processing hardware module 2 interacts with the DDR to complete the table lookup operations on the L2P table, the trim table and the remap table.

The application also provides a memory, which comprises FE, BE, a storage medium, DDR and any one of the read acceleration hardware modules.

For the introduction of the memory provided in the present application, reference is made to the above-mentioned embodiment of the read acceleration hardware module, which is not described herein again.

It is further noted that, in the present specification, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.

The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
