Meta information management method, solid state disk controller and solid state disk

Document No.: 1906609 Publication date: 2021-11-30 Views: 18 Original language: Chinese

Reading note: This technology, "Meta information management method, solid state disk controller and solid state disk", was designed and created by 方浩俊, 黄运新 and 杨亚飞 on 2021-08-23. Abstract: The embodiments of the present application relate to the field of solid state disk applications and disclose a meta information management method, a solid state disk controller, and a solid state disk. The method includes: determining one first data page and at least one second data page among the at least two data pages in each word line according to the original bit error rate of each data page in the word line, wherein the original bit error rate of the first data page is smaller than that of the second data page; and moving part of the meta information of the at least one second data page into the space corresponding to the meta information of the first data page, and using the space corresponding to the moved meta information in the second data page to increase the check data length, so that the total space occupied by the meta information of each word line remains unchanged. By applying different check data lengths to different data pages, the present application can improve the overall error correction capability of the solid state disk and extend its overall service life.

1. A meta information management method applied to a solid state disk, characterized in that the solid state disk comprises at least one word line, each word line comprises at least two data pages, and each data page comprises at least one error correction unit, wherein the error correction unit comprises: valid data and check data, the valid data comprising: user data and meta information, the method comprising:

determining a first data page and at least one second data page of the at least two data pages in each word line according to the original bit error rate of each data page in the word line, wherein the original bit error rate of the first data page is smaller than that of the second data page;

and moving partial meta information of at least one second data page to a space corresponding to the meta information of the first data page, and using the space corresponding to the moved meta information in the second data page to increase the length of the check data so that the total space size of the meta information corresponding to each word line is unchanged.

2. The method of claim 1, further comprising:

acquiring the data distribution of the original bit error rates of the first data page and the second data page, and acquiring a preset first error correction strength threshold and a preset second error correction strength threshold;

determining a first error correction strength corresponding to the first data page and a second error correction strength corresponding to the second data page according to the data distribution of the original bit error rates of the first data page and the second data page, the first error correction strength threshold and the second error correction strength threshold;

and determining a first adjustment interval of the check data length of the first data page and a second adjustment interval of the check data length of the second data page according to a preset error correction strength conversion rule.

3. The method of claim 2, wherein determining a first error correction strength corresponding to the first data page and a second error correction strength corresponding to the second data page comprises:

determining that a first error correction strength corresponding to the first data page is (a first error correction strength threshold + a redundancy value);

and determining that the second error correction strength corresponding to the second data page is (the error correction strength threshold of the general-type data page + (the error correction strength threshold of the general-type data page - the first error correction strength threshold - the redundancy value) / N), wherein N is the number of second data pages, and N is a positive integer.

4. The method of claim 2, wherein the error correction strength scaling rule comprises:

the number of correctable bits is equal to the number of check data bits divided by a preset coefficient.

5. The method of claim 4, wherein determining a first adjustment interval of the check data length of the first data page and a second adjustment interval of the check data length of the second data page according to a preset error correction strength scaling rule comprises:

calculating a first check data bit number corresponding to the first data page, and determining a first adjustment interval;

and calculating the bit number of second check data corresponding to the second data page, and determining a second adjustment interval.

6. The method of claim 2, further comprising:

and determining the increased check data length of the second data page and the decreased check data length of the first data page according to the first adjustment interval of the check data length of the first data page and the second adjustment interval of the check data length of the second data page.

7. The method of claim 5, further comprising:

determining a first check matrix combination corresponding to the first check data length according to the first check data length corresponding to the first data page;

determining a corresponding first operation command symbol according to the first check matrix combination so as to determine a first operation command symbol corresponding to a first data page;

and,

determining a second check matrix combination corresponding to the second check data length according to the second check data length corresponding to the second data page;

determining a corresponding second operation command symbol according to the second check matrix combination so as to determine a second operation command symbol corresponding to a second data page;

after the data page type is identified, issuing a corresponding operation command symbol to perform data read-write operation, wherein the data page type comprises a first data page or a second data page, and the operation command symbol comprises the first operation command symbol or the second operation command symbol.

8. The method according to any one of claims 1 to 7, wherein the meta information comprises a logical address, and the moving the partial meta information of the at least one second data page to a space corresponding to the meta information of the first data page comprises:

and moving the logical address of the at least one second data page to a space corresponding to the meta-information of the first data page.

9. A solid state hard disk controller, comprising:

at least one processor; and,

a memory communicatively coupled to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the meta-information management method of any one of claims 1-8.

10. The solid state hard disk controller of claim 9, further comprising:

the error correction engine is connected with the memory and the processor, and is used for performing error correction coding on the valid data to generate the corresponding check data when data is written; or for decoding the error correction unit and correcting the valid data when data is read;

and the flash memory controller is connected with the error correction engine and the flash memory array and is used for sending flash memory operation commands to the flash memory array.

11. A solid state disk, comprising:

a flash memory array comprising a plurality of dies, each die comprising a plurality of planes, each plane comprising a plurality of physical blocks, each physical block comprising a plurality of physical pages;

the solid state hard disk controller of claim 9 or 10.
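The strength allocation recited in claim 3 can be sketched as follows. This is a hedged illustration, not the patent's implementation: the function name, the numeric thresholds, and the use of a single "general-type" threshold for all pages are assumptions introduced for the example.

```python
# Hedged sketch of the strength allocation in claim 3: the strong (first) page
# keeps only its own threshold plus a redundancy margin, and the capability it
# gives up is shared evenly among the N weak (second) pages.
# All names and values are illustrative assumptions, not figures from the patent.

def allocate_strengths(generic_threshold, first_threshold, redundancy, n_second):
    # First page: first error correction strength threshold + redundancy value.
    first_strength = first_threshold + redundancy
    # Second pages: generic threshold plus an equal share of the freed budget.
    second_strength = generic_threshold + (
        generic_threshold - first_threshold - redundancy) / n_second
    return first_strength, second_strength

# TLC example: one strong page (LSB) and N = 2 weak pages (CSB, MSB).
first, second = allocate_strengths(100, 70, 10, 2)
# The word-line total is conserved: first + N * second == (N + 1) * generic.
```

Note that the conservation property holds algebraically for any inputs: (t1 + r) + N * (g + (g - t1 - r) / N) = (N + 1) * g, which matches claim 1's requirement that the total space per word line is unchanged.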

Technical Field

The present application relates to the field of solid state disk applications, and in particular, to a meta information management method, a solid state disk controller, and a solid state disk.

Background

A Solid State Drive (SSD) is a hard disk built from an array of solid-state electronic memory chips. It includes a control unit and a storage unit (FLASH memory chips or DRAM memory chips), and the size of the meta information affects the error correction capability of solid-state storage.

At present, the meta information corresponding to each logical block has the same size; to improve the error correction capability, the meta information is usually compressed as a whole, so that the overall data layout is adjusted to improve the error correction capability.

Based on the above problems, improvements in the prior art are needed.

Disclosure of Invention

The embodiment of the application provides a meta-information management method, a solid state disk controller and a solid state disk, so as to solve the technical problem that the overall service life of the existing solid state disk is short.

In order to solve the above technical problem, an embodiment of the present application provides the following technical solutions:

in a first aspect, an embodiment of the present application provides a meta-information management method, which is applied to a solid state disk, where the solid state disk includes at least one word line, each word line includes at least two data pages, and each data page includes at least one error correction unit, where the error correction unit includes: valid data and check data, the valid data comprising: user data and meta information, the method comprising:

determining a first data page and at least one second data page of the at least two data pages in each word line according to the original bit error rate of each data page in the word line, wherein the original bit error rate of the first data page is smaller than that of the second data page;

and moving partial meta information of at least one second data page to a space corresponding to the meta information of the first data page, and using the space corresponding to the moved meta information in the second data page to increase the length of the check data so that the total space size of the meta information corresponding to each word line is unchanged.

In a second aspect, an embodiment of the present application provides a solid state hard disk controller, including:

at least one processor; and,

a memory communicatively coupled to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of meta-information management as described in the first aspect.

In a third aspect, an embodiment of the present application provides a solid state disk, including:

a flash memory array comprising a plurality of dies, each die comprising a plurality of planes, each plane comprising a plurality of physical blocks, each physical block comprising a plurality of physical pages;

the solid state hard disk controller of the second aspect.

In a fourth aspect, the present application further provides a non-transitory computer-readable storage medium storing computer-executable instructions for enabling a solid state disk to perform the meta information management method described above.

The beneficial effects of the embodiments of the present application are as follows. In contrast to the prior art, an embodiment of the present application provides a meta information management method applied to a solid state disk, where the solid state disk includes at least one word line, each word line includes at least two data pages, and each data page includes at least one error correction unit, the error correction unit including: valid data and check data, the valid data including: user data and meta information, the method including: determining a first data page and at least one second data page of the at least two data pages in each word line according to the original bit error rate of each data page in the word line, wherein the original bit error rate of the first data page is smaller than that of the second data page; and moving part of the meta information of the at least one second data page to the space corresponding to the meta information of the first data page, and using the space corresponding to the moved meta information in the second data page to increase the check data length, so that the total space occupied by the meta information of each word line is unchanged. By applying different check data lengths to different data pages, the present application can improve the overall error correction capability of the solid state disk and extend its overall service life.

Drawings

One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals refer to similar elements, and the figures are not to scale unless otherwise specified.

Fig. 1 is a schematic structural diagram of a solid state disk provided in an embodiment of the present application;

fig. 2 is a schematic structural diagram of a solid-state hard disk controller according to an embodiment of the present application;

FIG. 3 is a schematic diagram of RBER distribution of data pages of TLC Flash according to an embodiment of the present application;

FIG. 4 is a schematic diagram of a data page provided by an embodiment of the present application;

FIG. 5 is a schematic diagram of a data page of a TLC Flash provided in an embodiment of the present application;

fig. 6 is a flowchart illustrating a meta information management method according to an embodiment of the present application;

FIG. 7 is a schematic diagram of meta information movement of a data page of TLC Flash provided in an embodiment of the present application;

FIG. 8 is a schematic diagram of an error correction threshold of a data page of TLC Flash according to an embodiment of the present application;

fig. 9 is a schematic flowchart of determining a first adjustment interval and a second adjustment interval according to an embodiment of the present application;

FIG. 10 is a flowchart illustrating issuing of an operation command according to an embodiment of the present application;

fig. 11 is a schematic diagram of another solid state disk provided in an embodiment of the present application;

FIG. 12 is a schematic diagram of an error correction engine generating error correction checking data according to an embodiment of the present application;

fig. 13 is a schematic flowchart of another meta-information management method provided in an embodiment of the present application;

FIG. 14 is a schematic illustration of an aggregation process provided by an embodiment of the present application;

FIG. 15 is a schematic diagram illustrating a storage manner of logical addresses according to an embodiment of the present application;

FIG. 16 is a schematic diagram of a relationship between a logical address and a page or a word line according to an embodiment of the present application;

fig. 17 is a detailed flowchart of step S131 in fig. 13;

FIG. 18 is a flow chart illustrating the storage of meta information provided by an embodiment of the present application;

FIG. 19 is a diagram illustrating a process for meta-information provided by an embodiment of the present application;

FIG. 20 is a schematic diagram of another meta-information processing provided by embodiments of the present application;

fig. 21 is a schematic structural diagram of a firmware system of a solid state hard disk controller according to an embodiment of the present application;

fig. 22 is a schematic structural diagram of a firmware system of another solid state hard disk controller according to an embodiment of the present application;

FIG. 23 is a schematic diagram of a refined structure of the first aggregation processing module in FIG. 22;

FIG. 24 is a schematic diagram of a refined structure of the second aggregation processing module of FIG. 22;

fig. 25 is a schematic diagram of an IO chain table according to an embodiment of the present application;

FIG. 26 is a schematic diagram illustrating an operation flow of a subset linked list according to an embodiment of the present application;

fig. 27 is a schematic diagram of a host writing data according to an embodiment of the present application.

Detailed Description

In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

In addition, the technical features mentioned in the embodiments of the present application described below may be combined with each other as long as they do not conflict with each other.

The technical solutions of the present application are described in detail below with reference to the drawings of the specification.

Referring to fig. 1, fig. 1 is a schematic structural diagram of a solid state disk according to an embodiment of the present disclosure.

As shown in fig. 1, the solid state disk 100 includes a flash memory medium 110 and a solid state disk controller 120 connected to the flash memory medium 110. The solid state disk 100 is in communication connection with the host 200 in a wired or wireless manner, so as to implement data interaction.

The flash memory medium 110 is the storage medium of the solid state disk 100 and is also called flash memory or flash granules. It is a type of storage device, specifically a nonvolatile memory that can retain data for a long time without power. Its storage characteristics are comparable to those of a hard disk, which makes it the basis of the storage media of various portable digital devices.

The solid state hard disk controller 120 includes a data converter 121, a processor 122, a buffer 123, a flash memory controller 124, and an interface 125.

The data converter 121 is connected to the processor 122 and the flash memory controller 124, and is configured to convert binary data into hexadecimal data and vice versa. Specifically, when the flash memory controller 124 writes data to the flash memory medium 110, the binary data to be written is converted into hexadecimal data by the data converter 121 and then written into the flash memory medium 110. When the flash memory controller 124 reads data from the flash memory medium 110, the hexadecimal data stored in the flash memory medium 110 is converted into binary data by the data converter 121, and the converted data is then read from the binary data page register. The data converter 121 may include a binary data register and a hexadecimal data register; the binary data register may store data converted from hexadecimal to binary, and the hexadecimal data register may store data converted from binary to hexadecimal.

The processor 122 is connected to the data converter 121, the buffer 123, the flash memory controller 124 and the interface 125, respectively; the processor 122, the data converter 121, the buffer 123, the flash memory controller 124 and the interface 125 may be connected by a bus or in other ways. The processor is configured to run the nonvolatile software programs, instructions and modules stored in the buffer 123, so as to implement any of the method embodiments of the present application.

The buffer 123 is mainly used for buffering read/write commands sent by the host 200 and read data or write data acquired from the flash memory 110 according to the read/write commands sent by the host 200.

The flash memory controller 124 is connected to the flash memory medium 110, the data converter 121, the processor 122 and the buffer 123, and is used for accessing the flash memory medium 110 at the back end and managing various parameters and the data I/O of the flash memory medium 110; or for providing an access interface and protocol, implementing the corresponding SAS/SATA target protocol end or NVMe protocol end, acquiring the I/O instructions sent by the host 200, decoding them, and generating internal private data results waiting for execution; or serving as the core processing module responsible for the FTL (Flash Translation Layer).

The interface 125 is connected to the host 200, the data converter 121, the processor 122 and the buffer 123, and is configured to receive data sent by the host 200 or data sent by the processor 122, so as to implement data transmission between the host 200 and the processor 122. The interface 125 may be a SATA-2 interface, a SATA-3 interface, an SAS interface, an mSATA interface, a PCI-E interface, an NGFF interface, a CFast interface, an SFF-8639 interface, or an M.2 NVMe/SATA interface.

Referring to fig. 2 again, fig. 2 is a schematic structural diagram of a solid state hard disk controller according to an embodiment of the present disclosure; the solid state hard disk controller is part of the above solid state disk.

As shown in fig. 2, the solid state hard disk controller 120 includes: PCIe interface controller 126, DDR controller 127, NVMe interface controller 128, processor 122, peripheral module 129, datapath module 1210, and flash controller 124.

Specifically, the PCIe interface controller 126 is configured to control the PCIe communication protocol, the DDR controller 127 is configured to control the dynamic random access memory, the NVMe interface controller 128 is configured to control the NVMe communication protocol, the peripheral module 129 is configured to control other related communication protocols, and the data path module 1210 is configured to control the data path, for example managing the write cache; the flash memory controller 124 is used for the data processing of the flash memory.

It is understood that a flash memory (NAND Flash) is a nonvolatile storage medium in which electrons are stored in a cell; the number of stored electrons is represented as a voltage value, and the voltage range can be divided into multiple regions. If only one bit is stored per cell (such Flash is called SLC), 2 states are needed; if 2 bits are stored (MLC), 4 states are needed; if 3 bits are stored (TLC), 8 states are needed; and so on, storing N bits per cell requires 2 to the power of N states. Generally, the N bits in a cell are encoded and distributed into N data pages, which appear externally as different data pages. For example, MLC Flash includes two data pages, LSB and MSB, and TLC Flash includes three data pages, LSB, CSB and MSB.

Generally, flash memory (NAND Flash) errors are mainly caused by changes in cell voltage due to electron leakage: the distribution region of a cell shifts, causing read misjudgments, that is, error bits. Since different cells are in different states, shifts in the distribution regions affect different data pages differently; viewed from outside the flash memory, different data pages therefore have different original bit error rates (RBER, also called raw bit error rates).

Referring to fig. 3, fig. 3 is a schematic diagram of RBER distribution of a data page of TLC Flash according to an embodiment of the present application;

as shown in fig. 3, TLC Flash includes three data pages, namely LSB, CSB and MSB, and the RBER distribution of each data page is different; under the same Erase-Program Cycle (EPC) count, the error correction thresholds of different data pages differ, that is, different error correction capabilities are required for different data pages to reach the same EPC target value.

In this embodiment, the solid state disk includes at least one word line, and each word line includes at least two data pages. For example, each word line of MLC Flash includes two data pages (LSB and MSB), each word line of TLC Flash includes three data pages (LSB, CSB and MSB), each word line of QLC Flash includes four data pages, and each word line of XLC Flash includes five data pages.

Referring to fig. 4, fig. 4 is a schematic diagram of a data page according to an embodiment of the present application;

as shown in fig. 4, each data page includes at least one error correction unit, wherein the error correction unit includes: valid data and check data, the valid data comprising: user data and meta information. It can be understood that the user data includes logical blocks, and the meta information is information that describes the structure, semantics, usage, and other properties of the stored information.

Referring to fig. 5 again, fig. 5 is a schematic diagram of a data page of a TLC Flash according to an embodiment of the present application;

as shown in fig. 5, for the LSB data page, the CSB data page and the MSB data page, the space occupied by the meta information of each page is the same, and the space occupied by the error correction check data of each page is also the same.

It can thus be seen that the data layouts of the LSB, CSB and MSB data pages are completely identical: the same error correction unit, the same meta information size, and the same error correction check data size.

It can be understood that the error correction unit is composed of valid data and check data. The check data is used by the error correction algorithm, and its size directly determines the error correction capability: the more check data, the stronger the error correction capability. The valid data is divided into user data and meta information; the meta information is used for firmware algorithm management and at least includes the logical address corresponding to the logical data block. From the user's perspective, both the meta information and the check data are redundant data. Since a given flash memory type has a fixed page size, and therefore a fixed error correction unit size, improving the error correction capability on this basis depends on the trade-off between the meta information and the check data.
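This trade-off can be made concrete with a little arithmetic. The byte counts below are illustrative assumptions for a 4 KiB-class error correction unit, not sizes taken from the patent:

```python
# Illustrative layout arithmetic for one error correction unit.
# ECC_UNIT and USER_DATA are assumed sizes, not figures from the patent.
ECC_UNIT = 4096 + 512   # fixed for a given flash type: data area + spare area
USER_DATA = 4096        # one logical block of user data

def parity_bytes(meta_bytes):
    # With the unit size fixed, every byte taken from the meta information
    # becomes an extra byte of check data, and vice versa.
    return ECC_UNIT - USER_DATA - meta_bytes

# Shrinking meta by 8 bytes grows the check data by exactly 8 bytes.
print(parity_bytes(16), parity_bytes(8))
```

The point of the sketch is that the sum of meta information and check data is a constant, which is why any gain in error correction capability must come from somewhere else in the unit.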

As shown in fig. 5, the size of the meta-information corresponding to each logical block is the same, and if the error correction capability is to be improved, the size of the meta-information needs to be compressed as a whole, so as to increase the space size of the check data.

At present, the worst page, i.e., the page with the highest original bit error rate, is generally used as the standard, and a unified error correction capability (an error correction threshold) is applied so that the error correction capability of every data page exceeds this threshold. However, because different data pages then have the same error correction capability, a page with a high original bit error rate (a weak page) is more likely to require re-reads than a page with a low original bit error rate (a strong page), so the read latency differs between data pages, which may affect the quality of service (QoS) to some extent. Moreover, the weak pages reach the end of their life cycle first while the life cycle of the strong pages has not yet been consumed, which wastes storage space and reduces the overall service life of the solid state disk.

Therefore, in order to prolong the overall service life of the solid state disk, the present application provides a meta information management method: by adopting meta information of different sizes for different data pages, the space freed in the weak page by reducing its meta information is used as check data space, thereby improving the error correction capability of the weak page.

Referring to fig. 6, fig. 6 is a schematic flowchart illustrating a meta information management method according to an embodiment of the present application;

as shown in fig. 6, the meta information management method is applied to a solid state disk, where the solid state disk includes at least one word line, each word line includes at least two data pages, and each data page includes at least one error correction unit, the error correction unit including: valid data and check data, the valid data comprising: user data and meta information, the method comprising:

step S601: determining a first data page and at least a second data page of at least two data pages in each word line according to an original bit error rate of each data page in each word line, wherein the original bit error rate of the first data page is smaller than that of the second data page;

specifically, the original bit error rate of each data page in each word line is obtained, for example: a word line includes a LSB data page, a CSB data page, and a MSB data page, and assuming that an original bit error rate of the LSB data page is minimum, the LSB data page is determined to be a first data page, and the CSB data page and the MSB data page are determined to be a second data page.

It will be appreciated that in practice the determination of the first data page and the second data page differs slightly depending on the encoding scheme of the flash memory vendor, i.e., the first data page and the second data page depend on the encoding scheme under the voltage-state distribution. For example, the original bit error rate of each data page may be determined from vendor or experimental data to establish how strong or weak each data page is, and the first data page and the second data page are determined accordingly.
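The selection in step S601 can be sketched as follows. The page names and RBER values are illustrative; real firmware would obtain them from vendor characterization or measurement, as noted above:

```python
def classify_pages(rber_by_page):
    """Pick the page with the lowest original bit error rate (RBER) as the
    first (strong) page; all remaining pages are second (weak) pages.
    A hedged sketch of step S601, not the patent's actual firmware logic."""
    first = min(rber_by_page, key=rber_by_page.get)
    seconds = [p for p in rber_by_page if p != first]
    return first, seconds

# TLC example with assumed RBER values per data page.
first, seconds = classify_pages({"LSB": 1e-4, "CSB": 3e-4, "MSB": 5e-4})
# first == "LSB", seconds == ["CSB", "MSB"]
```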

Step S602: and moving partial meta information of at least one second data page to a space corresponding to the meta information of the first data page, and using the space corresponding to the moved meta information in the second data page to increase the length of the check data so that the total space size of the meta information corresponding to each word line is unchanged.

Specifically, part of the meta information corresponding to one logical block in at least one second data page is moved to the space corresponding to the meta information of one logical block in the first data page, and the space corresponding to the moved meta information in each second data page is used to increase the check data length, that is, the space corresponding to the error correction check data is increased, so as to improve the error correction capability of the second data page.

Referring to fig. 7, fig. 7 is a schematic diagram illustrating meta information shifting of a data page of TLC Flash according to an embodiment of the present application;

as shown in fig. 7, the LSB data page is determined to be the first data page, and the CSB data page and the MSB data page are both the second data page, and a part of the meta information corresponding to logical block 0 of the CSB data page and the MSB data page is moved to the space of the meta information corresponding to the LSB data page, so that the space of the error correction check data is increased to increase the check data length. It can be understood that, since the space of the meta information of the LSB data page increases, the space of the error correction check data of the LSB data page decreases.

Since part of the meta-information in the second data page is moved into the first data page, it can be obtained by reading the meta-information of the first data page when it is necessary to read the moved meta-information in the second data page.

Further, in order to determine the adjustment ranges of the meta information and the error correction check data, the difference between the original bit error rates of the first data page and the second data page needs to be calculated; that is, the adjustment ranges of the meta information and the check data are set according to how much lower the RBER of the first data page (strong page) is than that of the second data page (weak page).

Referring to fig. 8, fig. 8 is a schematic diagram of an error correction threshold of a data page of TLC Flash according to an embodiment of the present application;

as shown in fig. 8, the vertical axis represents the raw bit error rate (RBER) of each data page and the horizontal axis represents the Erase-Program Cycle count (EPC); both the error correction strength and the error correction threshold are measured in terms of RBER, so that the error correction strength or the error correction threshold can be determined.

That is, viewed from outside the flash memory as a whole, the raw bit error rate (RBER) of the flash memory is evaluated to determine its reliability, and this value is mainly influenced by the Erase-Program Cycle count (EPC), also called the Program-Erase Cycle count (PEC), the Write-Erase Cycle count (WEC), or the Erase Counter (EC). EPC and RBER are positively correlated; after the EPC target value is increased, the weak page error correction strength, the error correction threshold and the strong page error correction strength all increase correspondingly, but the weak page error correction strength clearly needs to be higher than the strong page error correction strength to meet the EPC target value.

It can be understood that the overall lifetime of the solid state disk is prolonged by shifting the EPC target value to the right, i.e. increasing the EPC target value, however, the increase of the EPC target value causes the increase of RBER, and therefore, the error correction capability needs to be improved, and needs to be embodied by increasing the check data length.

For a typical flash vendor, nominal reliability means: under the nominal erase count and the error correction capability requirement (ECC Requirement) imposed on the reader, the data can be read without error. In short, the controller's error correction engine capability must be greater than the error correction threshold.

Since each data page exhibits a different RBER behavior (in other words, under the same EPC the number of raw error bits occurring on the LSB data page (strong page) is smaller than on the MSB and CSB data pages (weak pages)), the strong page can meet the requirement with a relatively weak error correction capability. When the overall flash lifetime is to be extended by increasing the EPC target value, the error correction capability clearly needs to be increased for both strong pages and weak pages.

In this application, different error correction strengths (error correction capabilities) are set for different data pages. For a fixed error correction algorithm, the error correction strength is strongly correlated with the check data length: the longer the check data, the higher the error correction capability. However, the data storage layout is largely fixed (within one error correction unit there are logical blocks containing user data and the metadata of management information, and the remainder is space for the check data). By improving metadata management, the check data space within the error correction units of different pages can be made different, thereby realizing different error correction strengths.

In an embodiment of the present application, the moving the partial meta information of the at least one second data page to a space corresponding to the meta information of the first data page includes:

and moving the logical address of the at least one second data page to the space corresponding to the meta information of the first data page. For example, a general-purpose data page has 32 Bytes of meta information space per page. Taking a word line as the unit, suppose one word line comprises three data pages, namely one first data page and two second data pages, each with 32 Bytes of meta information space, and the logical address (LMA) of each second data page is 10 Bytes. The logical address of each second data page is moved to the space corresponding to the meta information of the first data page, so the meta information space of each second data page becomes 32 - 10 = 22 Bytes, the meta information space of the first data page becomes 32 + 10 × 2 = 52 Bytes, and the total for the word line is still 32 Bytes × 3 = 96 Bytes.
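The byte accounting above can be sketched in a few lines; this is a minimal illustration assuming the example's sizes (32-Byte meta space per page, 10-Byte LMA), with names invented for clarity:

```python
# Hypothetical sketch of the per-word-line meta information reallocation.
# Sizes match the illustrative example in the text, not any real device.

META_SPACE = 32   # meta information space per data page, in bytes
LMA_SIZE = 10     # size of one logical address (LMA), in bytes

def reallocate_meta_space(num_second_pages):
    """Move each second (weak) page's LMA into the first (strong) page.

    Returns (first_page_meta, second_page_meta, word_line_total) in bytes.
    """
    second_meta = META_SPACE - LMA_SIZE                    # 32 - 10 = 22
    first_meta = META_SPACE + LMA_SIZE * num_second_pages  # 32 + 10 * N
    total = first_meta + second_meta * num_second_pages
    return first_meta, second_meta, total

# TLC: one first page (LSB) and two second pages (CSB, MSB)
print(reallocate_meta_space(2))  # (52, 22, 96) -- word line total unchanged
```

The invariant checked by the last line is the key property of step S602: the word line's total meta information space does not change, only its distribution across pages.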

It can be understood that in solid state disk software algorithms, meta information is mainly used in two phases: the garbage collection phase and the power-on table rebuild phase. This application changes the existing general-purpose handling of the meta information used in the power-on table rebuild phase: management by data page alone is changed to management by word line (WordLine), keeping the overall size of the meta information unchanged, i.e. part of the weak page meta information is moved into the strong page meta information. The specific content to migrate may be determined by a particular algorithm.

For example: the meta information used in the power-on table rebuild phase typically contains logical addresses (LMA, also called LBN, LPA, etc.) used to establish the P2L (physical-to-logical address) mapping. In the general-purpose layout, each piece of meta information contains one LMA, typically 8 to 10 bytes in size. Depending on the meta information space available in the first data page (strong page), the LMA of the second data page (weak page) may be moved, in whole or in part, into the meta information of the strong page.

As shown in table 1 below, the LMA of each second data page is moved entirely into the meta information of the first data page. Since the word line is the programming unit, only one time stamp (TimeStamp) is needed; therefore, the space corresponding to the time stamp in the second data page can also be used to enlarge the space for error correction check data.

TABLE 1

It can be understood that, for a first data page (strong page) and a second data page (weak page) under the same Erase-Program Cycle count (EPC), the raw bit error rate (RBER) of the first data page is smaller than that of the second data page; that is, the error correction strength required by the first data page is lower than that required by the second data page. Therefore, the error correction check data space of the first data page can be reduced to enlarge its meta information space, the enlarged meta information space of the first data page is used to record meta information of the second data page, and the error correction check data space of the second data page is enlarged accordingly to increase its error correction strength.

Specifically, please refer to fig. 9 again, fig. 9 is a schematic flowchart illustrating a process of determining a first adjustment interval and a second adjustment interval according to an embodiment of the present disclosure;

as shown in fig. 9, the process of determining the first adjustment interval and the second adjustment interval includes:

step S901: acquiring data distribution of original bit error rates of a first data page and a second data page, and acquiring a preset first error correction intensity threshold and a preset second error correction intensity threshold;

specifically, referring to fig. 8 again, as shown in fig. 8, the LSB data page is a first data page, and the MSB data page and the CSB data page are both second data pages, an EPC target value is determined by obtaining data distributions of original bit error rates of the first data page and the second data page, and a first error correction strength threshold corresponding to the first data page and a second error correction strength threshold corresponding to the second data page are determined based on the data distributions of the original bit error rates of the first data page and the second data page according to the EPC target value. In an embodiment of the present application, the error correction algorithm includes an LDPC error correction algorithm.

Step S902: determining a first error correction intensity corresponding to the first data page and a second error correction intensity corresponding to the second data page according to the data distribution of the original bit error rates of the first data page and the second data page and the first error correction intensity threshold and the second error correction intensity threshold;

specifically, before the error correction strengths of the first data page and the second data page are adjusted, both equal the error correction strength of the general-purpose data page. It is understood that general-purpose data pages are configured with a uniform error correction strength, without distinguishing strong and weak pages.

As shown in the following Table 2, assume the length of the error correction unit (codeword) is 4KB, the error correction strength of the general-purpose data page is 240 bit/4KB, and its error correction strength threshold is 240 bit/4KB; the error correction strength threshold of the first data page (strong page) is 210 bit/4KB and that of the second data page (weak page) is 240 bit/4KB. To improve the error correction capability of the second data page (weak page), the first error correction strength corresponding to the first data page and the second error correction strength corresponding to the second data page are determined according to the data distributions of the original bit error rates of the first and second data pages together with the first and second error correction strength thresholds, as follows:

determining that a first error correction strength corresponding to the first data page is (a first error correction strength threshold + a redundancy value);

and determining that the second error correction strength corresponding to the second data page is (the error correction strength threshold of the general-type data page + (the error correction strength threshold of the general-type data page-the first error correction strength threshold-the redundancy value)/N), wherein N is the number of the second data pages, and N is a positive integer.

For example:

if there is only one second data page, for example in MLC Flash where the LSB data page is the first data page and the MSB data page is the second data page, then the first error correction strength corresponding to the first data page is determined to be (the first error correction strength threshold), and the second error correction strength corresponding to the second data page is determined to be (the error correction strength threshold of the general-purpose data page + the second error correction strength threshold - the first error correction strength threshold);

if there are two second data pages, such as in TLC Flash, where the LSB data page is the first data page and both the CSB data page and the MSB data page are second data pages, then the first error correction strength corresponding to the first data page is determined to be (the first error correction strength threshold + the redundancy value), and the second error correction strength corresponding to the second data page is determined to be (the error correction strength threshold of the general-purpose data page + (the error correction strength threshold of the general-purpose data page - the first error correction strength threshold - the redundancy value)/2);

similarly, if there are N second data pages, determining that the first error correction strength corresponding to the first data page is (first error correction strength threshold + redundancy value), and the second error correction strength corresponding to the second data page is (error correction strength threshold of general data page + (error correction strength threshold of general data page-first error correction strength threshold-redundancy value)/N), where N is a positive integer and N is greater than or equal to 2;

in the embodiment of the present application, the redundancy value is a preset value. For example, in Table 2 below, taking TLC Flash with two second data pages and assuming a redundancy value of 10 bit/4KB, the first error correction strength corresponding to the first data page is (first error correction strength threshold + redundancy value) = 210 bit/4KB + 10 bit/4KB = 220 bit/4KB, and the second error correction strength corresponding to the second data page is (error correction strength threshold of the general-purpose data page + (error correction strength threshold of the general-purpose data page - first error correction strength threshold - redundancy value)/2) = 240 bit/4KB + (240 - 210 - 10) bit/4KB / 2 = 250 bit/4KB.

Type of error correction requirement | Error correction strength | RBER threshold requirement
First data page (strong page)        | 220 bit/4KB               | 210 bit/4KB
Second data page (weak page)         | 250 bit/4KB               | 240 bit/4KB
General-purpose data page            | 240 bit/4KB               | 240 bit/4KB

TABLE 2
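The strength redistribution above (general-purpose threshold, strong-page threshold, redundancy value, N weak pages) can be sketched as follows; function and variable names are illustrative, and the call reproduces the TLC numbers from Table 2:

```python
def page_error_correction_strengths(general_thr, first_thr, redundancy, n_second):
    """Redistribute error correction strength between strong and weak pages.

    All values are in correctable bits per 4KB error correction unit.
    The budget freed on the strong page is shared by the N weak pages.
    """
    first = first_thr + redundancy
    second = general_thr + (general_thr - first_thr - redundancy) / n_second
    return first, second

# Table 2 example: general 240, strong-page threshold 210, redundancy 10, 2 weak pages
print(page_error_correction_strengths(240, 210, 10, 2))  # (220, 250.0)
```

Note that the strong page's strength (220) stays above its RBER threshold (210) by exactly the redundancy value, while each weak page gains half of the 20-bit budget released by the strong page.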

Step S903: and determining a first adjustment interval of the check data length of the first data page and a second adjustment interval of the check data length of the second data page according to a preset error correction strength conversion rule.

Specifically, the error correction strength conversion rule is: the number of correctable bits equals the number of check data bits divided by a preset coefficient.

Specifically, the preset coefficient is determined by the preset error correction algorithm. For example, if the error correction algorithm is an LDPC error correction algorithm and the preset coefficient is 16, the error correction strength conversion rule is T = P/16, where T is the number of correctable bits and P is the number of parity (check data) bits.

As shown in table 2 above, with an error correction unit length of 4KB, the adjustable number of correctable bits of the first data page is 240 - 220 = 20 bits; since P = 16T, the corresponding number of check data bits is P = 16 × 20 = 320 bits.

Specifically, the determining a first adjustment interval of the check data length of the first data page and a second adjustment interval of the check data length of the second data page according to a preset error correction strength conversion rule includes:

calculating a first check data bit number corresponding to the first data page, and determining a first adjustment interval;

and calculating the bit number of second check data corresponding to the second data page, and determining a second adjustment interval.

As shown in table 2 above, with an error correction unit length of 4KB, the adjustable number of correctable bits of the first data page is 240 - 220 = 20 bits; since P = 16T, the check data bit count is P = 16 × 20 = 320 bits = 40 Bytes, that is, the first adjustment interval of the check data length of the first data page is [0, 40 Bytes]. Similarly, the adjustable number of correctable bits of the second data page is 250 - 240 = 10 bits, so P = 16 × 10 = 160 bits = 20 Bytes, that is, the second adjustment interval of the check data length of the second data page is [0, 20 Bytes].
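The interval calculation under the conversion rule P = 16T can be sketched as below; the coefficient is algorithm-dependent and the calls reproduce the Table 2 numbers:

```python
COEFF = 16  # parity bits per correctable bit (depends on the LDPC code used)

def adjustment_interval(general_strength, page_strength):
    """Interval (in bytes) by which a page's check data length may be adjusted.

    Both strengths are in correctable bits per error correction unit.
    """
    delta_bits = abs(general_strength - page_strength)  # adjustable correctable bits
    parity_bits = COEFF * delta_bits                    # P = 16 * T
    return (0, parity_bits // 8)                        # bits -> bytes

print(adjustment_interval(240, 220))  # (0, 40): strong page interval [0, 40 Bytes]
print(adjustment_interval(240, 250))  # (0, 20): weak page interval [0, 20 Bytes]
```

The strong page's interval bounds how much check data it can give up; the weak page's interval bounds how much it can gain.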

Specifically, the method further comprises:

and determining the increased check data length of the second data page and the decreased check data length of the first data page according to the first adjustment interval of the check data length of the first data page and the second adjustment interval of the check data length of the second data page.

Specifically, the reduced parity data length of the first data page is determined according to a first adjustment interval of the parity data length of the first data page, where the reduced parity data length of the first data page is located in the first adjustment interval, and preferably, the reduced parity data length of the first data page is a maximum value of the first adjustment interval, for example: if the first adjustment interval is [0, 40Bytes ], the reduced check data length of the first data page is 40 Bytes;

specifically, the increased check data length of the second data page is determined according to the second adjustment interval of the check data length of the second data page, where the increased check data length of the second data page lies in the second adjustment interval; preferably, the increased check data length of the second data page is the maximum value of the second adjustment interval, for example: if the second adjustment interval is [0, 20 Bytes], the increased check data length of the second data page is 20 Bytes.

Referring to fig. 10 again, fig. 10 is a schematic flowchart illustrating a process of issuing an operation command symbol according to an embodiment of the present application;

as shown in fig. 10, the process of issuing the operation command symbol includes:

step S1001: configuring different check matrixes according to the set check data length;

specifically, according to a first check data length corresponding to the first data page, a first check matrix combination corresponding to the first check data length is determined; determining a second check matrix combination corresponding to the second check data length according to the second check data length corresponding to the second data page;

it can be understood that, after the error correction strengths of the first data page and the second data page are determined, when the check data length is determined, that is, the first check data length corresponding to the first data page and the second check data length corresponding to the second data page, the corresponding check data lengths need to be matched through different matrices and parameters.

In the embodiment of the present application, determining a check matrix combination according to the check data length includes: selecting, from the limited set of check matrices, those usable under the given check data length. Specifically, a first check matrix combination corresponding to the first check data length is determined according to the first check data length of the first data page, and a second check matrix combination corresponding to the second check data length is determined according to the second check data length of the second data page. For example: the check data length space is 32 Bytes and the error correction algorithm provides 4 check matrices, requiring check data lengths of 24~34 Bytes, 28~40 Bytes, 34~48 Bytes and 42~54 Bytes respectively; only 2 of the check matrices can be selected, namely those requiring 24~34 Bytes and 28~40 Bytes, so the check matrix combination determined here is (24~34 Bytes, 28~40 Bytes).
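The matrix selection step can be sketched as a simple filter; the candidate ranges below are the illustrative ones from the example above, not real LDPC parameters:

```python
def select_check_matrices(space_bytes, matrices):
    """Keep only the check matrices whose required length range fits the space.

    matrices: list of (min_len, max_len) check data requirements, in bytes.
    """
    return [(lo, hi) for (lo, hi) in matrices if lo <= space_bytes <= hi]

# Example from the text: 32 Bytes of check data space, 4 candidate matrices
candidates = [(24, 34), (28, 40), (34, 48), (42, 54)]
print(select_check_matrices(32, candidates))  # [(24, 34), (28, 40)]
```

A matrix is usable only if the available space falls inside its required range; the two surviving ranges form the check matrix combination of the example.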

In an embodiment of the present application, the method further includes:

configuring error correction engine parameters according to the set check data length, wherein the error correction engine parameters refer to controlling the configuration of the error correction engine, such as whether a high performance mode is selected, an enhanced mode is selected, whether low power consumption is enabled, and the like.

Step S1002: setting corresponding operation command symbols according to different configured check matrixes;

specifically, according to the first check matrix combination, a corresponding first operation command symbol is determined, so as to determine a first operation command symbol corresponding to a first data page;

and determining a corresponding second operation command symbol according to the second check matrix combination so as to determine a second operation command symbol corresponding to the second data page.

Step S1003: identifying the data page type, issuing a corresponding operation command symbol, and performing data read-write operation;

specifically, after the data page type is identified, a corresponding operation command symbol is issued to perform data read-write operation, wherein the data page type includes a first data page or a second data page, and the operation command symbol includes the first operation command symbol or the second operation command symbol. For example: if the first data page is identified, issuing a first operation command symbol; and if the second data page is identified, issuing a second operation command character. Wherein the data read-write operation comprises a decoding operation and an encoding operation. And after the data page type is identified, selecting the corresponding operation command symbol, and further enabling the error correction engine to generate or identify the corresponding check code when the data is read and written.
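A hypothetical dispatch of operation command symbols by page type might look like the following; the command names are invented for illustration and do not come from any real flash command set:

```python
# Hypothetical mapping from data page type to its configured operation
# command symbol (step S1002); each command implies a check matrix combination.
OPCODES = {
    "first": "OP_CMD_STRONG_PAGE",   # first data page (strong page)
    "second": "OP_CMD_WEAK_PAGE",    # second data page (weak page)
}

def issue_command(page_type):
    """Step S1003: identify the page type and issue its command symbol."""
    try:
        return OPCODES[page_type]
    except KeyError:
        raise ValueError(f"unknown data page type: {page_type}")

print(issue_command("first"))   # OP_CMD_STRONG_PAGE
print(issue_command("second"))  # OP_CMD_WEAK_PAGE
```

The error correction engine then uses the issued command to generate or verify the check code of the corresponding length during reads and writes.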

Referring to fig. 11, fig. 11 is a schematic diagram of another solid state disk provided in the embodiment of the present application;

as shown in fig. 11, the solid state disk 100 includes: a flash memory array 110 and a solid state disk controller 120, wherein the solid state disk controller 120 includes: a processor 122, a buffer 123, an error correction engine 1211, and a flash memory controller 124, wherein the processor 122 is connected to the buffer 123, the error correction engine 1211, and the flash memory controller 124.

The processor 122 is configured to configure a corresponding error correction engine, and issue a read-write operation;

the buffer 123 is configured to store user data; in the embodiment of the present application, the buffer 123 is preferably a Random Access Memory (RAM).

The flash memory controller 124 is configured to send flash memory operation commands and data.

The error correction engine 1211 is connected to the memory and the processor, and is configured to perform error correction encoding on valid data and generate the corresponding check data when writing data, or to decode the error correction unit and correct the valid data when reading data;

in this embodiment of the present application, the error correction engine 1211 is an ECC engine (ECC Engine). The error correction engine supports multiple error correction capabilities, for example multiple check matrices under an LDPC error correction algorithm, which can be applied to the same error correction unit (Codeword) to generate different check data. In this embodiment, the check data length may be fine-tuned to bit level by shortening and puncturing techniques to match the different error correction capabilities.

In the embodiment of the present application, please refer to fig. 12 by taking the write data as an example, and fig. 12 is a schematic diagram illustrating an error correction engine generating error correction checking data according to the embodiment of the present application;

as shown in fig. 12, the error correction engine generates different check data according to different check matrices. Therefore, once the error correction requirements (check code lengths) of the strong and weak pages are determined, that is, once the error correction unit (codeword) is determined, different check matrices and error correction engine parameters are used to match the required check data length. During error correction encoding, selecting the corresponding check matrix generates error correction check data of the corresponding length.

In the embodiment of the application, the actual error correction strength of the strong and weak pages can be consistent through the improvement of the meta-information management, and the error correction capability of the weak pages is improved, so that the probability of reading errors of different data pages tends to be consistent, and the reading consistency is ensured (namely, the quality of service (QoS) is improved). In addition, the error correction strength of the weak page is improved to prolong the service life of the weak page, so that the service life of the whole solid state disk can be prolonged.

In an embodiment of the present application, a meta information management method is provided and applied to a solid state disk, where the solid state disk includes at least one word line, each word line includes at least two data pages, and each data page includes at least one error correction unit, wherein the error correction unit includes: valid data and check data, the valid data comprising: user data and meta information. The method comprises: determining a first data page and at least one second data page of the at least two data pages in each word line according to the original bit error rate of each data page in the word line, wherein the original bit error rate of the first data page is smaller than that of the second data page; and moving partial meta information of the at least one second data page to the space corresponding to the meta information of the first data page, and using the space corresponding to the moved meta information in the second data page to increase the check data length, so that the total meta information space corresponding to each word line is unchanged. By adopting different check data lengths for different data pages, the error correction capability of the weak page can be improved and the overall service life of the solid state disk prolonged.

At present, as the capacity of solid state disks keeps increasing, the meta information requirements grow accordingly; in particular, the meta information contains at least one logical address whose bit field occupies more than four bytes, which not only begins to limit the achievable error correction capability but also affects firmware algorithms such as the power-on recovery speed.

Since data in the same page or word line is not specially optimized, and the logical addresses corresponding to the data range from 0 to the maximum, the logical address in the meta information must be stored as a full address. Storing logical addresses in full-address form occupies too much space (for example, a large-capacity disk needs more than four bytes per address), so how to optimize the storage of logical addresses in the meta information becomes an urgent problem.

Based on this, the embodiment of the present application provides a meta information management method, a solid state disk controller and a solid state disk, so as to solve the technical problem that the overall life of the existing solid state disk is short.

Specifically, please refer to fig. 13, fig. 13 is a schematic flowchart illustrating another meta information management method according to an embodiment of the present application;

as shown in fig. 13, the flow of the meta information management method includes:

step S131: allocating the same base address to the logical addresses of the data in the same data page or the same word line;

specifically, through data aggregation, the logical addresses of data in the same data page or the same word line are made to have the same base address. Specifically, before the allocating the same base address to the logical addresses of the data in the same data page or the same word line, the method further includes:

based on a preset logic address subset rule, performing aggregation processing on data written into a flash memory, and dividing logic addresses of the data written into the flash memory into a plurality of subsets, wherein the logic address subset rule comprises: each subset corresponds to one data page or word line, it is determined that the logical addresses of the data in each subset have the same base address, and each logical address is represented by a combination of base address and offset.

Specifically, the logical address is used as an aggregation standard, and based on a preset logical address subset rule, during data aggregation processing, the data is classified and collected to a corresponding subset according to the logical address of the data, and the condition that the data is written into the same data page or the same word line of the flash memory is met.

Specifically, the aggregating, based on a preset logical address subset rule, of the data written into the flash memory, and dividing the logical address of the data written into the flash memory into a plurality of subsets includes:

assuming that the logical addresses of data written to the flash memory are divided into N subsets, then:

Base(n) = M0 + M1 + ... + M(n-1), i.e. Base(n) = Σ_{i=0}^{n-1} Mi

wherein n is a positive integer with 1 ≤ n ≤ N, i is a non-negative integer, N is the number of subsets, Base(n) is the base address of the nth subset, and Mi is the size of the ith subset.

Where Base(1) = M0, Base(2) = M0 + M1, and Base(3) = M0 + M1 + M2. It will be appreciated that the starting position of each subset's base address can be determined from the sizes of the preceding subsets, for example: if the size of subset 0 is 1024 (its addresses are 0 ~ 1023), then the base address of subset 1 starts at 1024.
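The cumulative base address computation above (each base address is the sum of the sizes of all preceding subsets) can be sketched as follows; the subset sizes are illustrative:

```python
def base_address(n, subset_sizes):
    """Base(n) = M0 + M1 + ... + M(n-1), the cumulative size of subsets 0..n-1."""
    return sum(subset_sizes[:n])

sizes = [1024, 1024, 2048]     # M0, M1, M2 (illustrative subset sizes)
print(base_address(1, sizes))  # 1024  (subset 0 spans addresses 0..1023)
print(base_address(2, sizes))  # 2048
print(base_address(3, sizes))  # 4096
```

With n = 1 this reproduces the example in the text: a 1024-entry subset 0 puts the base address of subset 1 at 1024.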

Referring to fig. 14 again, fig. 14 is a schematic diagram of an aggregation process according to an embodiment of the present disclosure;

as shown in fig. 14, before writing data from the Host (Host) or Garbage Collection (GC) to the flash memory, the data is aggregated and divided into N subsets, namely LMA subset 1 to LMA subset N, wherein each subset has the same base address, and the logical address of the data in the subset can be expressed as the base address plus an offset.
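Routing a piece of data to its subset by logical address can be sketched with a binary search over the sorted base addresses; the base values below are illustrative:

```python
import bisect

def subset_index(lma, bases):
    """Return the index of the subset whose address range contains this LMA.

    bases: ascending list of subset base addresses, with bases[0] == 0.
    """
    return bisect.bisect_right(bases, lma) - 1

bases = [0, 1024, 2048, 4096]     # Base(0)..Base(3), illustrative
print(subset_index(0, bases))     # 0
print(subset_index(1500, bases))  # 1
print(subset_index(4096, bases))  # 3
```

Data landing in the same subset is then written to the same data page or word line, so every logical address in that page shares one base address.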

Step S132: the logical address of each meta-information within the same page of data or the same word line is stored in the form of a combination of base address and offset.

In the embodiment of the application, because N logical addresses in the same data page or the same word line are stored in the meta information in the form of the base address plus the offset, only one copy of the same base address is stored, thereby reducing the length of the meta information occupied by the logical addresses.

In an embodiment of the present application, the method further includes:

dividing the base address into N sub-base addresses according to the number N of the meta-information in the same data page or the same word line, wherein N is a positive integer;

the storing the logic address of each meta-information in the same data page or the same word line in the form of the combination of the base address and the offset comprises:

and allocating a sub base address to each meta-information in a one-to-one correspondence manner, so that the logic address of each meta-information is stored in the form of a combination of the sub base address and the offset.

As shown in table 3 below, the logical address of the meta-information is stored in base-address form, that is, as a combination of sub-base address and offset. Compared with the general full-logical-address form, this saves the space used to store the logical address in the meta-information; the saved space is given to the check data, so that the overall error correction capability of the solid state disk is improved and the service life of the solid state disk is further prolonged.

TABLE 3

In the embodiment of the application, the logical addresses in the meta-information are stored in the base-address-plus-offset form. Compared with full logical address storage, the N logical addresses in the same page or word line share the same base address, so only one base address needs to be stored in total; the base address is evenly divided among the N pieces of meta-information, and each piece of meta-information holds 1/N of the base address and at least one offset. This reduces the space occupied by the meta-information, improves the overall error correction capability of the solid state disk, and prolongs the service life of the solid state disk.
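A minimal sketch of this splitting, assuming the base address is divided bit-wise into N equal fragments; the fragment width and field names are illustrative, not prescribed by the embodiment:

```python
# Illustrative sketch: one shared base address split bit-wise into N equal
# sub-base fragments, one per meta-information entry; each entry then stores
# only 1/N of the base plus its own offset. Widths and names are assumptions.

def split_base(base, n, bits_per_piece):
    """Split `base` into n fragments of `bits_per_piece` bits, low bits first."""
    mask = (1 << bits_per_piece) - 1
    return [(base >> (i * bits_per_piece)) & mask for i in range(n)]

def join_base(pieces, bits_per_piece):
    """Reassemble the base address from all n fragments of a page/word line."""
    base = 0
    for i, piece in enumerate(pieces):
        base |= piece << (i * bits_per_piece)
    return base

# A word line with N = 4 meta entries and a 32-bit base: each entry carries
# 8 bits of the base plus the offset of its own logical block.
pieces = split_base(0x12345678, 4, 8)
metas = [{"sub_base": p, "offset": off} for p, off in zip(pieces, [0, 1, 2, 3])]
```

Any N meta entries of the same page or word line together carry the whole base, so reading the page recovers every full logical address.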

Referring to fig. 15, fig. 15 is a schematic diagram illustrating a storage manner of a logical address according to an embodiment of the present application;

as shown in fig. 15, meta information corresponding to a logical block in a page or a word line in the prior art stores a logical address LMA, for example: the meta information 0 corresponding to the logical block 0 stores a logical address LMA0, which is directly stored in the meta information in the form of a full logical address;

the way of storing the logic address by the meta information corresponding to the logic block in the invention is as follows: 1/N base addresses (1/NBASE) and OFFSETs (OFFSET), for example: the meta information 0 corresponding to the logical block 0 stores 1/N BASE addresses (1/N BASE) and an OFFSET 0(OFFSET0) corresponding to the logical address LMA 0.

Referring to fig. 16 again, fig. 16 is a schematic diagram illustrating a relationship between a logical address and a page or a word line according to an embodiment of the present disclosure;

as shown in fig. 16, the data in the same data page or word line in one physical block (block) belongs to the same logical address subset, but which particular subset it is, is not fixed; conversely, a given physical address is not bound to a given logical address subset, and different data pages or word lines may hold data of different logical subsets. That is, the data aggregation only needs to satisfy the data amount requirement of the same data page or the same word line; the logical address subset does not limit the physical address where the data is located.

Referring back to fig. 17, fig. 17 is a detailed flowchart of step S131 in fig. 13;

as shown in fig. 17, the step S131: assigning the same base address to the logical addresses of data in the same data page or the same word line, comprising:

step S1311: acquiring a logic address corresponding to any one logic block in the same data page or the same word line;

step S1312: determining subset information corresponding to the logical address according to the logical address;

it can be understood that, since each logical block in the same data page or the same word line corresponds to a unique subset of logical addresses, the logical address corresponding to any logical block in the same data page or the same word line can be obtained by predetermining a lookup table of logical addresses and logical subsets, and subset information corresponding to the logical address can be determined.

Specifically, the determining, according to the logical address, the subset information corresponding to the logical address includes:

based on a pre-established lookup table of logical addresses and logical subsets, querying the lookup table according to the logical address and determining the subset information corresponding to the logical address, wherein the subset information includes the base address information of the subset.

It will be appreciated that in meta-information management, a look-up table of logical addresses to logical subsets needs to be maintained, so as to enable information of relevant subsets to be obtained using logical addresses, such as base addresses of logical subsets. And when the base address of the logic subset needs to be inquired in the meta information, the base address of the logic subset corresponding to the logic address is obtained through the inquiry table.
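The lookup table of logical addresses and logical subsets can be sketched as follows, assuming each subset is described by a base and a size and that subsets are laid out contiguously (an illustrative assumption):

```python
# Minimal sketch of the lookup table of logical addresses and logical subsets:
# each subset is base + size, and an address belongs to the subset whose range
# [base, base + size) contains it. Contiguous subsets are an assumption here.
import bisect

def build_lookup(sizes):
    """Precompute the base address of every subset from the subset sizes."""
    bases, running = [], 0
    for size in sizes:
        bases.append(running)
        running += size
    return bases

def find_subset(bases, lma):
    """Return (subset index, subset base, offset) for logical address `lma`."""
    idx = bisect.bisect_right(bases, lma) - 1
    return idx, bases[idx], lma - bases[idx]

bases = build_lookup([1024, 1024, 2048])
```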

Step S1313: and acquiring a base address corresponding to the subset information according to the subset information.

Specifically, a base address is obtained according to the logical address subset information, and the base address is divided among the N pieces of meta information of the same page or word line. The subset information contains information such as the base address; the base address is extracted, split, and stored in the meta information.

It will be appreciated that a logical subset is a base + size, and that the logical address falling within the logical subset is the base + offset of the logical subset. A lookup table of logical addresses and logical subsets is maintained for implementing base address lookup based on logical addresses.

In an embodiment of the present application, the method further includes: allocating the same logic address range for the data in the same data page or the same word line; and according to the logical address range, distributing the same base address for the logical addresses of the data in the same data page or the same word line, and storing the logical address of each meta-information in the same data page or the same word line in a combination form of the base address and the offset.

In this embodiment of the present application, if the meta-information includes at least two logical addresses, the meta-information includes at least two offsets, and each offset corresponds to one logical address one by one.

It can be understood that the management algorithm related to meta-information uses an ordering rule as the index, that is, the first offset is the logical address offset corresponding to the data block itself, and the subsequent offsets are the logical address offsets corresponding to other data blocks. The main purpose of storing the logical address is to associate data with its logical address, so that the relationship between data and logical address can be checked on some occasions. To speed up determining this relationship, the logical address of adjacent data is also placed in the meta-information of the data block; then, when the data block is read, the correspondence between both pieces of data and their logical addresses can be known by reading one data block. Thus, by setting a base address or sub-base address in the meta-information and combining it with at least two offsets, the logical addresses corresponding to a plurality of data blocks can be stored.

Referring to fig. 18, fig. 18 is a schematic flowchart illustrating a process of storing meta information according to an embodiment of the present application;

as shown in fig. 18, the flow of storing the meta information includes:

step S181: allocating N logical blocks and meta information according to the size of a page or a word line;

specifically, N logic blocks and N meta information are allocated according to the memory size of one data page or one word line, where each logic block corresponds to one meta information, for example: logical block 0 corresponds to meta information 0, … …, and logical block N-1 corresponds to meta information N-1.

Step S182: acquiring a logic address corresponding to any logic block, and inquiring the subset information according to the logic address;

specifically, based on a pre-established lookup table of logical addresses and logical subsets, the lookup table is queried according to the logical addresses, and subset information corresponding to the logical addresses, that is, logical address subset information, is determined, where the subset information includes base address information of the base addresses.

Step S183: acquiring a base address according to the logic address subset information, and segmenting the base address into N pieces of meta information of the same page or word line;

specifically, a base address is obtained according to the logical address subset information, the base address is divided into N sub-base addresses, the sub-base addresses correspond to the meta-information one by one, that is, one sub-base address corresponds to one meta-information one by one, so that the N sub-base addresses correspond to the N meta-information one by one, and each sub-base address is stored in the corresponding meta-information.

Step S184: allocating at least one logic address offset for each meta-information, wherein the logic address offset corresponds to a logic block where the meta-information is located;

specifically, at least one logical address offset is allocated to each meta-information, so that the meta-information includes at least one sub-base + offset.
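Steps S181 to S184 can be sketched together as follows; the names, the 8-bit fragment width and the bit-wise base split are illustrative assumptions:

```python
# Hedged sketch of steps S181-S184 for one data page or word line. The names,
# the 8-bit fragment width and the bit-wise base split are illustrative.

def store_meta(lmas, subset_base, bits_per_piece=8):
    """One meta entry per logical block: 1/N of the base plus one offset."""
    mask = (1 << bits_per_piece) - 1
    metas = []                                # S181: N blocks -> N meta entries
    for i, lma in enumerate(lmas):            # S182: subset (and base) known
        sub_base = (subset_base >> (i * bits_per_piece)) & mask   # S183: split
        metas.append({"sub_base": sub_base,   # S184: at least one offset each
                      "offset": lma - subset_base})
    return metas

metas = store_meta([1024, 1025, 1026, 1027], subset_base=1024)
```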

Referring to fig. 19, fig. 19 is a schematic diagram illustrating a meta information processing method according to an embodiment of the present disclosure;

as shown in fig. 19, after the logical address and the logical block are subjected to meta-information processing, the base address is obtained by querying the subset rule.

Referring to fig. 20, fig. 20 is a schematic diagram illustrating another meta information processing method according to an embodiment of the present disclosure;

as shown in fig. 20, the logical block data passed in by the previous module is combined with its logical address information to form the effective data portion of the error correction unit, and the logical address is used to look up the base address and to calculate the offset. From the base address and the number of logical blocks within the page or word line, the sub-base-address value to be placed in each piece of meta information can be calculated; and, according to the size of the page or word line, it is determined how many logical blocks and pieces of meta information are allocated, i.e., the number of error correction units.

In an embodiment of the present application, a meta-information management method is provided and applied to a solid state disk, where the solid state disk includes at least one word line, each word line includes at least two data pages, and each data page includes at least one error correction unit, where the error correction unit includes: a logical block and meta information. The method comprises: allocating the same base address to the logical addresses of the data in the same data page or the same word line; and storing the logical address of each piece of meta-information within the same data page or the same word line in the form of a combination of base address and offset. Storing the logical address as a combination of base address and offset reduces the meta-information space occupied by the logical address and allows the error correction check data to be lengthened, which improves the overall error correction capability of the solid state disk and prolongs its service life.

Referring to fig. 21 again, fig. 21 is a schematic structural diagram of a firmware system of a solid state hard disk controller according to an embodiment of the present disclosure;

as shown in fig. 21, the solid state disk controller of the solid state disk includes a firmware system, where the firmware system is used to connect the host and the flash memory array to implement the processing of data IO;

specifically, the firmware system 210 of the solid state hard disk controller includes:

a Front-End module 211 (Front End, FE) configured to obtain a host command to generate IO operations, where the front-end module is further responsible for operations such as the communication protocol with the host (Host), parsing of host commands, and solid state disk command processing;

a Data processing module 212(Data Process, DP), connected to the front-end module 211, configured to receive an IO operation sent by the front-end module 211 and Process the IO operation, where the Data processing module 212 is further configured to perform command-level Data processing, such as caching Data;

an algorithm module 213 (Flash Translation Layer, FTL), connected to the data processing module 212, and configured to perform mapping processing on the IO operation to determine the flash memory array to which it is delivered;

a Back End module 214(Back End, BE), connected to the algorithm module 213, for receiving the IO operation sent by the algorithm module 213 to perform read/write operation on the flash memory array;

the algorithm module 213(Flash Translation Layer, FTL) sends the IO operation to a Back End module 214(Back End, BE) of the solid state hard disk controller, so that the Back End module 214 of the solid state hard disk controller receives the IO operation sent by the algorithm module 213, and operates the corresponding Flash memory array or Flash memory medium according to the IO operation, that is, completes the operation processing from the data to the Flash memory, where the operation includes a read operation or a write operation.

After acquiring the host command, the front-end module 211 of the solid-state hard disk controller processes the host command to generate an IO operation, and sequentially passes through the data processing module 212, the algorithm module 213, and the back-end module 214 to operate the flash memory array.

Referring to fig. 22 again, fig. 22 is a schematic structural diagram of a firmware system of another solid state hard disk controller according to an embodiment of the present disclosure;

as shown in fig. 22, the firmware system includes:

a front end module 211, wherein the front end module 211 comprises: a command processing module 2111, configured to obtain a host command to generate an IO operation;

the data processing module 212 is connected to the command processing module 2111, and is configured to receive the IO operation sent by the command processing module 2111 and process the IO operation;

the algorithm module 213, connected to the data processing module 212, is configured to receive the IO operation processed by the data processing module 212, and perform mapping processing on the IO operation to determine a delivered flash memory array;

a backend module 214, wherein the backend module 214 comprises: the flash memory processing module 2141 is connected to the algorithm module 213, and is configured to receive the IO operation sent by the algorithm module 213, so as to perform read-write operation on the flash memory array;

wherein, the data processing module 212 includes:

the first aggregation processing module 2121 is connected to the command processing module 2111 and configured to perform an aggregation operation;

wherein the algorithm module 213 comprises:

the second aggregation processing module 2132 is connected to the flash memory processing module 2141 and configured to perform an aggregation operation.

In this embodiment of the application, the data processing module 212 further includes:

a first data caching module 2122 connected to the first aggregation processing module 2121 and configured to cache data;

the algorithm module 213 further includes:

the second data caching module 2133 is connected to the second aggregation processing module 2132 and is configured to cache data;

the meta information management module 2131 is connected to the first aggregation processing module 2121 and the second aggregation processing module 2132, and is configured to manage meta information.

It will be appreciated that, in the basic logic, there are two paths for data: one is host (Host) write, and the other is Garbage Collection (GC), i.e., reading from the NAND and then writing back to it; both paths require aggregation processing. Host writes are aggregated at the data processing module 212, and Garbage Collection (GC) writes are aggregated at the algorithm module 213. Meanwhile, the meta information management module 2131 handles the information processing after aggregation on both paths.

Referring to fig. 23 again, fig. 23 is a schematic diagram illustrating a detailed structure of the first aggregation processing module in fig. 22;

as shown in fig. 23, the first aggregation processing module 2121 includes:

a first rule management module 2101 configured to set an aggregation rule, where the aggregation rule includes: the number of subsets, the sizes and base addresses of the subsets, and the cache data flushing condition;

the cache data brushing condition refers to a data size satisfying the same address subset, wherein the same address subset refers to an address range in which addresses in the command fall in the same set subset. Setting a cache data brushing condition according to specific requirements, for example: if the data aggregation is limited to the data volume of one data page or one word line, the cache data brushing condition is that the data volume of one data page or one word line is used as a threshold value at the moment, and the cache data brushing condition is met if the threshold value is met; if the data aggregation is limited to the data volume size of the N data pages or the N word lines, the cache data brushing condition is that the data volume size of the N data pages or the N word lines is used as a threshold, and if the threshold is met, the cache data brushing condition is met.

For example: the cache data flushing condition takes the data amount of one data page or word line as the threshold; the address range of subset M0 is 0 ~ 1023, the write address of CMD0 is LBA 0 ~ 3, the write address of CMD1 is LBA 32 ~ 39, and the write address of CMD2 is LBA 2048 ~ 2063. Then CMD0 and CMD1 fall in M0; if the total data amount of the CMD0 and CMD1 operations meets the threshold, a full write operation has been aggregated, the cache data flushing condition is met, and the cached data can be flushed out.
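A worked version of this example; the subset ranges and the page threshold are the hypothetical figures used in the text:

```python
# Worked version of the CMD0/CMD1/CMD2 example. Subset ranges and the page
# threshold are the hypothetical figures used in the text.

SUBSET_RANGES = {"M0": range(0, 1024), "M2": range(2048, 4096)}

def subset_of(lba):
    """Return the name of the subset whose address range contains `lba`."""
    for name, rng in SUBSET_RANGES.items():
        if lba in rng:
            return name
    return None

def should_flush(cmds, subset, page_size_lbas):
    """Flush once the data aggregated for `subset` reaches one page's worth."""
    total = sum(len(lbas) for lbas in cmds if subset_of(lbas.start) == subset)
    return total >= page_size_lbas

# CMD0 writes LBA 0~3, CMD1 writes LBA 32~39, CMD2 writes LBA 2048~2063:
cmds = [range(0, 4), range(32, 40), range(2048, 2064)]
```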

The first identification aggregation module 2102 is configured to identify a logical address, determine the subset corresponding to the logical address, load the IO operation onto the corresponding IO linked list, and apply for a cache space for the IO operation;

the first identifying and aggregating module 2102 is configured to complete aggregation of similar logical address operations, specifically, identify a logical address of a host command, and mount the host command after splitting to a corresponding subset IO linked list according to a rule that the logical address in the host command falls in which subset.

Referring to fig. 25 again, fig. 25 is a schematic diagram of an IO linked list according to an embodiment of the present application;

as shown in fig. 25, the command collector (CMD catcher) is responsible for obtaining a command from the previous module, converting the command into IO operations, performing logical address identification on the input command, and, depending on which subset the address falls in, loading the IO operations corresponding to the command (CMD) onto the corresponding IO linked list. The IO scheduler (IO Dispatcher) is responsible for taking IO operations out of an IO linked list, classifying them, and allocating data buffers for the operations of the next module, where the next module is the algorithm module or the flash memory processing module, for example: the IO operation is passed to the algorithm module, and the data held in the buffer is flushed out to the flash memory processing module. Through the operation of the command collector and the IO scheduler, IO operations with the same base address are gathered onto the same linked list, i.e., the aggregation task is completed.
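A hypothetical sketch of the command collector and IO scheduler, assuming equal-sized subsets so that the subset index is simply the address divided by the subset size (the embodiment itself allows unequal subset sizes):

```python
# Hypothetical sketch of the CMD catcher and IO Dispatcher: commands are split
# into IO operations and appended to the linked list of the subset their
# logical address falls in; a list is drained when handed to the next module.
# Equal-sized subsets (index = lba // subset_size) are an assumption here.
from collections import defaultdict, deque

class Aggregator:
    def __init__(self, subset_size):
        self.subset_size = subset_size
        self.lists = defaultdict(deque)      # subset index -> IO linked list

    def collect(self, lba, length):
        """CMD catcher: route one IO operation to its subset's linked list."""
        self.lists[lba // self.subset_size].append((lba, length))

    def dispatch(self, subset):
        """IO Dispatcher: drain one linked list for the next module."""
        ios, self.lists[subset] = list(self.lists[subset]), deque()
        return ios
```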

The first classification application module 2103 is used for processing the subset data; when the data amount of the subset meets the cache data flushing condition, the subset data is flushed to the flash memory processing module;

specifically, the first classification application module 2103 is configured to complete the processing of operation data with similar logical addresses, for example: processing the subset IO and applying for a cache for the data to be written; when the data amount of the subset meets the data flushing condition, the IO operations are flushed to the next module, for example: the algorithm module;

referring again to fig. 24, fig. 24 is a schematic diagram illustrating a detailed structure of the second polymerization processing module in fig. 22;

as shown in fig. 24, the second aggregation processing module 2132 includes:

a second rule management module 3201, configured to set an aggregation rule, where the aggregation rule includes: the number of subsets, the sizes and base addresses of the subsets, and the cache data flushing condition;

specifically, the processing procedure of the second rule management module 3201 is the same as that of the first rule management module 2101, and reference may be made to the first rule management module 2101, which is not described herein again.

The second identification aggregation module 3202 is configured to identify a logical address, determine a subset corresponding to the logical address, load an IO operation on a corresponding subset linked list, that is, an IO linked list, and apply for a cache space for the IO operation;

specifically, the processing procedure of the second identification and aggregation module 3202 is the same as that of the first identification and aggregation module 2102, and reference may be made to the first identification and aggregation module 2102, which is not described herein again.

The second classification application module 3203 is configured to process the subset data and, when the data amount of the subset meets the cache data flushing condition, flush the subset data to the flash memory processing module.

Specifically, the second classification application module 3203 is configured to complete the processing of operation data with similar logical addresses, for example: processing the subset IO and applying for a cache for the data to be written; when the data amount of the subset meets the data flushing condition, the IO operations are flushed to the next module, for example: the flash memory processing module;

in this embodiment, the first aggregation processing module 2121 and the second aggregation processing module 2132 have the same functional modules, and are both used for aggregation operation in the IO processing process.

Referring to fig. 26 again, fig. 26 is a schematic view illustrating an operation flow of a subset linked list according to an embodiment of the present application;

as shown in fig. 26, the operation flow of the subset linked list includes:

step S261: configuring corresponding parameters according to a preset aggregation rule, and initializing;

specifically, the aggregation rule includes: the method comprises the steps of configuring parameters corresponding to the number of subsets, the size of base addresses of the subsets and cache data brushing conditions, and initializing the parameters to realize a preset aggregation rule.

Step S262: checking the command according to the aggregation rule, splitting the command into IO operations, and then mounting the IOs onto the corresponding subset linked lists;

specifically, according to a received host command, a logical address is identified, a subset corresponding to the logical address is determined, and an IO operation is loaded on a corresponding IO linked list, that is, a subset linked list.

Step S263: processing the subset linked lists in turn, applying for cache allocation for the IOs, checking the data amount of each subset, and flushing out to the next module if the data flushing condition is met;

specifically, the subset data of each subset linked list is processed in turn, a cache space is applied for the IO operations, and when the data amount of a subset meets the cache data flushing condition, the subset data is flushed to the algorithm module or the flash memory processing module. The data flushing condition is the cache data flushing condition described above: it refers to the amount of data accumulated for the same address subset, where the same address subset means that the addresses in the commands fall in the address range of the same configured subset. The cache data flushing condition is set according to specific requirements, for example: if the data aggregation is limited to the data amount of one data page or one word line, the flushing condition takes the data amount of one data page or one word line as the threshold, and the condition is met once the threshold is reached; if the data aggregation is limited to the data amount of N data pages or N word lines, the flushing condition takes the data amount of N data pages or N word lines as the threshold, and the condition is met once the threshold is reached.

Referring to fig. 27 again, fig. 27 is a schematic diagram illustrating a host writing data according to an embodiment of the present disclosure;

as shown in fig. 27, taking host-written data as an example, the written data contains commands with different logical address ranges, with x, y, z representing data of different address ranges. Assuming three logical address subsets are provided according to x, y, z, after aggregation the data of the different subsets are aggregated into different caches; when the cache management meets the data flushing condition, the data is flushed to the flash memory. The data then has logical addresses in the same range, and the addresses of the logical blocks stored in the meta information can be expressed with the same base address and different offsets. Garbage collection processing is handled in the same way, except that the host write is replaced by a garbage collection write.

It will be appreciated that the present application is not concerned with which page of data or which wordline data is written to, the particular page or wordline of data being determined by the particular mapping algorithm.

In an embodiment of the present application, there is provided a solid state hard disk controller, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the meta-information management method as described above. Storing the logical address as a combination of base address and offset reduces the meta-information space occupied by the logical address and allows the error correction check data to be lengthened, which improves the overall error correction capability of the solid state disk and prolongs its service life.

Embodiments of the present application further provide a non-volatile computer storage medium, where the computer storage medium stores computer-executable instructions which, when executed by one or more processors, cause the one or more processors to perform the meta information management method in any of the method embodiments described above.

The above-described embodiments of the apparatus or device are merely illustrative, wherein the unit modules described as separate parts may or may not be physically separate, and the parts displayed as module units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network module units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.

Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a general hardware platform, and certainly can also be implemented by hardware. Based on such understanding, the technical solutions mentioned above may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute the method according to each embodiment or some parts of the embodiments.
