Decoder, operating method of decoder and memory system including the decoder
Reading note: this technology, "Decoder, operating method of decoder and memory system including the decoder," was created by 金大成 on 2019-06-17. Its main content: the invention relates to an operating method of a decoder, which may comprise: performing a first sub-decoding operation on a target data block; performing a second sub-decoding operation on a candidate block and a chipkill block; performing a third sub-decoding operation to determine global check nodes; performing a fourth sub-decoding operation according to the global check nodes to infer and update the local variable nodes of the target data block and the local variable nodes of a failed data block; and, based on the components of the updated local variable nodes, performing, up to a set number of times, a repetition operation that repeats the first to fourth sub-decoding operations once.
1. A method of operation of a decoder that performs decoding operations based on blocks of data, the method of operation comprising:
performing a first sub-decoding operation on a target data block;
when the first sub-decoding operation fails, performing a second sub-decoding operation on a candidate data block and a chipkill block corresponding to the target data block;
when there are one or more data blocks for which the second sub-decoding operation failed, performing a third sub-decoding operation, based on the local variable nodes of the data blocks for which the second sub-decoding operation succeeded, i.e., successful data blocks, to determine global check nodes;
performing a fourth sub-decoding operation according to the global check nodes to infer and update the local variable nodes of the target data block and of each data block for which the second sub-decoding operation failed, i.e., each failed data block; and
performing, up to a set number of times and based on the components of the updated local variable nodes, a repetition operation that repeats the first to fourth sub-decoding operations once.
2. The operating method of claim 1, wherein, when the data block is a page unit, the data block includes data stored in one page, the candidate data block and the chipkill block are in the same super block as the target data block, and the chipkill block includes data generated from the target data block and the candidate data block by an exclusive-OR (XOR) operation.
3. The operating method of claim 1, wherein, when the data block is an Error Correction Code (ECC) block unit, the data block includes data stored in a portion of one page, the candidate data block and the chipkill block are in the same super block as the target data block but stored in portions of different pages, and the chipkill block includes data generated from the target data block and the candidate data block by an exclusive-OR (XOR) operation.
4. The operating method of claim 1, wherein, when the data block is an Error Correction Code (ECC) block unit, the data block includes data stored in a portion of one page, the candidate data block and the chipkill block include data stored in the same page as the target data block, and the chipkill block includes data generated from the target data block and the candidate data block by an exclusive-OR (XOR) operation.
5. The method of operation of claim 1, wherein the performing of the third sub-decoding operation comprises generating the component of an ith check node of the global check nodes by XORing the components of the jth local variable nodes of the successful data blocks, where j and i are the same number.
6. The method of operation of claim 1, wherein the performing of the fourth sub-decoding operation comprises inferring the local variable node of the failed data block by using a min-sum algorithm.
7. The method of operation of claim 6, wherein the performing of the fourth sub-decoding operation comprises: when the component of a first global check node is "0", inferring the local variable node of the failed data block by multiplying the product of the signs of the log-likelihood ratios (LLRs) transmitted to the first global check node from the local variable nodes of the successful data blocks connected to the first global check node by the minimum value among the magnitudes of the LLRs.
8. The method of operation of claim 6, wherein the performing of the fourth sub-decoding operation comprises: when the component of a second global check node is "1", inferring the local variable node of the failed data block by inverting the sign of the value obtained by multiplying the product of the signs of the LLRs transmitted to the second global check node from the local variable nodes of the successful data blocks connected to the second global check node by the minimum value among the magnitudes of the LLRs.
9. The method of operation of claim 1, further comprising: when all of the second sub-decoding operations succeed, inferring the target data block from the successful data blocks by an exclusive-OR (XOR) operation.
10. The method of operation of claim 1, further comprising: setting all components of the global check nodes to "0" when all of the second sub-decoding operations fail.
11. The method of operation of claim 1, further comprising: determining that decoding of the target data block fails when the repetition operation has been performed the set number of times.
12. The method of operation of claim 1, further comprising, when the first sub-decoding operation succeeds:
determining that decoding of the target data block is successful; and
outputting the target data block.
13. A decoder that performs a decoding operation based on a block of data, the decoder comprising:
a first decoder that performs a first sub-decoding operation on a target data block and, when the first sub-decoding operation fails, performs a second sub-decoding operation on a candidate data block and a chipkill block corresponding to the target data block;
a global check node component that, when there are one or more data blocks for which the second sub-decoding operation failed, performs a third sub-decoding operation, based on the local variable nodes of the data blocks for which the second sub-decoding operation succeeded, i.e., successful data blocks, to determine global check nodes; and
a data combiner that performs a fourth sub-decoding operation according to the global check nodes to infer and update the local variable nodes of the target data block and of each data block for which the second sub-decoding operation failed, i.e., each failed data block,
wherein the decoder performs, up to a set number of times and based on the components of the updated local variable nodes, a repetition operation that repeats the first to fourth sub-decoding operations once.
14. The decoder of claim 13, wherein, when the data block is a page unit, the data block includes data stored in one page, the candidate data block and the chipkill block are in the same super block as the target data block, and the chipkill block includes data generated from the target data block and the candidate data block by an exclusive-OR (XOR) operation.
15. The decoder of claim 13, wherein, when the data block is an Error Correction Code (ECC) block unit, the data block includes data stored in a portion of one page, the candidate data block and the chipkill block are in the same super block as the target data block but stored in portions of different pages, and the chipkill block includes data generated from the target data block and the candidate data block by an exclusive-OR (XOR) operation.
16. The decoder of claim 13, wherein, when the data block is an Error Correction Code (ECC) block unit, the data block includes data stored in a portion of one page, the candidate data block and the chipkill block include data stored in the same page as the target data block, and the chipkill block includes data generated from the target data block and the candidate data block by an exclusive-OR (XOR) operation.
17. The decoder of claim 13, further comprising:
a fail data buffer that receives the target data block and the failed data block from the first decoder and stores the received target data block and failed data block; and
a data buffer that receives the successful data block from the first decoder and stores the received successful data block.
18. The decoder of claim 13, further comprising an exclusive-OR (XOR) unit that generates the component of an ith check node of the global check nodes by XORing the components of the jth local variable nodes of the successful data blocks, where j and i are the same number.
19. The decoder of claim 13, wherein the data combiner infers the local variable node of the failed data block by using a min-sum algorithm.
20. The decoder of claim 19, wherein, when the component of a first global check node is "0", the data combiner infers the local variable node of the failed data block by multiplying the product of the signs of the log-likelihood ratios (LLRs) transmitted to the first global check node from the local variable nodes of the successful data blocks connected to the first global check node by the minimum value among the magnitudes of the LLRs.
21. The decoder of claim 19, wherein, when the component of a second global check node is "1", the data combiner infers the local variable node of the failed data block by inverting the sign of the value obtained by multiplying the product of the signs of the LLRs transmitted to the second global check node from the local variable nodes of the successful data blocks connected to the second global check node by the minimum value among the magnitudes of the LLRs.
22. The decoder of claim 13, wherein the data combiner infers the target data block from the successful data blocks by an exclusive-OR (XOR) operation when all of the second sub-decoding operations succeed.
23. The decoder of claim 13, further comprising a decoder input buffer storing the local variable nodes of the successful data block and the failed data block generated by the second sub-decoding operation, and the local variable nodes of the target data block and the failed data block updated by the fourth sub-decoding operation.
24. The decoder of claim 13, wherein the global check node component sets all components of the global check node to "0" when all of the second sub-decoding operations fail.
25. The decoder according to claim 13, wherein the decoder determines that decoding of the target data block fails when the repetition operation is performed the set number of times.
26. The decoder of claim 13, wherein the decoder determines that decoding of the target data block is successful and outputs the target data block when the first sub-decoding operation is successful.
27. A memory system, comprising:
a memory device storing data; and
a controller to read data from the memory device and decode the read data,
wherein the controller includes:
a first decoder that performs a first sub-decoding operation on a target data block and, when the first sub-decoding operation fails, performs a second sub-decoding operation on a candidate data block and a chipkill block corresponding to the target data block;
a global check node component that, when there are one or more data blocks for which the second sub-decoding operation failed, performs a third sub-decoding operation, based on the local variable nodes of the data blocks for which the second sub-decoding operation succeeded, i.e., successful data blocks, to determine global check nodes; and
a data combiner that performs a fourth sub-decoding operation according to the global check nodes to infer and update the local variable nodes of the target data block and of each data block for which the second sub-decoding operation failed, i.e., each failed data block,
wherein the controller performs, up to a set number of times and based on the components of the updated local variable nodes, a repetition operation that repeats the first to fourth sub-decoding operations once.
28. The memory system of claim 27 wherein the data combiner infers the local variable node of the failed data block by utilizing a min-sum algorithm.
29. The memory system of claim 27, wherein, when the component of a first global check node is "0", the data combiner infers the local variable node of the failed data block by multiplying the product of the signs of the log-likelihood ratios (LLRs) transmitted to the first global check node from the local variable nodes of the successful data blocks connected to the first global check node by the minimum value among the magnitudes of the LLRs.
30. The memory system of claim 27, wherein, when the component of a second global check node is "1", the data combiner infers the local variable node of the failed data block by inverting the sign of the value obtained by multiplying the product of the signs of the LLRs transmitted to the second global check node from the local variable nodes of the successful data blocks connected to the second global check node by the minimum value among the magnitudes of the LLRs.
31. A decoder, comprising:
a first decoder that:
performs a first sub-decoding operation on a target data block, and
when the first sub-decoding operation fails, performs a second sub-decoding operation on a candidate data block and a chipkill block corresponding to the target data block; and
a second decoder that:
when at least one data block for which the second sub-decoding operation failed, i.e., a failed data block, is present among the candidate data block and the chipkill block, performs a third sub-decoding operation, based on the data blocks for which the second sub-decoding operation succeeded, to generate global check nodes, and
performs a fourth sub-decoding operation on the target data block and the failed data block, based on the information of the global check nodes, to update the local variable nodes of the target data block for the first sub-decoding operation.
Technical Field
Various embodiments of the present invention relate to a decoder, a method of operating the decoder, and a memory system including the decoder.
Background
Generally, memory devices are divided into volatile memory devices, such as Dynamic Random Access Memory (DRAM) and Static RAM (SRAM), and nonvolatile memory devices, such as the following: Read-Only Memory (ROM), Masked ROM (MROM), Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), Ferroelectric RAM (FRAM), Phase-change RAM (PRAM), Magnetic RAM (MRAM), Resistive RAM (RRAM), and flash memory.
Volatile memory devices lose stored data when power is interrupted, while non-volatile memory devices retain stored data even when power is interrupted. Non-volatile flash memory devices are widely used as storage media in computer systems due to their high programming speed, low power consumption, and large data storage capacity.
In non-volatile memory devices, and in particular in flash memory devices, the data state of each memory cell depends on the number of bits that can be programmed into the memory cell.
For example, when k bits are to be programmed in a memory cell, one of 2^k threshold voltages is formed in the memory cell at any given time. Due to slight differences between the electrical characteristics of the memory cells, the threshold voltages of memory cells programmed with the same data form a threshold voltage distribution. The threshold voltage distributions respectively correspond to the 2^k data values representable by the k bits of information.
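As a quick check of the arithmetic above, a cell programmed with k bits must distinguish 2^k threshold-voltage states. A minimal sketch (the per-cell bit counts for SLC/MLC/TLC/QLC are standard terminology, not taken from this document):

```python
# Number of threshold-voltage states needed for a k-bit memory cell.
def num_states(k: int) -> int:
    """A cell storing k bits must form one of 2**k threshold voltages."""
    return 2 ** k

# SLC, MLC, TLC, QLC store 1, 2, 3, 4 bits per cell respectively.
for name, k in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    print(name, num_states(k))  # SLC 2, MLC 4, TLC 8, QLC 16
```

As k grows, the same fixed voltage window must hold exponentially more distributions, which is why the overlap problem described next worsens.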
However, the voltage window applicable to various threshold voltage distributions is limited. Thus, as the value of k increases, the distance between successive threshold voltage distributions decreases and the likelihood that adjacent threshold voltage distributions may overlap increases. When two adjacent threshold voltage distributions overlap, the read data may include an erroneous bit.
Fig. 1 is a graph showing threshold voltage distributions of a program state and an erase state of a triple-level cell (TLC) nonvolatile memory device.
Fig. 2 is a graph illustrating threshold voltage distributions of a program state and an erase state of a triple-level cell (TLC) nonvolatile memory device due to characteristic degradation.
In a TLC non-volatile memory device, such as a TLC flash memory device, capable of storing 3-bit data in a single memory cell, the memory cell may have one of 2^3 threshold voltage distributions.
Due to characteristic differences between memory cells, threshold voltages of memory cells programmed for the same data form a threshold voltage distribution. As shown in fig. 1, in the TLC nonvolatile memory device, threshold voltage distributions are formed corresponding to data states including 7 program states "P1" to "P7" and an erase state "E". Fig. 1 shows an ideal case where threshold voltage distributions do not overlap and there is a sufficient read voltage margin between the threshold voltage distributions.
Referring to the flash memory example of fig. 2, a memory cell may experience charge loss as electrons trapped in the floating gate or tunnel oxide film discharge over time. Such charge loss may be accelerated when the tunnel oxide film deteriorates due to repeated program and erase operations. Charge loss results in a decrease in the threshold voltage of the memory cell. For example, as shown in fig. 2, the threshold voltage distributions may shift to the left due to charge loss.
In addition, program disturb, erase disturb and/or reverse pattern dependency (back pattern dependency) may cause an increase in threshold voltage. As shown in fig. 2, when the characteristics of the memory cell deteriorate, adjacent threshold voltage distributions may overlap.
Once adjacent threshold voltage distributions overlap, the read data may include a large number of errors when a particular read voltage is applied to a selected word line. For example, when the sensing state of the memory cell is on according to the read voltage Vread3 applied to the selected word line, the memory cell is determined to have the second program state "P2". When the sensing state of the memory cell is off according to the read voltage Vread3 applied to the selected word line, the memory cell is determined to have the third program state "P3". However, when adjacent threshold voltage distributions overlap, a memory cell having the third program state "P3" may be erroneously determined to have the second program state "P2". In short, when adjacent threshold voltage distributions overlap as shown in fig. 2, read data may include a large number of errors.
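The sensing decision described above can be sketched in a few lines. The voltage values below are hypothetical, chosen only to illustrate how a read voltage between two distributions classifies a cell, and how a drifted threshold is misread when distributions overlap:

```python
# Minimal sketch of hard-decision sensing at one read voltage (Vread3 in
# the text). All voltage values are hypothetical illustrations.
def sense(v_th: float, v_read: float) -> str:
    # A cell conducts ("on") when the read voltage exceeds its threshold.
    return "on" if v_th < v_read else "off"

def classify(v_th: float, v_read3: float) -> str:
    # "on" at Vread3 -> state P2; "off" -> state P3, as described above.
    return "P2" if sense(v_th, v_read3) == "on" else "P3"

VREAD3 = 2.0  # hypothetical read voltage between the P2 and P3 distributions

assert classify(1.7, VREAD3) == "P2"  # typical P2 cell
assert classify(2.3, VREAD3) == "P3"  # typical P3 cell
# A P3 cell whose threshold drifted below Vread3 (overlapping distributions)
# is misread as P2 -- exactly the error the decoder must later correct:
assert classify(1.9, VREAD3) == "P2"
```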
Accordingly, there is a need for a scheme to reliably and quickly read data stored in memory cells of a memory device, particularly in multi-layer memory cells of a highly integrated memory device.
Disclosure of Invention
Various embodiments of the present invention relate to a controller, a memory system, and an operating method thereof, which are capable of reliably and rapidly reading data stored in memory cells of a memory device, such as multi-layered memory cells of a highly integrated memory device.
According to an embodiment of the present invention, an operating method of a decoder that performs a decoding operation based on predetermined data blocks may include: performing a first sub-decoding operation on a target data block; when the first sub-decoding operation fails, performing a second sub-decoding operation on a candidate data block and a chipkill block corresponding to the target data block; when there are one or more data blocks for which the second sub-decoding operation failed, performing a third sub-decoding operation, based on the local variable nodes of the data blocks for which the second sub-decoding operation succeeded, i.e., successful data blocks, to determine global check nodes; performing a fourth sub-decoding operation according to the global check nodes to infer and update the local variable nodes of the target data block and of each data block for which the second sub-decoding operation failed, i.e., each failed data block; and performing, up to a set number of times and based on the components of the updated local variable nodes, a repetition operation that repeats the first to fourth sub-decoding operations once.
According to an embodiment of the present invention, a decoder that performs a decoding operation based on predetermined data blocks may include: a first decoder adapted to perform a first sub-decoding operation on a target data block and, when the first sub-decoding operation fails, to perform a second sub-decoding operation on a candidate data block and a chipkill block corresponding to the target data block; a global check node component adapted to, when there are one or more data blocks for which the second sub-decoding operation failed, perform a third sub-decoding operation, based on the local variable nodes of the data blocks for which the second sub-decoding operation succeeded, i.e., successful data blocks, to determine global check nodes; and a data combiner adapted to perform a fourth sub-decoding operation according to the global check nodes to infer and update the local variable nodes of the target data block and of each data block for which the second sub-decoding operation failed, i.e., each failed data block, wherein the decoder performs, up to a set number of times and based on the components of the updated local variable nodes, a repetition operation that repeats the first to fourth sub-decoding operations once.
According to an embodiment of the present invention, a memory system may include: a memory device adapted to store data; and a controller adapted to read data from the memory device and decode the read data, wherein the controller includes: a first decoder adapted to perform a first sub-decoding operation on a target data block and, when the first sub-decoding operation fails, to perform a second sub-decoding operation on a candidate data block and a chipkill block corresponding to the target data block; a global check node component adapted to, when there are one or more data blocks for which the second sub-decoding operation failed, perform a third sub-decoding operation, based on the local variable nodes of the data blocks for which the second sub-decoding operation succeeded, i.e., successful data blocks, to determine global check nodes; and a data combiner adapted to perform a fourth sub-decoding operation according to the global check nodes to infer and update the local variable nodes of the target data block and of each data block for which the second sub-decoding operation failed, i.e., each failed data block, wherein the controller performs, up to a set number of times and based on the components of the updated local variable nodes, a repetition operation that repeats the first to fourth sub-decoding operations once.
According to an embodiment of the present disclosure, a decoder may include: a first decoder adapted to: perform a first sub-decoding operation on a target data block, and, when the first sub-decoding operation fails, perform a second sub-decoding operation on a candidate data block and a chipkill block corresponding to the target data block; and a second decoder adapted to: when at least one data block for which the second sub-decoding operation failed, i.e., a failed data block, is present among the candidate data block and the chipkill block, perform a third sub-decoding operation, based on the data blocks for which the second sub-decoding operation succeeded, to generate global check nodes, and perform a fourth sub-decoding operation on the target data block and the failed data block, based on the information of the global check nodes, to update the local variable nodes of the target data block for the first sub-decoding operation.
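The decoding flow summarized above can be sketched in simplified form. This is an illustrative simulation under assumptions not stated in the source (blocks modeled as bit lists, a single chipkill block equal to the XOR of the target and candidate blocks, and a scalar min-sum update per bit); it is a sketch of the general technique, not the patented implementation:

```python
from functools import reduce

# Blocks are bit lists. The chipkill block is the XOR of the target and
# candidate blocks, so the XOR of ALL blocks is the zero vector; each
# global check node i therefore constrains the ith bits of all blocks.

def xor_blocks(blocks):
    return [reduce(lambda a, b: a ^ b, bits) for bits in zip(*blocks)]

def infer_failed_block(successful_blocks):
    """Third/fourth sub-decoding, hard-information special case: with one
    failed block, its bits are the XOR of the corresponding bits of all
    successfully decoded blocks."""
    return xor_blocks(successful_blocks)

def min_sum_llr(llrs, check_bit=0):
    """Min-sum style soft inference: the LLR passed to a failed variable
    node is the product of the signs of the incoming LLRs times the
    minimum of their magnitudes; a check-node component of 1 flips it."""
    sign = reduce(lambda a, b: a * b, [1 if x >= 0 else -1 for x in llrs])
    mag = min(abs(x) for x in llrs)
    return -sign * mag if check_bit else sign * mag

# target, candidate, and chipkill blocks (chipkill = XOR of the other two)
target    = [1, 0, 1, 1]
candidate = [0, 1, 1, 0]
chipkill  = xor_blocks([target, candidate])

# Suppose the target block fails its sub-decoding: recover it from the
# blocks that succeeded.
assert infer_failed_block([candidate, chipkill]) == target

# Soft inference: sign product of (+,-,+) is -1, minimum magnitude is 1.0.
assert min_sum_llr([2.5, -1.0, 4.0]) == -1.0
assert min_sum_llr([2.5, -1.0, 4.0], check_bit=1) == 1.0
```

In the patented scheme these steps are then iterated up to a set number of times with the updated local variable nodes fed back into the first sub-decoding operation.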
Drawings
The above and other features and advantages of the present invention will become more readily apparent to those skilled in the art to which the present invention pertains from the following detailed description, taken in conjunction with the accompanying drawings identified below.
Fig. 1 is a diagram showing threshold voltage distributions illustrating a program state and an erase state of a triple-level cell (TLC) nonvolatile memory device.
Fig. 2 is a diagram illustrating threshold voltage distributions of a program state and an erase state due to characteristic deterioration of a triple-level cell (TLC) nonvolatile memory device.
FIG. 3 is a block diagram illustrating a memory system according to an embodiment of the invention.
Fig. 4A is a detailed block diagram showing the memory system of fig. 3.
Fig. 4B is a circuit diagram showing an exemplary configuration of a memory block employed in the memory system of fig. 4A.
FIG. 5 is a flow chart illustrating operation of a controller, such as the controller employed in the memory system of FIG. 4A, in accordance with an embodiment of the present invention.
Fig. 6A is a Tanner graph illustrating a Low Density Parity Check (LDPC) decoding operation.
Fig. 6B is a diagram illustrating an LDPC code.
Fig. 6C is a schematic diagram showing a syndrome checking process according to the LDPC decoding operation.
Fig. 7A is a diagram illustrating a 2-bit soft-decision read operation according to an embodiment of the present disclosure.
Fig. 7B is a diagram illustrating a 3-bit soft-decision read operation according to an embodiment of the present disclosure.
Fig. 8A is a block diagram illustrating a structure of an encoder according to an embodiment of the present disclosure.
Fig. 8B is a diagram illustrating an encoding method according to an embodiment of the present disclosure.
Fig. 8C is a flow chart illustrating operation of an encoder according to an embodiment of the present disclosure.
Fig. 9A to 9C are diagrams illustrating a method for storing a second codeword according to an embodiment of the present disclosure.
Fig. 10 is a diagram schematically illustrating the structure of a decoder according to an embodiment of the present disclosure.
Fig. 11 is a diagram illustrating a structure of a second decoder according to an embodiment of the present disclosure.
Fig. 12 is a diagram illustrating a structure of global decoding according to an embodiment.
Fig. 13A is a flowchart illustrating a decoding process according to an embodiment of the present disclosure.
Fig. 13B to 13E are diagrams illustrating an example of a decoding process according to an embodiment of the present disclosure.
Detailed Description
Various embodiments will be described in more detail below with reference to the accompanying drawings. The present invention may, however, be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the invention to those skilled in the art. The scope of the invention is defined by the claims.
It should be noted that the figures are not necessarily to scale and, in some instances, the proportions may be exaggerated to more clearly illustrate various elements of the embodiments.
Furthermore, in the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. The present invention may be practiced without some or all of these specific details. In other instances, well known process structures and/or processes have not been described in detail in order to not unnecessarily obscure the present invention.
It should also be noted that in some cases, it will be apparent to those skilled in the relevant art that elements (also referred to as features) described in connection with one embodiment may be used alone or in combination with other elements of another embodiment unless specifically stated otherwise. Moreover, references throughout this specification to "an embodiment," "another embodiment," and so forth, do not necessarily refer to only one embodiment, and different references to any such phrases do not necessarily refer to the same embodiment.
Various embodiments of the present invention will now be described in detail with reference to the accompanying drawings.
FIG. 3 is a block diagram illustrating a memory system according to an embodiment of the invention.
Fig. 4A is a detailed block diagram illustrating the memory system of fig. 3.
Fig. 4B is a circuit diagram illustrating an exemplary configuration of a memory block employed in the memory system of fig. 4A.
Fig. 5 is a flow chart illustrating operation of a controller, such as the controller employed in the memory system of fig. 4A, in accordance with an embodiment of the present invention.
Referring to fig. 3 to 5, the
The
The
The
The
When the number of erroneous bits exceeds the error correction capability of ECC130, ECC130 may not correct the erroneous bits. In this case, ECC130 may generate an error correction failure signal.
ECC130 may correct errors using coded modulation such as: Low Density Parity Check (LDPC) codes, Bose-Chaudhuri-Hocquenghem (BCH) codes, turbo codes, Reed-Solomon (RS) codes, convolutional codes, Recursive Systematic Codes (RSC), Trellis Coded Modulation (TCM), and Block Coded Modulation (BCM). ECC130 may include any and all circuits, systems, or devices suitable for error correction.
The
The
For another example, the
Referring to fig. 4A, the
The
ECC130 may detect and correct erroneous bits included in data read from
The
According to an embodiment, ECC130 may perform LDPC encoding on raw data to be programmed to
The
The
Referring to fig. 4B, each of the memory blocks 211 may include a plurality of
Fig. 4B exemplarily shows a
Referring back to fig. 4A, the
The
The
The
The read/
During a programming operation, the read/
Referring to fig. 4A and 5, the operation of the
The first LDPC decoding step S510 may include: performing a hard decision LDPC decoding operation, according to a hard decision read voltage V_HD, on data read from the memory cells of the
The second LDPC decoding step S530 may include: when the hard decision LDPC decoding on the ith data finally fails in the first LDPC decoding step S510, a soft decision LDPC decoding operation is performed on the ith data. The second LDPC decoding step S530 may include steps S531, S533, and S535.
In step S511 of the hard decision read step, the
In step S513, the ECC130 may perform a hard decision LDPC decoding operation as the first LDPC decoding operation. ECC130 may perform a hard decision LDPC decoding operation on hard decision read data read from
At step S515, the ECC130 may determine whether the hard decision LDPC decoding operation succeeded or failed, that is, whether the errors in the hard decision read data have been corrected. For example, the ECC130 may make this determination by using the parity check matrix and the hard decision read data on which the hard decision LDPC decoding was performed in step S513. When the product of the parity check matrix and the hard decision read data is the zero vector ('0'), it may be determined that the hard decision read data is corrected. On the other hand, when the product is not the zero vector ('0'), it may be determined that the hard decision read data is not corrected.
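The success check described above (parity-check matrix times the decoded word equals the zero vector over GF(2)) can be sketched with a toy example; the matrix H below is an illustrative stand-in, not a matrix from this document:

```python
# Syndrome check over GF(2): decoding succeeds when H . c (mod 2) is the
# zero vector. H is a toy parity-check matrix for illustration only.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def syndrome(H, c):
    # One parity result per row of H, computed modulo 2.
    return [sum(h * b for h, b in zip(row, c)) % 2 for row in H]

def decode_succeeded(H, c):
    return all(s == 0 for s in syndrome(H, c))

codeword = [1, 0, 1, 1, 1, 0]   # satisfies every parity check of H
assert decode_succeeded(H, codeword)

corrupted = codeword[:]
corrupted[0] ^= 1               # a single bit error breaks a parity check
assert not decode_succeeded(H, corrupted)
```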
When it is determined that the hard decision LDPC decoding operation of step S513 is successful according to the determination result of step S515 (S515, yes), a read success is obtained (step S520) and the ECC130 may end the error correction decoding operation. The hard decision read data on which the hard decision LDPC decoding operation is performed in step S513 may now be error corrected data and may be provided to the outside or used in the
When it is determined that the hard decision LDPC decoding operation of step S513 fails according to the determination result of step S515 (S515, no), the ECC 130 may perform the second LDPC decoding step S530.
In step S531, the soft-decision read step, the ECC 130 may read data from the memory cells according to the soft-decision read voltage V_SD.
In step S533, a soft-decision LDPC decoding operation may be performed as the second LDPC decoding operation. A soft-decision LDPC decoding operation may be performed based on the soft-decision read data.
In step S535, it may be determined whether the soft-decision LDPC decoding operation succeeded or failed. That is, in step S535, it may be determined whether the errors of the soft-decision read data on which the soft-decision LDPC decoding operation was performed in step S533 were corrected. For example, the ECC 130 may make this determination by using the parity check matrix, as in step S515.
The process of multiplying the parity check matrix and the hard decision read data during the first LDPC decoding step S510 may be the same as the process of multiplying the parity check matrix and the soft decision read data during the second LDPC decoding step S530.
When it is determined that the soft-decision LDPC decoding of step S533 is successful according to the determination result of step S535 (S535, yes), it may be determined at step S520 that the read operation according to the soft-decision read voltage V_SD of step S531 is successful, and the error correction decoding operation may end. The soft-decision read data on which the soft-decision LDPC decoding operation was performed in step S533 may now be error-corrected data and may be provided to the outside or used internally.
When it is determined that the soft-decision LDPC decoding operation of step S533 fails according to the determination result of step S535 (S535, no), it may be determined that the process of step S531 eventually fails and the error correction decoding operation may be ended at step S540.
Fig. 6A is a Tanner graph showing a Low Density Parity Check (LDPC) decoding operation.
Fig. 6B is a diagram illustrating an LDPC code.
Fig. 6C is a schematic diagram illustrating a syndrome checking process according to the LDPC decoding operation.
Error Correction Codes (ECC) are commonly used in memory systems. Various physical phenomena occurring in the memory device cause noise effects that corrupt the stored information. Error correction coding schemes may be used to protect stored information from generated errors. This is done by encoding the information before it is stored in the memory device. The encoding process transforms the information bit sequence into a codeword by adding redundancy to the information. This redundancy can then be used to recover information from the possibly corrupted codeword through the decoding process.
In an iterative coding scheme, a code is constructed as a concatenation of several simple constituent codes, and the code is decoded based on an iterative decoding algorithm by exchanging information between the decoders of the constituent codes. Generally, such codes may be defined using a bipartite graph or Tanner graph that describes the interconnections between the constituent codes. In this case, the decoding operation can be viewed as iterative message passing over the graph edges.
The iterative code may comprise a Low Density Parity Check (LDPC) code. The LDPC code is a linear binary block code defined by a sparse parity check matrix H.
Referring to fig. 6A, the LDPC code has a parity check matrix in which the number of logic high levels (i.e., '1') in each row and column is very small. The structure of the LDPC code may be defined by a Tanner graph that includes check nodes, variable nodes, and edges connecting them.
The initial value of each of the variable nodes may be determined from the read data (e.g., as an LLR).
The decoding process of the LDPC code can be performed by iterative decoding based on the sum-product algorithm. The decoding method may be provided based on a suboptimal message-passing algorithm such as a "min-sum" algorithm, which is a simplified version of the sum-product algorithm.
Referring to fig. 6B, a Tanner graph of an LDPC code may include: 5 check nodes 610 (C1 to C5) representing the parity check equations of the LDPC code, 10 variable nodes 620 (V1 to V10) representing the code symbols, and edges connecting the check nodes to the variable nodes.
Fig. 6C shows a parity check matrix H corresponding to the Tanner graph. The parity check matrix H is similar to the graphical representation of the parity check equations. The parity check matrix H has the same number of logic high values (i.e., '1') in each column. That is, each column of the parity check matrix H has two 1's corresponding to the connections between each of the variable nodes and the check nodes.
The process of decoding the LDPC code is performed by iterating the process of exchanging messages, generated and updated in each node, between the variable nodes and the check nodes of the Tanner graph.
For example, an LDPC decoding operation on a codeword may include a number of iterations, after an initial variable node update, each iteration including a check node update, a variable node update, and a syndrome check. After the first iteration, the LDPC decoding operation may end when the result of the syndrome check satisfies a predetermined or set condition. When the result of the syndrome check does not satisfy the condition, further iterations may be performed. Additional iterations may also include variable node updates, check node updates, and syndrome checks. The number of iterations may be limited to a maximum iteration count that may be predetermined. When the result of the syndrome check does not satisfy the condition until the number of iterations reaches the maximum number of iterations, it may be determined that the LDPC decoding operation on the codeword has failed.
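The iteration structure above (node update, syndrome check, bounded iteration count) can be sketched as follows. A Gallager-style bit-flip stands in for the full check-node/variable-node message updates, so this is a hedged illustration of the loop, not the min-sum decoder itself:

```python
def ldpc_decode(H, llr, max_iters=20):
    """Sketch of the iteration loop: update, syndrome check, and a
    maximum iteration count. A simple bit-flipping rule replaces the
    message-passing updates for brevity."""
    v = [1 if l < 0 else 0 for l in llr]  # initial hard decision per variable node
    for _ in range(max_iters):
        syndrome = [sum(h * b for h, b in zip(row, v)) % 2 for row in H]
        if not any(syndrome):
            return v, True  # syndrome check passed: decoding succeeded
        # Flip the bit that participates in the most failed parity checks.
        fails = [sum(row[j] for row, s in zip(H, syndrome) if s)
                 for j in range(len(v))]
        v[fails.index(max(fails))] ^= 1
    return v, False  # maximum iteration count reached: decoding failed

H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]
decoded, ok = ldpc_decode(H, [1.0, 2.0, -1.5, -0.5, -2.0, 3.0])
```

Here the initial hard decision contains a single flipped bit, which the loop corrects in one iteration before the syndrome check passes.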
Referring to fig. 6C, the syndrome check identifies whether the product Hv^T of the parity check matrix H and the vector v obtained by the update of the variable nodes is the zero vector.
Fig. 6C shows the syndrome check process. Fig. 6C exemplarily shows a case in which the product result Hv^T is not the zero vector; therefore, fig. 6C shows that the syndrome check does not satisfy the condition and another iteration should be performed.
Fig. 7A is a diagram illustrating a 2-bit soft-decision read operation according to an embodiment of the present disclosure. Fig. 7B is a diagram illustrating a 3-bit soft-decision read operation according to an embodiment of the present disclosure. The soft-decision read operation of fig. 7A and 7B may be performed by the operation of fig. 5.
Referring to FIG. 7A, in the hard decision LDPC decoding step S510 of FIG. 5, when the hard decision read voltage V_HD is applied to the memory cells, the hard decision read data 2-1 may be obtained according to the on or off state of each memory cell.
At the soft-decision LDPC decoding step S530, the ECC 130 may calculate reliability information (e.g., log-likelihood ratios (LLRs)) of the hard-decision read data through a soft-decision read operation that applies a plurality of soft-decision read voltages V_SD1 and V_SD2 to the memory cells, each having a constant voltage difference from the hard-decision read voltage V_HD.
As shown in fig. 7A, in the 2-bit soft-decision read operation, when the first soft-decision read voltage V_SD1 is applied to the memory cells, the first soft-decision read data value 2-2 is '1000', depending on the on-off state of each memory cell. Similarly, according to the second soft-decision read voltage V_SD2, the second soft-decision read data value 2-3 is '1110'.
Also, the ECC 130 may calculate the soft-decision data 2-4 (e.g., LLRs) by performing an XNOR operation on the first and second soft-decision read data values 2-2 and 2-3. The soft-decision data 2-4 may add reliability to the hard-decision data 2-1.
For example, when the soft-decision data 2-4 is '1', the hard-decision data 2-1 may be in the first state (e.g., '1') or the second state (e.g., '0') with a strong likelihood. On the other hand, when the soft-decision data 2-4 is '0', the hard-decision data 2-1 may be in the first state (e.g., '1') or the second state (e.g., '0') with only a weak likelihood.
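The XNOR step that produces the soft-decision data 2-4 from the two soft-decision reads can be sketched as follows. The read values are those of the example above; the resulting '1001' is computed here, not quoted from the text:

```python
def xnor_bits(a, b):
    # Bitwise XNOR of two equal-length bit strings: '1' where the bits
    # agree, signalling a reliable hard-decision bit.
    return ''.join('1' if x == y else '0' for x, y in zip(a, b))

first_soft_read = '1000'   # read with V_SD1 (data value 2-2)
second_soft_read = '1110'  # read with V_SD2 (data value 2-3)
soft_decision = xnor_bits(first_soft_read, second_soft_read)
```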
Referring to FIG. 7B, in the hard decision LDPC decoding step S510 of FIG. 5, when the hard decision read voltage V_HD is applied to the memory cells, the hard decision read data 3-1 may be obtained according to the on or off state of each memory cell.
At the soft-decision LDPC decoding step S530, the ECC 130 may calculate reliability information (e.g., LLRs) of the hard-decision read data through a soft-decision read operation that applies a plurality of soft-decision read voltages V_SD1 to V_SD6 to the memory cells, each having a constant voltage difference from the hard-decision read voltage V_HD.
As shown in fig. 7B, in the 3-bit soft-decision read operation, when the first soft-decision read voltage V_SD1 and the second soft-decision read voltage V_SD2 are applied to the memory cells, the first and second soft-decision read data values may be calculated. The first soft-decision data 3-2 may be calculated as '1001' by performing an exclusive NOR (XNOR) operation on the first and second soft-decision read data values.
When the third soft-decision read voltage V_SD3 to the sixth soft-decision read voltage V_SD6 are applied to the memory cells, the third to sixth soft-decision read data values may be calculated. By performing an XNOR operation on the third to sixth soft-decision read data values, the second soft-decision data 3-3 (e.g., LLR) may be calculated as '10101'. The second soft-decision data 3-3 may assign a weight to the first soft-decision data 3-2.
For example, when the second soft-decision data 3-3 is '1', the first soft-decision data 3-2 may be in the first state (e.g., '1') with a very strong likelihood. On the other hand, when the second soft-decision data 3-3 is '0', the first soft-decision data 3-2 may be in the first state (e.g., '1') with a strong likelihood.
Similarly, when the second soft-decision data 3-3 is '1', the first soft-decision data 3-2 may be in the second state (e.g., '0') with a very weak likelihood. On the other hand, when the second soft-decision data 3-3 is '0', the first soft-decision data 3-2 may be in the second state (e.g., '0') with a weak likelihood. That is, the second soft-decision data 3-3 may add further reliability to the hard-decision data 3-1, similar to that described in fig. 7A.
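In the 3-bit case the XNOR is folded across more than two read values. A minimal sketch follows; the pairwise left-to-right fold is an assumption, since the text does not spell out the combination order:

```python
from functools import reduce

def xnor_fold(reads):
    # Fold a bitwise XNOR across several equal-length soft-decision
    # read values, pairwise from left to right.
    def xnor(a, b):
        return ''.join('1' if x == y else '0' for x, y in zip(a, b))
    return reduce(xnor, reads)

weighted = xnor_fold(['1000', '1110'])  # two reads reduce to the 2-bit case
```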
As shown in fig. 6A to 6C, the
The encoding/decoding may be performed based on unit data, for example, data blocks (data chunks), each having a set or predetermined size. In another embodiment, the unit data on which the encoding/decoding is performed may be a page, in which case the encoding/decoding is performed on a page basis. However, these are only examples; the present invention is not limited thereto.
Fig. 8A illustrates a structure of an
Referring to fig. 8A, the
The
Fig. 8B is a diagram illustrating an encoding method according to an embodiment of the present disclosure, and more particularly, a first encoding operation. In an embodiment, the
The
Referring back to fig. 8A, the
The
Fig. 8C is a flow chart illustrating operation of an encoder, such as
Referring to fig. 8C, the
In step S803, the
In step S805, the
In step S807, the
Fig. 9A to 9C are diagrams illustrating a method for storing a second codeword according to an embodiment of the present disclosure. Specifically, a method of storing the second codeword according to the size of the data block will be described. Referring to fig. 8B, a method for storing the first to third data blocks and the chip-kill block will be described.
In fig. 9A, each of the data blocks may correspond to an ECC block unit. Further, in this embodiment, the ECC block unit is equal to "1/4" of one physical page. However, this is merely an example; the present invention is not limited thereto.
In this case, since the size of each of the first to third data blocks and the chip-kill block is equal to '1/4' of one physical page, the data blocks may be stored in parts of a plurality of pages, respectively. For example, the first data block may be stored in a portion of a first page, the second data block may be stored in a portion of a second page, the third data block may be stored in a portion of a third page, and the chip-kill block may be stored in a portion of a chip-kill page. The chip-kill page may refer to a page that stores only chip-kill blocks. The first page through the third page and the chip-kill page are different pages that exist in the same superblock. As shown in fig. 9A, the result of the XOR operation on the first to third data blocks is equal to the chip-kill block. Further, although not shown in the drawings, each of the data blocks may include parity data generated by the second encoding operation.
Fig. 9B is based on the following assumptions: each of the data blocks is equal to the size of one physical page. However, this is merely an example; the present invention is not limited thereto.
In this case, since the size of each of the first to third data blocks and the chip-kill block corresponds to the size of one physical page, the data blocks can be stored in respective pages. For example, the first data block may be stored in a first page, the second data block may be stored in a second page, the third data block may be stored in a third page, and the chip-kill block may be stored in a chip-kill page. The first page through the third page and the chip-kill page are different pages that exist in the same superblock. As shown in fig. 9B, the result of the XOR operation on the first to third data blocks is equal to the chip-kill block. Further, although not shown in the drawings, each of the data blocks may include parity data generated by the second encoding operation.
Fig. 9C is based on the following assumptions: each of the data blocks corresponds to an ECC block unit, and the ECC block unit is equal to "1/4" of one physical page, similarly to fig. 9A.
However, the plurality of data blocks of fig. 9C may be stored in one page, in a manner different from the method described with reference to fig. 9A. For example, as shown in fig. 9C, the first to third data blocks and the chip-kill block may be stored in one physical page. The result of the XOR operation on the first to third data blocks is equal to the chip-kill block. Further, although not shown in the drawings, each of the data blocks may include parity data generated by the second encoding operation.
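In each of the three layouts the chip-kill block is the byte-wise XOR of the data blocks. The construction can be sketched as (the chunk contents below are toy values, not from the figures):

```python
def make_chipkill_block(chunks):
    # The chip-kill block is the byte-wise XOR of all data chunks;
    # XORing it with the chunks once more therefore yields all zeros.
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

chunks = [b'\x0f\x01', b'\xf0\x02', b'\xff\x04']  # toy first to third data blocks
chipkill = make_chipkill_block(chunks)
```

Because XOR is its own inverse, folding the chip-kill block back into the set of chunks gives the zero block, which is the property the decoder relies on.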
An operation of the
Although not shown in the drawings, the CPU 120 of fig. 4A may load data corresponding to a read request of the host from the
Fig. 10 is a block diagram schematically illustrating the structure of a decoder, such as the
Referring to fig. 10, the
The
In particular, the
When successful in the first sub-decoding operation, the
On the other hand, when failing in the first sub-decoding operation, the
The
When the second sub-decoding operation on both the candidate data block and the chip-kill block is successful, the
When the second sub-decoding operations on both the candidate data block and the chip-kill block fail, the
On the other hand, when the second sub-decoding operations on one or more data blocks fail, the
The
The structure and operation of the
Fig. 11 illustrates a structure of a second decoder, such as
Referring to fig. 11, the
Global
In particular, information of a successful data block (e.g., a component of a locally variable node corresponding to the successful data block) may be stored in the
The
Specifically, the
The
When the first sub-decoding operation on the target data block is successful, the
The operation of the
Fig. 12 is a diagram illustrating a structure of global decoding according to an embodiment.
When the first sub-decoding operation on the target data block chunk1 fails, the
Based on the results of the first and second sub-decoding operations, a local variable node corresponding to each data block may be generated. For example, as shown in fig. 12, each of the first through third data chunks (chunk1 to chunk3) and the chip-kill chunk may include three local check nodes, six local variable nodes, and edges connecting the local check nodes to the local variable nodes. The
The
When the second sub-decoding operation on one or more data blocks fails, the
The
In particular, the
However, when the second sub-decoding operation fails for all data blocks, all components of the global check node may be set to "0". On the other hand, when the second sub-decoding operation is successful for all data blocks, the global decoding may not be performed.
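A component-wise sketch of this determination, assuming the binary XOR construction mentioned later in the text (non-binary variants are also possible):

```python
def global_check_components(successful_nodes, n_components):
    # Each global check node component is the binary XOR of the
    # corresponding local-variable-node components of the blocks that
    # passed the second sub-decoding operation. When no block succeeds,
    # every component defaults to '0', matching the text above.
    out = [0] * n_components
    for node in successful_nodes:
        out = [a ^ b for a, b in zip(out, node)]
    return out
```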
The
In addition, when the existing components of the local variable nodes of the target data block and the failed data block differ from the messages inferred for those local variable nodes through the fourth sub-decoding operation, the
After performing the fourth sub-decoding operation, the
For example, when there are two or more failed data blocks in which the second sub-decoding operation fails, the
When the first sub-decoding operation on the target data block is successful through a set number of repeated operations, the
Fig. 13A to 13E are a flowchart and a diagram showing an operation procedure of a decoder, for example, the
Fig. 13A is a flowchart illustrating a decoding process according to an embodiment of the present disclosure, and fig. 13B to 13E are diagrams illustrating specific examples of the decoding process according to an embodiment of the present disclosure.
Referring to fig. 13A, the
In step S1303, the
In step S1305, the
When the first sub-decoding operation fails (yes at step S1305), the
For example, as shown in fig. 13B, the
Referring back to fig. 13A, when there are one or more data blocks for which the second sub-decoding operation fails (yes at step S1309), the
For example, as shown in fig. 13C, the
Specifically, the
Referring back to fig. 13A, in step S1313, the
The
That is, as shown in fig. 13D, the
However, depending on the type of components of the global check node, the
For example, when the component of the global check node is "0", the
[Equation 1]
LLR_ij = C_atten × ( ∏_{j' ∈ N(i)-{j}} α_{j'i} ) × min_{j' ∈ N(i)-{j}} β_{j'i}
In Equation 1, 'LLR_ij' denotes the LLR message transferred from the i-th global check node to the j-th local variable node of the failed data block.
'C_atten' denotes a scaling factor (attenuation coefficient) for enabling the
'N(i)' represents the set of local variable nodes connected to the i-th global check node. That is, referring to fig. 13D, 'N(i)' denotes the two local variable nodes connected to the i-th global check node.
'j' ∈ N(i) - {j}' denotes the local variable nodes connected to the i-th global check node, excluding the j-th local variable node corresponding to the failed data block. That is, 'j'' indexes the local variable nodes corresponding to the successful data blocks.
'α_{j'i}' denotes the sign of the LLR transferred from the j'-th local variable node corresponding to a successful data block to the i-th global check node. For example, the LLR may have a '+' sign when the corresponding component is '0'. When the component is '1', the LLR may have a '-' sign. However, this is merely an example; the present invention is not limited thereto.
'β_{j'i}' denotes the magnitude, i.e., the reliability, of the LLR transferred from the j'-th local variable node corresponding to a successful data block to the i-th global check node.
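Equation 1 is a normalized min-sum check-node update: the product of the signs α_{j'i} times the minimum magnitude β_{j'i}, scaled by C_atten. A sketch in Python, where the attenuation value 0.75 is an assumed illustration, not a value from the text:

```python
def checknode_message(neighbor_llrs, c_atten=0.75):
    # Min-sum message from the i-th global check node to the j-th local
    # variable node: product of the signs (alpha) times the minimum
    # magnitude (beta) over the connected successful variable nodes,
    # scaled by the attenuation factor C_atten.
    sign = 1
    for llr in neighbor_llrs:
        sign = -sign if llr < 0 else sign
    return c_atten * sign * min(abs(llr) for llr in neighbor_llrs)
```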
Referring to fig. 13B through 13E, the
In particular, the
On the other hand, when the component of the global check node is "1", the
[Equation 2]
LLR_ij = -C_atten × ( ∏_{j' ∈ N(i)-{j}} α_{j'i} ) × min_{j' ∈ N(i)-{j}} β_{j'i}
Equation 2 is identical to Equation 1 except that the sign of the message is inverted, because the component of the global check node is '1'.
Referring back to fig. 13A, in step S1315, the
When the number of times of execution of the global decoding operation has not reached the set number of repetitions (no at step S1315), the number of times of execution of the global decoding operation may be increased by 1 at step S1317, and the
When there is no data block for which the second sub-decoding fails (no at step S1309), the
For example, when there is no failed data block as shown in fig. 13E, the
For another example, as shown in fig. 13E, when it is assumed that the component of the locally variable node of the third data block is updated, the
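The chunk-level form of this inference can be sketched as follows: because the chip-kill block is the XOR of all data blocks, a single failed block can be reconstructed by XORing the chip-kill block with every surviving block (toy one-byte chunks are used for illustration):

```python
def recover_failed_chunk(chipkill, surviving_chunks):
    # XOR the chip-kill block with all surviving chunks to infer the one
    # chunk whose second sub-decoding operation failed.
    out = bytearray(chipkill)
    for chunk in surviving_chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

# chipkill = chunk1 ^ chunk2 ^ chunk3; recover chunk3 from the rest.
chunk1, chunk2, chunk3 = b'\x0f', b'\xf0', b'\x55'
chipkill = bytes(a ^ b ^ c for a, b, c in zip(chunk1, chunk2, chunk3))
recovered = recover_failed_chunk(chipkill, [chunk1, chunk2])
```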
By the above method, the
While various embodiments have been illustrated and described, it will be apparent to those skilled in the art that various changes and modifications can be made without departing from the spirit and scope of the invention as defined in the following claims.
For example, it has been described that the
Furthermore, the manner in which the global check node is generated is not limited to the binary XOR operation that has been illustrated. Alternatively, the global check nodes may be configured by check nodes of a non-binary LDPC code. In this case, the number of types of check node values may be determined according to the size of the Galois field GF(q) defining the corresponding global check node. For example, when using check nodes with a binary LDPC code, the global check nodes can be decoded with two
Accordingly, the scope of the present disclosure is not limited to the above-described embodiments, but may be defined by the appended claims and equivalents thereof.