Mobile data storage

Document No.: 1800824  Publication date: 2021-11-05

Reading note: This document, "Mobile data storage," was designed and created by Amit Berman, Ariel Doubchak, Eli Haim, and Evgeny Blaichman on 2021-04-30. Its main content is as follows: A mobile electronic device, a method of programming data into a memory device, and a method of reading data from a memory device are provided. The mobile electronic device may include a memory device and a memory controller, the memory controller including: an Error Correction Code (ECC) encoder to encode data; a constrained channel encoder configured to encode an output of the ECC encoder based on one or more constraints; a Reinforcement Learning Pulse Programming (RLPP) component configured to identify a programming algorithm for programming data to the memory device; an expectation-maximization (EM) signal processing component configured to receive a noisy multiple word line voltage vector from the memory device and classify individual bits of the vector with log-likelihood ratio (LLR) values; a constrained channel decoder configured to receive a constrained vector from the EM signal processing component and generate an unconstrained vector; and an ECC decoder configured to decode the unconstrained vector. A machine learning interference cancellation component may operate based on, or independent of, input from the EM signal processing component.

1. A mobile electronic device, comprising:

a memory device;

a memory controller including a processor and an internal memory and configured to operate the memory device, wherein the memory controller further includes:

an error correction code encoder configured to encode data for programming to the memory device;

a constrained channel encoder configured to encode an output of the error correction code encoder based on one or more constraints to facilitate programming to the memory device;

a reinforcement learning pulse programming component configured to identify a programming algorithm for programming the data to the memory device;

a constrained channel decoder configured to receive a constrained vector and generate an unconstrained vector; and

an error correction code decoder configured to decode the unconstrained vector.

2. The mobile electronic device of claim 1, wherein:

the error correction code encoder is configured to encode the data using an S-polarity encoding scheme that combines a Reed-Solomon encoding scheme and a polarity encoding scheme.

3. The mobile electronic device of claim 2, wherein:

the error correction code encoder includes a reduced frame size and a reduced redundancy level configured for mobile architectures.

4. The mobile electronic device of claim 1, wherein:

the constrained channel encoder is configured to identify data from a next word line of the memory device prior to encoding an output of the error correction code encoder for a current word line of the memory device.

5. The mobile electronic device of claim 1, wherein:

the reinforcement learning pulse programming component includes a word line agent, a level agent, and a block agent.

6. The mobile electronic device of claim 1, further comprising:

an expectation-maximization signal processing component configured to receive a noisy multiple word line voltage vector from the memory device and classify individual bits of the noisy multiple word line voltage vector using log-likelihood ratios.

7. The mobile electronic device of claim 6, wherein:

the expectation-maximization signal processing component is configured based on a reduced sample size for a mobile architecture.

8. The mobile electronic device of claim 1, further comprising:

a machine learning interference cancellation component configured to receive a noisy word line vector from the error correction code decoder and provide a denoised word line vector to the error correction code decoder.

9. The mobile electronic device of claim 8, wherein:

the machine learning interference cancellation component operates based on input from an expectation-maximization signal processing component.

10. The mobile electronic device of claim 1, further comprising:

a neural network decoder configured to receive a word line data vector and a word line voltage vector and generate a recovered data vector.

11. The mobile electronic device of claim 10, wherein:

the neural network decoder includes a reduced number of nodes, wherein the reduced number of nodes are selected for a mobile architecture.

12. The mobile electronic device of claim 1, wherein:

each memory cell of the memory device is a 5-bit or 6-bit NAND flash memory cell.

13. A method of programming data to a memory device, the method comprising:

receiving a data block;

encoding the data block based on an error correction code encoding scheme;

encoding the data block based on a constrained coding scheme; and

programming the encoded data block to the memory device using reinforcement learning pulse programming.

14. The method of claim 13, wherein:

the error correction code encoding scheme includes an S-polarity encoding scheme that combines a Reed-Solomon encoding scheme and a polarity encoding scheme.

15. The method of claim 13, further comprising:

identifying data from a next word line of the memory device, wherein the constrained coding scheme is based on the identified data from the next word line.

16. A method of reading data from a memory device, the method comprising:

reading a block of data from a memory device;

processing the data block using expectation-maximization signal processing to classify individual bits of the data block with log-likelihood ratios;

decoding the data block based on a constrained coding scheme; and

decoding the data block based on an error correction code encoding scheme.

17. The method of claim 16, wherein:

the error correction code encoding scheme includes an S-polarity encoding scheme that combines a Reed-Solomon encoding scheme and a polarity encoding scheme.

18. The method of claim 16, wherein:

decoding of the data block based on the error correction code encoding scheme is performed based at least in part on the log-likelihood ratio values.

19. The method of claim 16, further comprising:

determining that decoding based on the error correction code encoding scheme is insufficient; and

performing machine learning interference successive cancellation based on the determination.

20. The method of claim 16, further comprising:

determining that decoding based on the error correction code encoding scheme is insufficient; and

decoding the data block using a neural network decoder.

Technical Field

The following generally relates to data storage and, more particularly, to data storage for mobile devices.

Background

Memory devices are common electronic components for storing data. NAND flash memory devices allow several bits of data to be stored in each memory cell, providing improvements in manufacturing cost and performance. Memory cells that store multiple bits of data may be referred to as multi-level memory cells. Multi-level memory cells divide the threshold voltage range of a memory cell into several voltage states, and the data values written to the memory cell are extracted from the cell's voltage level.

However, storing multiple bits per memory cell may reduce the dynamic voltage range of the various voltage states, making the memory cell more susceptible to noise. Compensating for this noise may require increased computing power, which may hinder performance in a mobile device. Accordingly, there is a need in the art for a reliable, low power multi-cell memory system for use in mobile electronic devices.
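The trade-off above can be made concrete with a small numeric sketch. This is an illustrative model only, not the disclosed device: the voltage window, uniform state spacing, and direct state-index mapping are all assumptions.

```python
# Illustrative model of a multi-level NAND cell: the threshold-voltage
# window is divided into 2**n uniform states, and a read maps a sensed
# voltage back to a state index. Real devices use non-uniform state
# placement and Gray coding; this sketch assumes neither.

V_MIN, V_MAX = 0.0, 6.4  # assumed threshold-voltage window (volts)

def state_to_voltage(state: int, bits: int) -> float:
    """Center voltage of a programmed state."""
    levels = 2 ** bits
    width = (V_MAX - V_MIN) / levels
    return V_MIN + (state + 0.5) * width

def voltage_to_state(v: float, bits: int) -> int:
    """Quantize a sensed voltage back to a state index."""
    levels = 2 ** bits
    width = (V_MAX - V_MIN) / levels
    return min(levels - 1, max(0, int((v - V_MIN) / width)))

# With 5 bits per cell there are 32 states, each only 0.2 V wide --
# which is why added noise easily pushes a cell into a neighboring state.
v = state_to_voltage(17, bits=5)
assert voltage_to_state(v, bits=5) == 17
assert voltage_to_state(v + 0.2, bits=5) == 18  # 0.2 V of noise flips the state
```

The shrinking per-state width as the bit count grows is the "reduced dynamic voltage range" the background refers to.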

Disclosure of Invention

A mobile electronic device for data storage of a mobile device is described. Embodiments of a mobile electronic device may include: a memory device; a memory controller including a processor and an internal memory and configured to operate a memory device, the memory controller comprising: an Error Correction Code (ECC) encoder configured to encode data for programming to a memory device; a constrained channel encoder configured to encode an output of the ECC encoder based on one or more constraints to facilitate programming to the memory device; a Reinforcement Learning Pulse Programming (RLPP) component configured to identify a programming algorithm for programming data to a memory device; a constrained channel decoder configured to receive a constrained vector and generate an unconstrained vector; and an ECC decoder configured to decode the unconstrained vector. Some examples include an expectation-maximization (EM) signal processing component configured to receive a noisy multiple word line voltage vector from a memory device and classify individual bits of the vector with log-likelihood ratio (LLR) values.

A method of programming data to a memory device is described. Embodiments of the method may receive a data block, encode the data block based on an ECC coding scheme, encode the data block based on a constrained coding scheme, and program the encoded data block to a memory device using RLPP.
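The three write-path steps can be sketched as a simple composition. The stage implementations below are trivial stand-ins (a XOR parity byte and a forbidden-pattern substitution) chosen only to show how the stages chain together; the patent's Reed-Solomon/polarity ECC, constraint scheme, and RLPP are not reproduced here.

```python
# Sketch of the write path: ECC encode, then constrained encode, then
# program. Every stage body here is an assumed stand-in for illustration.

def ecc_encode(block: bytes) -> bytes:
    parity = 0
    for b in block:
        parity ^= b
    return block + bytes([parity])  # stand-in for Reed-Solomon parity

def constrained_encode(block: bytes) -> bytes:
    # Stand-in constraint: avoid the all-ones byte, which could model a
    # worst-case inter-cell interference pattern.
    return bytes(0xFE if b == 0xFF else b for b in block)

def program(block: bytes, memory: list) -> None:
    memory.append(block)  # stand-in for RLPP pulse programming

memory = []
program(constrained_encode(ecc_encode(b"\x01\x02\x03")), memory)
assert memory[0] == b"\x01\x02\x03\x00"  # 0x01 ^ 0x02 ^ 0x03 == 0x00
```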

A method of reading data from a memory device is described. Embodiments of the method may read a data block from a memory device, process the data block using EM signal processing to classify individual bits of the data block with LLR values, decode the data block based on a constrained coding scheme, and decode the data block based on an ECC coding scheme.
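The EM/LLR read step can be sketched in one dimension: fit a two-component Gaussian mixture to the sensed voltages of one bit, then score each sample with a log-likelihood ratio. The mixture model, fixed variance, and equal priors are assumptions made for illustration, not the disclosed algorithm.

```python
import math

# Fit two Gaussian component means with a few EM iterations, then
# classify each sample with LLR = log P(v | bit=0) - log P(v | bit=1).
# Positive LLR -> bit 0; the magnitude is the soft confidence an ECC
# decoder can consume.

def em_fit(samples, mu0, mu1, sigma=0.3, iters=20):
    for _ in range(iters):
        # E-step: responsibility of component 0 for each sample
        r = []
        for v in samples:
            p0 = math.exp(-((v - mu0) ** 2) / (2 * sigma ** 2))
            p1 = math.exp(-((v - mu1) ** 2) / (2 * sigma ** 2))
            r.append(p0 / (p0 + p1))
        # M-step: re-estimate the component means
        mu0 = sum(ri * v for ri, v in zip(r, samples)) / sum(r)
        mu1 = sum((1 - ri) * v for ri, v in zip(r, samples)) / (len(r) - sum(r))
    return mu0, mu1

def llr(v, mu0, mu1, sigma=0.3):
    return ((v - mu1) ** 2 - (v - mu0) ** 2) / (2 * sigma ** 2)

samples = [0.9, 1.1, 1.0, 2.9, 3.1, 3.0]  # noisy reads around 1 V and 3 V
mu0, mu1 = em_fit(samples, mu0=0.5, mu1=3.5)
assert abs(mu0 - 1.0) < 0.1 and abs(mu1 - 3.0) < 0.1
assert llr(0.95, mu0, mu1) > 0   # confidently bit 0
assert llr(3.05, mu0, mu1) < 0   # confidently bit 1
```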

Drawings

Features of the inventive concept will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings.

FIG. 1 is a block diagram illustrating an implementation of a data processing system including a memory system according to an exemplary embodiment of the present inventive concept.

Fig. 2 is a block diagram illustrating the memory system of fig. 1 according to an exemplary embodiment of the inventive concept.

Fig. 3 is a detailed block diagram of the nonvolatile memory device of fig. 1 according to an exemplary embodiment of the inventive concept.

Fig. 4 is a block diagram of the memory cell array of fig. 2 according to an exemplary embodiment of the inventive concept.

Fig. 5 is a circuit diagram of a memory block of the memory cell array of fig. 4 according to an exemplary embodiment of the inventive concept.

Fig. 6A illustrates an example of a memory controller according to aspects of the present disclosure.

Fig. 6B illustrates another example of a memory controller according to aspects of the present disclosure.

Fig. 7 illustrates an example of an Error Correction Code (ECC) encoding scheme in accordance with aspects of the present disclosure.

FIG. 8 illustrates a hierarchical reinforcement learning scheme in accordance with an embodiment of the present disclosure.

Fig. 9 is a diagram illustrating a structure of a neural network for noise cancellation according to aspects of the present disclosure.

Fig. 10 illustrates an example of a neural network decoder, according to aspects of the present disclosure.

FIG. 11 shows an example of a process of programming data to a memory device, according to aspects of the present disclosure.

FIG. 12 shows an example of a process of reading data from a memory device, according to aspects of the present disclosure.

Fig. 13 illustrates an example of a process of performing interference cancellation according to aspects of the present disclosure.

Fig. 14 illustrates an example of a process of performing neural network decoding, in accordance with aspects of the present disclosure.

Detailed Description

The present disclosure relates to systems and methods for programming and reading data from a memory device. Certain embodiments of the present disclosure are particularly directed to NAND flash memory devices capable of storing 5 or 6 bits of data in each memory cell.

NAND programming is a complex process based on applying voltages to memory cells. However, the cell voltage may be affected by variables such as the current voltage level, pulse power, and inter-cell interference. Cell voltages may also be affected by inhibited-cell disturb, word line (WL) coupling, and cell retention. In addition, the results of writing to a NAND device are stochastic. For example, the data may be noisy, leading to observability problems.
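The stochastic nature of programming can be illustrated with a toy pulse loop: each pulse raises the cell voltage by a nominal step plus noise, so the final voltage is random even for a fixed target. The step size and noise level are assumed values, not device data.

```python
import random

# Minimal illustration of why a NAND write is treated as stochastic:
# incremental pulses are applied until the cell crosses the target
# voltage, but each pulse lands with random error, so the stopping
# voltage (and hence the read-back state) varies from write to write.

def program_cell(target_v, step=0.2, noise=0.05, rng=None):
    rng = rng or random.Random(0)  # seeded here only for reproducibility
    v = 0.0
    while v < target_v:            # pulse until the target is crossed
        v += step + rng.gauss(0.0, noise)
    return v

v = program_cell(3.0)
assert v >= 3.0          # the loop guarantees the target is crossed...
assert v < 4.0           # ...but the overshoot amount is random
```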

Accordingly, the present disclosure describes systems and methods for reliably programming and reading data from 5 or 6 bit memory devices. Particular embodiments relate to memory devices designed for mobile architectures.

Exemplary embodiments of the inventive concept will be described more fully hereinafter with reference to the accompanying drawings. Like reference numerals may refer to like elements throughout the drawings.

It will be understood that the terms "first," "second," "third," and the like, are used herein to distinguish one element from another, and are not limited by these terms. Thus, a "first" element in an exemplary embodiment may be described as a "second" element in other exemplary embodiments.

It should be understood that the description of features or aspects within various exemplary embodiments should generally be understood as applicable to other similar features or aspects in other exemplary embodiments, unless the context clearly dictates otherwise.

As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.

Herein, when one value is described as being about equal to, or substantially the same as or equal to, another value, it will be understood that the values are equal to each other within measurement error, or, if measurably unequal, are close enough to be functionally equal to each other, as one of ordinary skill in the art would understand. For example, the term "about" as used herein includes the stated value and means within an acceptable range of deviation of the specified value, as determined by one of ordinary skill in the art in view of the measurement in question and the error associated with measurement of the specified quantity (i.e., the limitations of the measurement system). For example, "about" can mean within one or more standard deviations, as understood by one of ordinary skill in the art. Further, it will be understood that although a parameter may be described herein as having "about" a particular value, according to an exemplary embodiment, the parameter may be exactly the particular value or approximately the particular value within measurement error, as will be understood by one of ordinary skill in the art.

Exemplary memory System

FIG. 1 is a block diagram illustrating an implementation of a data processing system including a memory system according to an exemplary embodiment of the present inventive concept.

Referring to FIG. 1, data processing system 10 may include a host 100 and a memory system 200. The memory system 200 shown in FIG. 1 may be used in a variety of systems that include data processing functionality. Such systems may be various devices including, for example, mobile devices (e.g., smart phones or tablet computers). However, the devices are not limited thereto.

Memory system 200 may include various types of memory devices. Herein, exemplary embodiments of the inventive concept will be described as including a memory device as a nonvolatile memory. However, the exemplary embodiments are not limited thereto. For example, the memory system 200 may include memory devices that are volatile memory.

According to an example embodiment, the memory system 200 may include non-volatile memory devices such as, for example, Read Only Memory (ROM), magnetic disks, optical disks, flash memory, and so forth. The flash memory may be a memory that stores data according to a change in a threshold voltage of a Metal Oxide Semiconductor Field Effect Transistor (MOSFET), and may include, for example, NAND and NOR flash memories. The memory system 200 may be implemented using a memory card including a non-volatile memory device, such as an embedded multimedia card (eMMC), a Secure Digital (SD) card, a micro SD card, or universal flash storage (UFS), or the memory system 200 may be implemented using, for example, an SSD including a non-volatile memory device. Herein, the configuration and operation of the memory system 200 will be described assuming that the memory system 200 is a nonvolatile memory system. However, the memory system 200 is not limited thereto. The host 100 may include a system on chip (SoC) Application Processor (AP) installed on, for example, a mobile device, or a Central Processing Unit (CPU) included in a computer system.

As described above, host 100 may include AP 110. The AP 110 may include various Intellectual Property (IP) blocks. For example, the AP 110 may include a memory device driver 111 that controls the memory system 200. Host 100 may communicate with memory system 200 to send commands related to memory operations and receive acknowledgement commands in response to the sent commands. Host 100 may also communicate with memory system 200 regarding tables of information related to memory operations.

Memory system 200 may include, for example, a memory controller 210 and a memory device 220. The memory controller 210 may receive a command related to a memory operation from the host 100, generate an internal command and an internal clock signal using the received command, and provide the internal command and the internal clock signal to the memory device 220. The memory device 220 may store write data in the memory cell array in response to an internal command, or may provide read data to the memory controller 210 in response to an internal command.

Memory device 220 includes an array of memory cells that retain data stored therein even when memory device 220 is not powered on. The memory cell array may include, for example, NAND or NOR flash memory, Magnetoresistive Random Access Memory (MRAM), Resistive Random Access Memory (RRAM), Ferroelectric Random Access Memory (FRAM), or Phase Change Memory (PCM) as memory cells. For example, when the memory cell array includes a NAND flash memory, the memory cell array may include a plurality of blocks and a plurality of pages. Data can be programmed and read in units of pages, and data can be erased in units of blocks. An example of a memory block included in the memory cell array is shown in fig. 4.
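The page/block asymmetry just described can be sketched as a toy model: data is programmed one page at a time, but erased only a whole block at a time. Sizes and the in-place-overwrite rule are illustrative simplifications, not the parameters of any real NAND part.

```python
# Toy model of NAND page/block organization: program per page, erase
# per block, and no in-place overwrite of a programmed page.

PAGES_PER_BLOCK = 4

class Block:
    def __init__(self):
        self.pages = [None] * PAGES_PER_BLOCK  # None = erased

    def program(self, page_no, data):
        # Cells can only be written when erased; rewriting a page in
        # place requires erasing the whole block first.
        if self.pages[page_no] is not None:
            raise ValueError("page not erased; erase the block first")
        self.pages[page_no] = data

    def erase(self):
        self.pages = [None] * PAGES_PER_BLOCK

blk = Block()
blk.program(0, b"hello")
try:
    blk.program(0, b"world")   # in-place overwrite is rejected
except ValueError:
    pass
blk.erase()                    # erase works only at block granularity
blk.program(0, b"world")
assert blk.pages[0] == b"world"
```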

Fig. 2 is a block diagram illustrating the memory system 200 of fig. 1 according to an exemplary embodiment of the inventive concept.

Referring to fig. 2, a memory system 200 includes a memory device 220 and a memory controller 210. Memory controller 210 may also be referred to herein as a controller circuit. The memory device 220 may perform a write operation, a read operation, or an erase operation under the control of the memory controller 210.

Memory controller 210 may control memory device 220 according to a request received from host 100 or an internally specified schedule. The memory controller 210 may include a controller core 211, an internal memory 214, a host interface block 215, and a memory interface block 216. The memory controller 210 may further include a device information storage 217 configured to provide the first device information DI1 to the host interface block 215 and the second device information DI2 to the controller core 211.

The controller core 211 may include a memory control core 212 and a machine learning core 213, and each of these cores may be implemented by one or more processors. Memory control core 212 may control and access memory device 220 according to requests received from host 100 or an internally specified schedule. The memory control core 212 may manage and execute various metadata and code for managing or operating the memory system 200.

The machine learning core 213 may be used to perform training and inference for neural networks designed to perform noise cancellation on the memory device 220, as described in more detail below.

The internal memory 214 may be used, for example, as a system memory used by the controller core 211, a cache memory that stores data of the memory device 220, or a buffer memory that temporarily stores data between the host 100 and the memory device 220. The internal memory 214 may store a mapping table MT indicating the relationship between logical addresses assigned to the memory system 200 and physical addresses of the memory devices 220. The internal memory 214 may include, for example, DRAM or SRAM.

In an exemplary embodiment, the neural network (e.g., the neural network described with reference to fig. 9) may be included in a computer program stored in the internal memory 214 of the memory controller 210 or in the memory device 220. The computer program including the neural network may be executed by the machine learning core 213 to denoise data stored in the memory device 220. Thus, according to an example embodiment, the memory system 200 may denoise data stored in the memory device 220 during normal read operations of the memory device 220. That is, after manufacturing of the memory system 200 is complete, during normal operation of the memory system 200, and particularly during a normal read operation that reads data from the memory device 220, the data read from the memory device 220 may be denoised using a neural network stored and executed locally in the memory system 200, and the denoised data may be output.
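Where the denoising step slots into the read path can be sketched with a single fixed-weight layer: a one-dimensional smoothing of each word line voltage against its two neighbors. This is only a stand-in for the trained neural network executed by the machine learning core; the kernel weights are hand-set assumptions, not learned parameters.

```python
# Stand-in "denoiser": one fixed 1-D smoothing kernel applied across a
# word line voltage vector, with edge samples clamped to themselves.

KERNEL = (0.25, 0.5, 0.25)  # assumed smoothing weights, not learned

def denoise(voltages):
    out = []
    for i in range(len(voltages)):
        left = voltages[max(i - 1, 0)]
        right = voltages[min(i + 1, len(voltages) - 1)]
        out.append(KERNEL[0] * left + KERNEL[1] * voltages[i] + KERNEL[2] * right)
    return out

noisy = [1.0, 1.4, 1.0, 1.0]          # one sample disturbed upward
clean = denoise(noisy)
assert clean[1] < noisy[1]            # the spike is pulled toward its neighbors
assert abs(clean[3] - 1.0) < 1e-9     # flat regions pass through unchanged
```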

The host interface block 215 may include components (e.g., physical blocks) for communicating with the host 100. The memory interface block 216 may include components (e.g., physical blocks) for communicating with the memory device 220.

Next, the operation of the memory system 200 over time will be described. When power is supplied to the memory system 200, the memory system 200 may perform initialization with the host 100.

The host interface block 215 may provide the first request REQ1 received from the host 100 to the memory control core 212. The first request REQ1 may include a command (e.g., a read command or a write command) and a logical address. The memory control core 212 may convert the first request REQ1 into a second request REQ2 suitable for the memory device 220.

For example, the memory control core 212 may convert the format of the command. The memory control core 212 may refer to the mapping table MT stored in the internal memory 214 to obtain the address information AI. The memory control core 212 may use the address information AI to translate logical addresses into physical addresses for the memory devices 220. The memory control core 212 may provide a second request REQ2 appropriate for the memory device 220 to the memory interface block 216.
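The translation step above can be sketched as a table lookup: the logical address carried by REQ1 indexes the mapping table MT to yield the physical address used in REQ2. The table contents and error handling below are invented for illustration.

```python
# Sketch of logical-to-physical address translation via the mapping
# table MT held in internal memory. Addresses are made-up examples.

mapping_table = {0x10: 0x200, 0x11: 0x201, 0x12: 0x3F0}

def translate(logical_addr: int) -> int:
    try:
        return mapping_table[logical_addr]
    except KeyError:
        raise ValueError(f"logical address {logical_addr:#x} is unmapped")

assert translate(0x11) == 0x201
```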

The memory interface block 216 may register the second request REQ2 from the memory control core 212 at a queue. The memory interface block 216 may send the first registered request at the queue to the memory device 220 as a third request REQ3.

When the first request REQ1 is a write request, the host interface block 215 may write data received from the host 100 to the internal memory 214. When the third request REQ3 is a write request, the memory interface block 216 may send data stored in the internal memory 214 to the memory device 220.

When the data is completely written, the memory device 220 may send a third response RESP3 to the memory interface block 216. In response to the third response RESP3, the memory interface block 216 can provide a second response RESP2 to the memory control core 212 indicating that the data was completely written.

After the data is stored in the internal memory 214 or after receiving the second response RESP2, the memory control core 212 may send a first response RESP1 indicating the request completion to the host 100 through the host interface block 215.

When the first request REQ1 is a read request, the read request may be sent to the memory device 220 through the second request REQ2 and the third request REQ3. The memory interface block 216 may store data received from the memory device 220 in the internal memory 214. When the data is completely sent, the memory device 220 may send a third response RESP3 to the memory interface block 216.

Upon receiving the third response RESP3, the memory interface block 216 may provide a second response RESP2 to the memory control core 212 indicating that the data is fully stored. Upon receiving the second response RESP2, the memory control core 212 may send a first response RESP1 to the host 100 through the host interface block 215.

The host interface block 215 may send data stored in the internal memory 214 to the host 100. In an exemplary embodiment, in the case where data corresponding to the first request REQ1 is stored in the internal memory 214, the transmission of the second request REQ2 and the third request REQ3 may be omitted.
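The short-circuit described above can be sketched directly: if the requested data is already in the internal memory, the REQ2/REQ3 traffic to the memory device is skipped and the cached copy is returned. The cache and device structures below are invented for illustration.

```python
# Sketch of the cached read path: a hit in internal memory avoids any
# request to the memory device; a miss forwards REQ3 and caches the
# result for later requests.

def handle_read(addr, internal_memory: dict, device: dict, log: list):
    if addr in internal_memory:          # hit: no device traffic
        return internal_memory[addr]
    log.append(("REQ3", addr))           # miss: forward to the device
    data = device[addr]
    internal_memory[addr] = data         # keep a copy for later requests
    return data

device = {0x200: b"abc"}
cache, log = {}, []
assert handle_read(0x200, cache, device, log) == b"abc"
assert handle_read(0x200, cache, device, log) == b"abc"
assert log == [("REQ3", 0x200)]          # second read never reached the device
```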

Memory device 220 may also send first serial peripheral interface information SPI1 to memory interface block 216. The memory interface block 216 may send the second serial peripheral interface information SPI2 to the controller core 211.

Fig. 3 is a detailed block diagram of the nonvolatile memory device 220 of fig. 1 according to an exemplary embodiment of the inventive concept. Referring to fig. 3, the memory device 220 may include, for example, a memory cell array 221, a control logic 222, a voltage generation unit 223, a row decoder 224, and a page buffer 225.

The memory cell array 221 may be connected to one or more string selection lines SSL, a plurality of word lines WL, one or more ground selection lines GSL, and a plurality of bit lines BL. The memory cell array 221 may include a plurality of memory cells disposed at intersections between a plurality of word lines WL and a plurality of bit lines BL.

Control logic 222 may receive commands CMD (e.g., internal commands) and addresses ADD from memory controller 210 and control signals CTRL from memory controller 210 for controlling various functional blocks within memory device 220. The control logic 222 may output various control signals for writing data to the memory cell array 221 or reading data from the memory cell array 221 based on the command CMD, the address ADD, and the control signal CTRL. In this manner, control logic 222 may control the overall operation of memory device 220.

Various control signals output by the control logic 222 may be provided to the voltage generation unit 223, the row decoder 224, and the page buffer 225. For example, control logic 222 may provide voltage control signals CTRL_vol to voltage generation unit 223, row addresses X-ADD to row decoder 224, and column addresses Y-ADD to page buffer 225.

The voltage generation unit 223 may generate various voltages for performing a program operation, a read operation, and an erase operation on the memory cell array 221 based on the voltage control signal CTRL_vol. For example, the voltage generating unit 223 may generate a first driving voltage VWL for driving the plurality of word lines WL, a second driving voltage VSSL for driving the plurality of string selection lines SSL, and a third driving voltage VGSL for driving the plurality of ground selection lines GSL. In these cases, the first driving voltage VWL may be a program voltage (e.g., a write voltage), a read voltage, an erase voltage, a bypass voltage (pass voltage), or a program verify voltage. In addition, the second driving voltage VSSL may be a string selection voltage (e.g., an on voltage or an off voltage). In addition, the third driving voltage VGSL may be a ground selection voltage (e.g., an on voltage or an off voltage).

The row decoder 224 may be connected to the memory cell array 221 through a plurality of word lines WL, and may activate a portion of the plurality of word lines WL in response to a row address X-ADD received from the control logic 222. For example, in a read operation, row decoder 224 may apply a read voltage to a selected word line and a bypass voltage to unselected word lines.

In a programming operation, row decoder 224 may apply a program voltage to a selected word line and a bypass voltage to unselected word lines. In an example embodiment, the row decoder 224 may apply a program voltage to the selected word line and the additional selected word lines in at least one of a plurality of program cycles.
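The row-decoder behavior just described reduces to a simple selection rule: the selected word line receives the program (or read) voltage, and every other word line receives the bypass voltage. The voltage values below are illustrative assumptions, not device specifications.

```python
# Sketch of row-decoder word line biasing during a program operation.

V_PROGRAM, V_PASS = 18.0, 9.0   # assumed voltages in volts

def drive_word_lines(num_word_lines: int, selected: int) -> list:
    return [V_PROGRAM if wl == selected else V_PASS
            for wl in range(num_word_lines)]

voltages = drive_word_lines(num_word_lines=6, selected=3)
assert voltages[3] == V_PROGRAM
assert all(v == V_PASS for i, v in enumerate(voltages) if i != 3)
```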

The page buffer 225 may be connected to the memory cell array 221 through a plurality of bit lines BL. For example, in a read operation, the page buffer 225 may operate as a sense amplifier that outputs data stored in the memory cell array 221. Alternatively, in a program operation, the page buffer 225 may operate as a write driver that writes desired data to the memory cell array 221.

Fig. 4 and 5 illustrate an example of implementing the memory system 200 using a three-dimensional flash memory. Three-dimensional flash memory may include three-dimensional (e.g., vertical) NAND (e.g., VNAND) memory cells. An implementation of the memory cell array 221 including three-dimensional memory cells is described below. The individual memory cells described below may be NAND memory cells.

Fig. 4 is a block diagram of the memory cell array 221 of fig. 2 according to an exemplary embodiment of the inventive concept.

Referring to fig. 4, the memory cell array 221 according to an exemplary embodiment includes a plurality of memory blocks BLK1 through BLKz. Each of the memory blocks BLK1 through BLKz has a three-dimensional structure (e.g., a vertical structure). For example, each of the memory blocks BLK1 through BLKz may include structures extending in the first direction through the third direction. For example, each of the memory blocks BLK1 through BLKz may include a plurality of NAND strings extending in the second direction. For example, multiple NAND strings may be provided in the first to third directions.

Each NAND string is connected to a bit line BL, a string select line SSL, a ground select line GSL, a word line WL, and a common source line CSL. That is, each of the memory blocks BLK1 through BLKz may be connected to a plurality of bit lines BL, a plurality of string selection lines SSL, a plurality of ground selection lines GSL, a plurality of word lines WL, and a common source line CSL. The memory blocks BLK1 to BLKz will be described in more detail below with reference to fig. 5.

Fig. 5 is a circuit diagram of a memory block BLKi according to an exemplary embodiment of the inventive concept. Fig. 5 illustrates an example of one of the memory blocks BLK1 through BLKz in the memory cell array 221 of fig. 4.

The memory block BLKi may include a plurality of cell strings CS11 through CS41 and CS12 through CS42. The plurality of cell strings CS11 to CS41 and CS12 to CS42 may be arranged in column and row directions to form columns and rows. Each of the cell strings CS11 through CS41 and CS12 through CS42 may include a ground selection transistor GST, memory cells MC1 through MC6, and a string selection transistor SST. The ground selection transistor GST, the memory cells MC1 to MC6, and the string selection transistor SST included in each of the cell strings CS11 to CS41 and CS12 to CS42 may be stacked in a height direction substantially perpendicular to the substrate.

Columns of the plurality of cell strings CS11 through CS41 and CS12 through CS42 may be connected to different string selection lines SSL1 through SSL4, respectively. For example, the string selection transistors SST of the cell strings CS11 and CS12 may be commonly connected to a string selection line SSL1. The string selection transistors SST of the cell strings CS21 and CS22 may be commonly connected to a string selection line SSL2. The string selection transistors SST of the cell strings CS31 and CS32 may be commonly connected to a string selection line SSL3. The string selection transistors SST of the cell strings CS41 and CS42 may be commonly connected to a string selection line SSL4.

The rows of the plurality of cell strings CS11 to CS41 and CS12 to CS42 may be connected to different bit lines BL1 and BL2, respectively. For example, the string selection transistors SST of the cell strings CS11 to CS41 may be commonly connected to the bit line BL 1. The string selection transistors SST of the cell strings CS12 to CS42 may be commonly connected to the bit line BL 2.

Columns of the plurality of cell strings CS11 to CS41 and CS12 to CS42 may be connected to different ground selection lines GSL1 to GSL4, respectively. For example, the ground selection transistors GST of the cell strings CS11 and CS12 may be commonly connected to the ground selection line GSL 1. The ground selection transistors GST of the cell strings CS21 and CS22 may be commonly connected to the ground selection line GSL 2. The ground selection transistors GST of the cell strings CS31 and CS32 may be commonly connected to the ground selection line GSL 3. The ground selection transistors GST of the cell strings CS41 and CS42 may be commonly connected to the ground selection line GSL 4.

Memory cells disposed at the same height from the substrate (or ground selection transistor GST) may be commonly connected to a single word line, and memory cells disposed at different heights from the substrate may be respectively connected to different word lines WL1 to WL 6. For example, memory cell MC1 may be commonly connected to word line WL 1. Memory cell MC2 may be connected in common to word line WL 2. Memory cell MC3 may be connected in common to word line WL 3. Memory cell MC4 may be connected in common to word line WL 4. Memory cell MC5 may be connected in common to word line WL 5. Memory cell MC6 may be connected in common to word line WL 6. The ground selection transistors GST of the cell strings CS11 through CS41 and CS12 through CS42 may be commonly connected to a common source line CSL.

Memory controller architecture

Fig. 6A illustrates an example of a memory controller 600 according to aspects of the present disclosure. The illustrated example includes a memory controller 600 and a memory device (e.g., NAND flash memory) 660. The memory controller 600 can receive a block of data, program the data to the memory device 660, and read the block of data back from the memory device 660.

Memory controller 600 may be an example of memory controller 210 described with reference to figs. 1-3. According to some embodiments, memory controller 600 may include a processor and internal memory and be configured to operate memory device 660. In some examples, the memory controller 600 has a simplified architecture configured for reduced power consumption, suitable for mobile architectures. In some examples, the individual cells of memory device 660 are 5-bit or 6-bit NAND flash memory cells.

The memory controller 600 may also include an Error Correction Code (ECC) encoder 605, a constrained channel encoder 610, a Reinforcement Learning Pulse Programming (RLPP) component 615, an Expectation Maximization (EM) signal processing component 635, a constrained channel decoder 640, an ECC decoder 645, a machine learning interference cancellation component 650, and a neural network decoder 655.

ECC encoder 605 may be configured to encode data for programming to memory device 660. ECC encoder 605 may receive a block-sized matrix of data as input and output an encoded matrix. In some examples, ECC encoder 605 is configured to encode data using an S-polar encoding scheme that combines a Reed-Solomon (RS) encoding scheme and a polar encoding scheme. In some examples, ECC encoder 605 includes a reduced frame size and a reduced redundancy level configured for mobile architectures. ECC encoder 605 may be an example of, or include aspects of, a corresponding one or more elements described with reference to fig. 10.

Constrained channel encoder 610 may be configured to encode the output of ECC encoder 605 based on one or more constraints to facilitate programming to memory device 660. In some examples, the constrained channel encoder 610 is configured to identify data from the next word line of memory device 660 before encoding the output of ECC encoder 605 for the current word line of memory device 660. The constrained channel encoder 610 may receive the encoded matrix and the next WL read (before programming) as inputs, and output a constrained vector.

The constrained channel encoder 610 may encode the data block based on a constrained coding scheme. The constrained channel encoder 610 may also identify data from the next word line of the memory device 660, where the constrained encoding scheme is based on the data from the next word line.

A Reinforcement Learning Pulse Programming (RLPP) component 615 can be configured to identify a programming algorithm for programming data to the memory device 660 and to program the encoded blocks of data to the memory device 660 using RLPP. In some examples, the RLPP component 615 may include a block agent 620, a wordline agent 625, and a level agent 630.

The block agent 620 may receive the WL voltage vector as input and output the block policy (i.e., for the target WL to be programmed). The word line agent 625 may receive the constraint vector and the block policy and output the word line policy (i.e., for the target level to be programmed). Level agent 630 may receive the word line policy and output a level policy (including programming parameters such as an inhibit vector and pulse size). Level agent 630 may also provide an error statistics vector to word line agent 625.

Memory device 660 may receive a level policy and may store a voltage level to represent an information bit. During a read operation (after receiving one or more read requests such as a multi-RD read request), a noisy multi-wordline voltage vector may be provided to EM signal processing component 635.

An expectation-maximization (EM) signal processing component 635 may be configured to receive the noisy multiple word line voltage vector from the memory device 660 and classify individual bits of the vector with log-likelihood ratio (LLR) values. In some examples, EM signal processing component 635 is configured to provide the LLR values to ECC decoder 645. In some examples, EM signal processing component 635 is configured for mobile architectures based on a reduced sample size. The EM signal processing component 635 may receive the noisy multiple word line voltage vector and output a constraint vector along with LLR information for the individual bits.

The constrained channel decoder 640 may be configured to receive the constrained vectors from the EM signal processing component 635 and generate unconstrained vectors. In some embodiments, the constrained channel decoder 640 may decode the data block based on a constrained coding scheme. The constrained channel decoder 640 may receive the constrained vectors and output unconstrained vectors.

The ECC decoder 645 may be configured to decode data including the unconstrained vector. For example, ECC decoder 645 may decode the data block based on an ECC encoding scheme. In some examples, the ECC encoding scheme includes an S-polar encoding scheme that combines an RS encoding scheme and a polar encoding scheme. In some examples, decoding of the data block based on the ECC encoding scheme is performed based on the LLR values. The ECC decoder 645 may receive the LLR information for each bit and the unconstrained vector, and output the WL data and voltage vector. In some cases, ECC decoder 645 receives the neural network decoder results. In some cases, ECC decoder 645 sends the noisy word line voltage vector to the machine learning interference cancellation component 650 and receives back the denoised word line voltage vector.

The machine learning interference cancellation component 650 may be configured to receive the noisy word line vector from ECC decoder 645 and provide the denoised word line vector to ECC decoder 645. In some examples, the machine learning interference cancellation component 650 may operate based on input from the EM signal processing component 635. However, in other embodiments, it may operate independently of the EM signal processing component 635.

In some cases, the machine learning interference cancellation component 650 may determine that decoding based on the ECC encoding scheme alone is insufficient. The machine learning interference cancellation component 650 may also perform interference cancellation based on a determination that the ECC decoder did not decode the data block correctly. In some examples, the machine learning process on the NAND cells may have a coupling effect with the EM processing.

The neural network decoder 655 may receive the word line data vector and the word line voltage vector and generate a recovered data vector. In some examples, the neural network decoder 655 includes a reduced number of nodes, wherein the reduced number of nodes are selected for the mobile architecture. In some cases, the neural network decoder 655 sends the results to the ECC decoder 645. Finally, the neural network decoder 655 outputs the recovered data vector.

In some cases, the neural network decoder 655 may determine that decoding based on the ECC encoding scheme alone is insufficient, and may then be used to decode the data block. The neural network decoder 655 may be an example of, or include aspects of, a corresponding one or more elements described with reference to fig. 10.

Fig. 6B illustrates another example of a memory controller according to aspects of the present disclosure. The depicted example includes memory controller 601 and memory device 660. The memory controller 601 may receive blocks of data, program the data to the memory devices, and read the blocks of data from the memory devices 660. Subcomponents of memory controller 601 may be similar to those of memory controller 600 described above with reference to FIG. 6A, with differences described below. Fig. 6A and 6B are both shown as examples, and embodiments of the present disclosure are not limited thereto.

Specifically, according to fig. 6B, the memory controller 601 may include a reinforcement learning feedback component 613, which may correspond to a Reinforcement Learning Pulse Programming (RLPP) component 615, but provides feedback to the block signal filtering component 611. The block signal filtering component 611 may provide a block programming order to the word line signal filter 612, and the word line signal filter 612 may provide a word line programming order for programming to the memory device 660.

Additionally or alternatively, the machine learning interference cancellation component 650 may receive the noisy multi-word line voltage vector directly from the memory device 660 and provide the noisy word line data vector to the constrained channel decoder 640.

Error correction coding

Fig. 7 illustrates an example of an ECC encoding scheme in accordance with aspects of the present disclosure. Data may be encoded using an S-polar encoding scheme to facilitate programming to a memory device. The S-polar encoding scheme may combine aspects of a Reed-Solomon (RS) encoding scheme and a polar encoding scheme. In particular, a data block 700 is shown that includes information bits 705, frozen bits 710 (e.g., based on the polar encoding scheme), a polar codeword 715, and an RS codeword 720.

Error Correction Coding (ECC) and decoding operations may be performed on the data stream to correct communication errors such as interference or noise. Polar codes are multi-recursively concatenated linear block error correction codes based on short kernel codes, which transform a physical channel into a plurality of virtual outer channels. Virtual channels tend to have either high reliability or low reliability (i.e., they are polarized). Data bits are assigned to the most reliable channels and unreliable channels are "frozen" or set to 0.
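As an illustration of the polar component, the following is a minimal polar encoding sketch, assuming the standard 2x2 polar kernel and an arbitrary frozen-bit pattern; the actual code construction and frozen-set selection of the S-polar scheme are not specified here.

```python
# Minimal sketch of polar encoding over GF(2). The frozen positions
# below are chosen for illustration only, not derived from channel
# reliabilities.
import numpy as np

F = np.array([[1, 0], [1, 1]], dtype=np.uint8)  # 2x2 polar kernel

def polar_generator(n_stages: int) -> np.ndarray:
    """Return the N x N polar transform, N = 2**n_stages (Kronecker power of F)."""
    g = np.array([[1]], dtype=np.uint8)
    for _ in range(n_stages):
        g = np.kron(g, F)
    return g

def polar_encode(info_bits, frozen_mask):
    """Place info bits on the non-frozen positions, zeros on frozen
    ('unreliable') positions, then apply the polar transform mod 2."""
    u = np.zeros(len(frozen_mask), dtype=np.uint8)
    u[~frozen_mask] = info_bits              # frozen positions stay 0
    g = polar_generator(int(np.log2(len(u))))
    return (u @ g) % 2

# Example: N = 8, four frozen positions, four information bits.
frozen = np.array([1, 1, 1, 0, 1, 0, 0, 0], dtype=bool)
codeword = polar_encode(np.array([1, 0, 1, 1], dtype=np.uint8), frozen)
```

Since the code is linear, the all-zero information word always maps to the all-zero codeword, which is a quick sanity check on the transform.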

An RS code also operates on a block of data treated as a set of finite field elements (called symbols). The RS coding scheme involves adding check symbols to the data. Using the check symbols, the RS code can detect and correct erroneous symbols.

The S-polar code is based on a concatenation of a polar code and RS codes. Features of the S-polar code include high performance and easy scalability by adjusting overhead and code size. The S-polar code may use a multi-stage encoding process, in which multiple RS codes may be encoded in parallel, symbol by symbol.

FIG. 7 illustrates an embodiment in which a data block of dimension N x J includes k1 polar codewords 715 of dimension N - m0, k2 - k1 polar codewords 715 of dimension N - m0 - m1, and J - k2 polar codewords 715 of dimension N - m0 - m1 - m2. Accordingly, the S-polar code may include N - m0 - m1 RS codewords 720 of dimension J, m2 RS codewords 720 of dimension k2, and m0 RS codewords 720 of dimension k1.

Constrained coding

In addition to ECC encoding, a constrained encoding scheme may be applied. Constrained coding represents a programming task as a set of parameters and constraints. First, a set of variables that define the problem is provided, where each variable has a set of possible values. Then, constraints are applied; for example, certain values may be determined to be available or unavailable. Next, decision operations and constraint propagation operations are performed. New constraints are introduced by the decision operations, contradictions are detected by the constraint propagation operations, and a new set of constraints is defined and solved until the desired output is computed.

Thus, a constrained encoding scheme may set constraints that limit the use of certain bit patterns that are more likely to cause interference when reading data from a memory device. Constraints may be applied to bits within a word line, and may also be applied across word lines.
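As a toy illustration of such a constraint, the sketch below enumerates the binary words that avoid a single forbidden pattern (adjacent ones, chosen here only as a stand-in for an interference-prone pattern; the actual constraints of the scheme are not specified here) and builds the resulting codebook.

```python
# Illustrative constrained codebook: forbid any word containing two
# adjacent 1s. The count of valid length-n words follows the Fibonacci
# sequence, which determines the achievable code rate.
from itertools import product

def satisfies(word):
    """True if the word contains no pair of adjacent ones."""
    return all(not (a == 1 and b == 1) for a, b in zip(word, word[1:]))

def constrained_codebook(n):
    """All length-n binary words meeting the constraint."""
    return [w for w in product((0, 1), repeat=n) if satisfies(w)]

book = constrained_codebook(4)   # 8 of the 16 length-4 words survive
```

A constrained channel encoder would then map unconstrained data words one-to-one onto entries of such a codebook, at the cost of a rate loss (here log2(8)/4 = 0.75 bits per transmitted bit).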

Constrained coding may be used for inter-pillar interference mitigation in accordance with the present disclosure. Inter-pillar interference refers to interference between adjacent cell pillars. That is, high voltage levels may generate interference for pillars at lower levels. In one example, the 8 lowest levels are disturbed (victim) levels and the 8 highest levels are disturbing (aggressor) levels. On average, there are 4 disturbed levels and 4 disturbing levels. In one embodiment, the constrained coding reduces the number of disturbed low levels and disturbing high levels per pillar, thereby providing deterministic processing with 2 disturbed levels and 2 disturbing levels per pillar.

Reinforcement learning pulse programming

FIG. 8 illustrates a basic hierarchical reinforcement learning scheme in accordance with embodiments of the present disclosure.

Programming of a NAND flash memory device can be described by the following process. First, each cell in the word line WL has a voltage v_cell, and V = (v_cell,1, ..., v_cell,n) is the vector of all voltages in the WL. After the erase operation is performed, the WL starts from its erased state. Each cell then has a target voltage v_target, and V_target = (v_target,1, ..., v_target,n) is the vector of all target voltages in the WL. The vector V is also referred to as the WL state.

The programming agent may then apply a series of pulses to the WL to change the state V. The objective of the agent is to make V as close as possible to V_target. After each pulse, the new state V depends on the pulse parameters selected by the agent: (1) the pulse power; and (2) the inhibit vector. A pulse with selected parameters is referred to as an action. The new WL state depends only on the old WL state and the last action taken. This type of process is known as a Markov Decision Process (MDP).
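The pulse-programming MDP described above can be sketched as a simple simulation loop; the dynamics, pulse power, noise level, and threshold rule below are illustrative assumptions, not the actual device model or the learned policy.

```python
# Toy pulse-programming episode: each action is (pulse power, inhibit
# vector); cells that are not inhibited gain voltage plus small noise.
import numpy as np

rng = np.random.default_rng(0)

def apply_pulse(v, power, inhibit):
    """One action: inhibited cells (inhibit == 1) are left unchanged;
    the remaining cells gain `power` volts plus programming noise."""
    noise = rng.normal(0.0, 0.005, size=v.shape)
    return v + (1 - inhibit) * (power + noise)

v = np.zeros(8)                                   # WL state after erase
v_target = np.array([0.0, 0.4, 0.4, 0.8, 0.8, 1.2, 1.2, 1.6])

for _ in range(40):                               # one programming episode
    # Threshold rule: inhibit cells that are already close to target.
    inhibit = (v >= v_target - 0.05).astype(int)
    if inhibit.all():
        break
    v = apply_pulse(v, power=0.1, inhibit=inhibit)

final_error = np.abs(v - v_target).max()
```

Because the next state depends only on the current state and the chosen (power, inhibit) action, this loop has the Markov structure that reinforcement learning assumes.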

Thus, according to some embodiments, a NAND flash memory has a three level hierarchy of blocks, word lines, and cells. The number of blocks varies between chip types, but is typically on the order of thousands per chip. Flash memory devices may then be made up of one or more such chips, and thus the total number of blocks per flash memory device varies considerably. The number of word lines per block also varies. Different chip types may have 64, 128 or 256 word lines, and this may change in the future. The cells are physical parts of the word line, i.e., the word line is essentially a long string of cells.

The levels are conceptual entities in that individual cells are programmed to have a particular voltage level. The cells are then grouped into N groups according to their voltage levels, in which case there are N levels in the word line. The cell voltage determines which level the cell belongs to and therefore what information the cell encodes. Every cell of the same level stores the same information. The number of levels per word line varies according to the writing scheme: the number of levels is 2 raised to the power of the number of bits written per cell. For example, for 3 bits per cell, there will be 8 levels per word line, but even in the same block this can vary depending on how many bits are written per cell in a particular word line.

Hierarchical Reinforcement Learning (HRL) is a framework that combines learning at different scales. According to embodiments of the present disclosure, there are three different agents acting at three different scales (block, word line, and cell scales), all combined under the HRL framework. A single action of a higher level agent is an entire episode of the lower level agent, i.e., the action of the higher level agent defines the parameters under which the lower level agent performs a series of lower level actions that together make up an entire episode. Each agent in the hierarchy has a decision model that allows the agent to select an action. These models are policy networks. An exemplary policy network is an actor-critic model.

The inhibit vector marks all cells that need to be programmed with a zero and all cells that should not be programmed with a one. The inhibit vector can be very large (about 147K cells), and therefore the policy network cannot act as a direct decision output unit, i.e., individually decide for each cell in the vector whether that cell should be a one or a zero. Thus, embodiments of the present disclosure use a different solution. A method according to an embodiment outputs a voltage threshold, placing a one in the inhibit vector for all cells whose voltage exceeds the selected threshold. The remaining cells remain zero. Thus, in addition to the pulse power output (which is also a single number and separate from the inhibit vector), the network according to embodiments only has to output one number instead of 147K numbers.
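A minimal sketch of this threshold-based inhibit decision, assuming a plain voltage comparison (the function name is illustrative):

```python
# One scalar policy output (the threshold) is expanded into a full
# per-cell inhibit vector, instead of deciding each cell individually.
import numpy as np

def inhibit_from_threshold(voltages, threshold):
    """Cells whose voltage exceeds the threshold get a one (inhibited);
    the remaining cells stay zero."""
    return (voltages > threshold).astype(np.uint8)

inhibit = inhibit_from_threshold(np.array([0.1, 0.5, 0.9]), threshold=0.4)
```

This reduces the policy network's output dimension from ~147K numbers to two (threshold and pulse power), which is what makes the action space tractable.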

Embodiments of the present disclosure attempt to train agents to program different voltage levels to different cells. However, since the state-action space is too large for brute-force reinforcement learning, embodiments of the present disclosure take advantage of the hierarchical structure of NAND flash and break down the task into several subtasks of different levels. In this way, a set of smaller subtasks can be integrated and learning becomes feasible.

Embodiments of the RLPP component may include three different agents. The level agent efficiently writes the various voltage levels of a word line, thereby minimizing the distance from the target levels. After the voltage levels have been written, control passes to the word line agent. The word line agent determines which voltage levels to program for a given word line and instructs the lower level agent to program those levels while minimizing interference between different levels on the same word line. The number of possible levels is 2^n, where n is the number of bits per cell. After the entire word line has been written, control passes to the block agent. The block agent determines which word line in a block to program while minimizing interference between word lines in the same block. After all word lines have been written, NAND programming terminates.
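The three-scale episode structure can be sketched as nested loops, with trivial stand-ins in place of the learned policies (all names below are illustrative):

```python
# Structural sketch of the HRL control flow: one action of a higher
# level agent spans an entire episode of the agent below it.
def level_episode(wl, level):
    """Stand-in for the level agent's pulse-programming loop."""
    return f"programmed level {level} of WL {wl}"

def wordline_episode(wl, n_levels):
    """One word line agent action = a full episode of the level agent."""
    return [level_episode(wl, lv) for lv in range(n_levels)]

def block_episode(n_wordlines, bits_per_cell):
    """One block agent action = a full episode of the word line agent."""
    n_levels = 2 ** bits_per_cell           # 2^n levels for n bits/cell
    return {wl: wordline_episode(wl, n_levels) for wl in range(n_wordlines)}

log = block_episode(n_wordlines=2, bits_per_cell=2)
```

In the trained system, each stand-in function would be replaced by a policy network that also receives the status inputs and rewards shown in FIG. 8.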

Referring again to FIG. 8, a method according to an embodiment may begin with the block agent, which programs an entire block. The block agent implements a "block reinforcement learning algorithm" 812; it receives the "block status" 813 input from the chip and decides on an action, e.g., which word line in the block is currently to be programmed. This action translates into a set of parameters that are passed as a "block command" 814 to the word line agent, which begins to operate under that command. The word line agent implements a "word line reinforcement learning algorithm" 822; it receives the block command 814 and the "word line status" 823 from the chip, decides on an action of its own, and passes the action down to the level agent as a "word line command" 824. The level agent implements a "level reinforcement learning algorithm" 832; it receives the word line command 824 and the "level status" 833 from the chip and decides on an action. This action translates into a level command 834 to the flash chip 840. The level command 834 then programs the flash chip 840 to completion according to the action, after which control returns to the word line agent, indicating that the action of the level agent is complete. After the word line agent is also complete, it returns control to the block agent, indicating that the actions of the word line agent are complete. The block, word line, and level rewards 811, 821, and 831 in the leftmost column are used during training of the respective agents. During training, in addition to making decisions, each agent also uses its reward to update itself and improve.

According to embodiments, the reinforcement learning model may be based on existing algorithms or human expertise. For example, a reinforcement learning model according to embodiments may learn from existing algorithms or experts by simulation. The reinforcement learning model may then be refined at the word line level, and after finding a basic stable policy, the reinforcement learning model may be adapted to program the block.

Expectation maximization

In some cases, cells having 5 or 6 Bits Per Cell (BPC) may have relatively densely spaced voltage levels (e.g., 156 mV for 5 BPC, or 105 mV for 6 BPC). This can lead to additional errors when reading the voltage level of a cell; that is, additional accuracy may be required, and the read may be more sensitive to interference. Accordingly, the EM signal processing component may be configured to receive a noisy multiple word line voltage vector from the memory device and classify individual bits of the vector using LLR values.
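As a sketch of LLR classification for a single bit, the following assumes two equally likely single-Gaussian level hypotheses with equal variance, a simplification of the actual multi-level mixture; all numbers are illustrative.

```python
# LLR of bit = 0 vs bit = 1 for a read voltage v. With equal priors and
# equal variances, the Gaussian normalization constants cancel and only
# the squared distances to the level means remain.
def llr(v, mu0, mu1, sigma):
    log_p0 = -((v - mu0) ** 2) / (2 * sigma ** 2)
    log_p1 = -((v - mu1) ** 2) / (2 * sigma ** 2)
    return log_p0 - log_p1

# Positive LLR => bit more likely 0; magnitude reflects confidence.
confident_zero = llr(0.05, mu0=0.0, mu1=1.0, sigma=0.1)
uncertain = llr(0.5, mu0=0.0, mu1=1.0, sigma=0.1)
```

A soft-decision ECC decoder (such as the S-polar decoder described above) can consume these per-bit LLRs directly, giving low-magnitude bits less weight during decoding.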

Parameters in a statistical model of a memory device may be estimated using a Maximum Likelihood Estimation (MLE) algorithm, where the model depends on unobserved latent variables. Given a set of observed data samples x and a model p_theta(x, z) with latent variables z, the missing or unobserved values can be estimated. The MLE algorithm seeks the parameters theta* = argmax_theta p_theta(x).

In some examples, the EM algorithm attempts to find the MLE of the marginal likelihood by iteratively applying two steps until convergence. First, the expectation step computes Q(theta | theta^(t)) = E_{z|x,theta^(t)}[log p_theta(x, z)] = sum_z p(z | x, theta^(t)) log p_theta(x, z). Second, the maximization step finds the parameters theta^(t+1) = argmax_theta Q(theta | theta^(t)) that maximize this quantity. In some examples, convergence to a global maximum cannot be guaranteed.

In some cases, the EM algorithm may model the cell voltages as a Gaussian mixture, i.e., samples drawn from a set of normal distributions. The number of cells per level may be known in advance. In some cases, shaping is supported. The cell counts may give a good estimate of the level means used for initialization, and a well-initialized process may require fewer iterations.
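A minimal EM iteration for a 1-D two-component Gaussian mixture can be sketched as follows; this is a toy stand-in for the per-level voltage estimation, with equal component weights and a shared, known variance assumed for simplicity.

```python
# EM for a two-level Gaussian mixture: the E-step computes the
# responsibilities p(z | x, theta_t); the M-step re-estimates the level
# means, matching the Q-maximization described above.
import numpy as np

rng = np.random.default_rng(1)
# Synthetic read voltages: two levels at 0.0 V and 1.0 V, sigma = 0.1.
x = np.concatenate([rng.normal(0.0, 0.1, 500), rng.normal(1.0, 0.1, 500)])

mu = np.array([0.2, 0.8])        # initialization, e.g., from cell counts
sigma = 0.2                      # shared std, assumed known in this toy
for _ in range(30):
    # E-step: responsibilities (equal priors, constants cancel).
    d = np.exp(-((x[:, None] - mu) ** 2) / (2 * sigma ** 2))
    r = d / d.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted means.
    mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
```

Starting from the count-based initialization (0.2, 0.8), the means converge toward the true level centers near 0.0 and 1.0, illustrating why a good initialization reduces the iteration count.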

In some cases, the noisy erase level may not be Gaussian, which would result in a sub-optimal Gaussian estimate. Thus, a Johnson SU distribution can be used to estimate the noisy erase level. This estimate may be removed from the histogram before continuing EM.

In some cases, the output from the EM processing may be used to improve subsequent interference cancellation processing (e.g., as described below with reference to fig. 9). For example, the machine learning process for interference cancellation of NAND memory cells may have a coupling effect with EM to further improve read reliability.

Noise cancellation

Fig. 9 is a diagram illustrating a structure of a neural network for noise cancellation according to aspects of the present disclosure. When multiple bits are stored in a single memory cell, programming noise can cause errors in the stored data. For example, in VNAND memory devices, when one word line is being programmed (written), the programming can cause noise to appear on adjacent word lines, which can cause errors when those word lines are later read. Memory cells that are geometrically one above the other (e.g., memory cells located in the same pillar or column) may generate particularly strong noise.

According to some embodiments, the noise cancellation component may extract voltage levels from individual memory cells connected to a String Select Line (SSL), provide the voltage levels of the memory cells as inputs to a neural network, and perform noise cancellation on the SSL by changing the voltage levels of the memory cells from a first voltage level to a second voltage level.

Noise cancellation may be performed based on deep learning using a database. Deep learning is a sub-field of machine learning, and is a machine learning neural network model related to artificial intelligence. Various neural network architectures are available for deep learning. For example, Artificial Neural Networks (ANN), Deep Neural Networks (DNN), Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and Generative Adversarial Networks (GAN) may be used for deep learning. However, the network architectures that can be used for deep learning are not limited thereto.

In an exemplary embodiment, noise cancellation is performed using a ResNet with several identical consecutive residual blocks. An example of such a ResNet is shown in fig. 9. This configuration allows an iterative noise cancellation process to be performed in which only the noise is learned (e.g., in which only the difference between clean data and noisy data is learned). For example, the ResNet may learn to continuously denoise the data, learning only the residual noise at each iteration.

The input to the ResNet may be a single noisy SSL (e.g., one of SSL1, SSL2, SSL3, and SSL4), and the output may be a denoised string selection line SSL. Since the exemplary embodiment operates on a single SSL each time noise cancellation is performed, only the voltage loss (the distance between the noisy cell voltage and the clean cell voltage) is measured in the exemplary embodiment. However, since the raw Bit Error Rate (BER) is a non-monotonic metric with respect to voltage distance, minimizing only the voltage loss does not by itself guarantee that the raw BER is reduced or minimized.

In FIG. 9, the numbers in the respective layers 901-907, 909-915, and 917 indicate the number of neurons in that layer. The designation "x1" in the input layer 901 and the output layer 917 indicates the shape of these layers. For example, each of input layer 901 and output layer 917 comprises 25 neurons shaped as a 25x1 vector. The width of the input layer is equal to the number of word lines WL connected to a single string selection line SSL (e.g., the number of word lines WL in a memory block divided by 4) (e.g., 25 in the 6 BPC scheme as shown in fig. 6). The input layer 901 corresponds to the noisy SSL fed into the neural network for denoising.

The layers in the neural network are fully connected layers. That is, each neuron in each layer is connected to every neuron in the next layer. The connections between neurons have corresponding weights, which are learned during training. When a layer receives input from the previous layer, the layer multiplies the input by its weights, then performs a non-linear operation (e.g., a rectified linear unit (ReLU) function), and then sends the result as input to the next layer.

The arrows between layers indicate the type of activation function for the neurons in that layer. In an exemplary embodiment, a rectified linear unit (ReLU) function, which is a non-linear function, is used as the activation function between layers 902 and 903, 903 and 904, 904 and 905, 905 and 906, 906 and 907, 909 and 910, 910 and 911, 911 and 912, 912 and 913, 913 and 914, and 914 and 915. The ReLU activation function determines whether a neuron should be activated by computing a weighted sum of the neuron's inputs and adding a bias, thereby introducing non-linearity into the output of the neuron. Linear functions serve as activation functions between input layer 901 and operation 908, layer 907 and operation 908, operation 908 and layer 909, layer 909 and operation 916, layer 915 and operation 916, and operation 916 and output layer 917. That is, no non-linear activation function is performed between these layers and operations.

Operation 908 sums the output of layer 907 with input layer 901 and feeds the output to layer 909. Operation 916 sums the output of layer 915 with layer 909 and feeds the output to output layer 917. The output layer 917 outputs denoised SSL.
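A NumPy sketch of one such residual block, with random placeholder weights rather than the trained model, illustrating the skip connections of operations 908 and 916:

```python
# Residual MLP block: a stack of fully connected ReLU layers whose
# output is summed with the block input, so the stack only has to learn
# the noise (the clean signal passes through the skip connection).
import numpy as np

rng = np.random.default_rng(2)
WIDTH = 25                                  # one SSL: 25 word-line cells

def dense_relu(x, w, b):
    """Fully connected layer followed by ReLU."""
    return np.maximum(x @ w + b, 0.0)

def residual_block(x, n_layers=3):
    h = x
    for _ in range(n_layers):
        w = rng.normal(0.0, 0.1, (WIDTH, WIDTH))  # placeholder weights
        h = dense_relu(h, w, np.zeros(WIDTH))
    return x + h                            # skip connection (op 908/916)

noisy_ssl = rng.normal(0.5, 0.05, WIDTH)    # placeholder noisy voltages
denoised = residual_block(residual_block(noisy_ssl))
```

The actual layer widths and the second-block wiring follow Fig. 9; the point of the sketch is only the forward structure x + f(x), which is what lets training fit the noise residual alone.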

The denoised SSL output by the output layer 917 can be used when outputting (e.g., reading) data from the memory device. For example, referring to fig. 6, assume that data stored in a memory cell connected to SSL3 (e.g., one of the memory cells of word lines WL3, WL7, … WL 99) is being read out from the memory device. For example, data may be read out of the memory device when accessed by a user, when passed to a subsequent stage of data processing, and so forth. When data is read out from the memory device, the operations described above with reference to fig. 9 may be performed to denoise the data (e.g., to correct the data) prior to reading out the data from the memory device.

For example, when a request is made to read data from one of the memory cells of wordline WL3, WL7, … WL99, the neural network first denoises SSL3 by changing the voltage level of at least one of the memory cells of wordline WL3, WL7, … WL99 from a first voltage level to a second voltage level, wherein the first voltage level is classified as belonging to a first cluster (of 64 clusters in the 6BPC scheme) and the second voltage level is classified as belonging to a second cluster (of 64 clusters in the 6BPC scheme).

It is noted that the voltage level of such memory cells is not actually changed within the memory device at this time, since writing to the memory cells will again introduce noise for the same reasons as described above. Instead, the memory device outputs the changed (corrected) voltage levels of the memory cells output by the neural network when data from the memory cells are read out from the memory device, rather than reading the actual voltage levels of the memory cells within the memory device at this time. That is, the memory device outputs a cleaner, denoised version of the data generated by the neural network, while the noisy version of the data actually stored in the memory device remains intact and unchanged within the memory device. Therefore, this process can be performed again each time the data is read out from the memory device.

A level-skipping operation may be performed when data is read out of the memory device, so that voltage levels that would be made worse by noise cancellation are not changed (e.g., the actual, unchanged voltage levels in the memory device may be read out for these memory cells). The cleaner, denoised version of the data (and any data intentionally left unchanged according to the level-skipping operation) may be converted to digital form prior to readout from the memory device.

Since the BER is determined by a level-to-bit Gray code mapping, reducing the voltage error can, in some cases, increase the number of error bits per cell. Exemplary embodiments may therefore approximate the BER penalty over the range in which it is still monotonic, and otherwise apply a constant penalty.
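As a small illustration of why the BER penalty is not monotonic in the voltage error: under a binary-reflected Gray mapping, every one-level error costs exactly one bit, yet a smaller level error can cost more bits than a larger one. The sketch below uses a generic Gray code; the actual 64-level mapping of a 6BPC device may differ.

```python
def gray(level):
    # Binary-reflected Gray code: adjacent levels differ in exactly one bit
    return level ^ (level >> 1)

def bit_errors(true_level, read_level):
    # Number of wrong bits when a cell at true_level is read as read_level
    return bin(gray(true_level) ^ gray(read_level)).count("1")

print(bit_errors(30, 31))  # 1: a one-level error always costs one bit
# Non-monotonic case: shrinking a 3-level error to a 2-level error
# *increases* the number of wrong bits for this cell.
print(bit_errors(0, 3))    # 1  (gray: 0b00 -> 0b10)
print(bit_errors(0, 2))    # 2  (gray: 0b00 -> 0b11)
```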

Neural network decoder

Fig. 10 illustrates an example of a neural network decoder 1025, according to aspects of the present disclosure. The illustrated example includes information bits 1000, ECC encoder 1005, modulation 1010, noise source 1015, signal processing 1020, neural network decoder 1025, output processing 1045, and output information bits 1050.

Information bits 1000 may be encoded by ECC encoder 1005 (e.g., using a polar coding scheme or an S-polar coding scheme). In some cases, a modulation scheme such as Binary Phase Shift Keying (BPSK) may be applied by modulation 1010 before the data are transmitted or programmed. When the data are received (or read), they may include noise from a noise source 1015. The neural network decoder 1025 can be used to decode the data in the presence of noise from the noise source 1015.
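The chain from modulation 1010 through noise source 1015 can be illustrated for BPSK over an additive white Gaussian noise channel. The random bit vector, the noise level, and the standard BPSK LLR formula below are illustrative assumptions, not parameters of the actual device.

```python
import numpy as np

rng = np.random.default_rng(0)

bits = rng.integers(0, 2, size=1000)       # stand-in for ECC-encoded bits
symbols = 1 - 2 * bits                      # BPSK mapping: 0 -> +1, 1 -> -1
sigma = 0.5                                 # assumed noise level
received = symbols + rng.normal(0.0, sigma, size=bits.shape)  # noise source

# Channel LLRs for BPSK over AWGN; these feed a soft-input decoder
llr = 2.0 * received / sigma**2
hard = (llr < 0).astype(int)                # hard decision, for comparison
print("raw BER:", float(np.mean(hard != bits)))  # roughly a few percent
```

A soft-input decoder such as the neural network decoder 1025 would consume the LLR vector rather than the hard decisions.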

The ECC encoder 1005 and the neural network decoder 1025 may be examples of or include aspects of the corresponding elements described with reference to fig. 6. The neural network decoder 1025 may include an input layer 1030, a hidden layer 1035, and an output layer 1040.

The neural network decoder 1025 may operate using Weighted Belief Propagation (WBP) based on a message passing algorithm. Belief propagation computes the marginal distributions of the unobserved variables conditioned on the observed variables. The neural network learns an optimal weight for each message. Parameters of the neural network decoder 1025 may be updated based on a loss function (e.g., a loss function based on the cross entropy of the decoded codeword and the original codeword).
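A single weighted check-node update of this kind can be sketched under the min-sum approximation with one scaling weight. The function and the fixed example weight are illustrative assumptions; a trained WBP decoder assigns a separate learned weight to each message, edge, and iteration.

```python
import numpy as np

def weighted_min_sum_check(llrs_in, weight):
    """One weighted check-node update under the min-sum approximation.

    llrs_in -- incoming variable-to-check messages for a single check node
    weight  -- scaling weight (WBP learns such weights per message,
               replacing the fixed factor of classic normalized min-sum)
    """
    llrs_in = np.asarray(llrs_in, dtype=float)
    out = np.empty_like(llrs_in)
    for i in range(len(llrs_in)):
        others = np.delete(llrs_in, i)      # extrinsic: exclude own message
        sign = np.prod(np.sign(others))
        out[i] = weight * sign * np.min(np.abs(others))
    return out

print(weighted_min_sum_check([1.5, -0.4, 2.0], weight=0.8))  # ≈ [-0.32, 1.2, -0.32]
```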

According to one embodiment, a feed-forward Artificial Neural Network (ANN) is used as the neural network decoder 1025. An ANN is a hardware or software component that includes a large number of connected nodes (also referred to as artificial neurons) that may loosely correspond to neurons in the human brain. Each connection or edge may send a signal from one node to another (like a physical synapse in the brain). When a node receives a signal, the node may process the signal and then send the processed signal to other connected nodes. In some cases, the signals between nodes include real numbers, and the output of each node may be calculated as a function of the sum of the inputs to each node. Each node and edge may be associated with one or more node weights that determine how the signal is processed and transmitted.

During the training process, these weights may be adjusted to improve the accuracy of the results (i.e., by minimizing a loss function that corresponds in some way to the difference between the current result and the target output). The weight of an edge may increase or decrease the strength of the signal sent between nodes. In some cases, a node may have a threshold below which no signal is sent at all. Nodes may also be aggregated into layers. Different layers may perform different transformations on their inputs. The initial layer may be referred to as the input layer and the final layer may be referred to as the output layer. In some cases, a signal may traverse certain layers multiple times.

According to one example, the loss function for the neural network may be the cross entropy of the decoded codeword u and the original codeword o. Generalization properties may be improved by using a structured code (e.g., a polar code).
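As a sketch, the per-bit cross entropy between decoded bit probabilities and the original codeword might be computed as follows. The function name and the epsilon clamp are illustrative assumptions.

```python
import numpy as np

def bit_cross_entropy(decoded_probs, original_bits, eps=1e-12):
    """Mean cross entropy between decoded bit probabilities (u) and the
    original codeword (o); eps clamps probabilities away from log(0)."""
    p = np.clip(np.asarray(decoded_probs, dtype=float), eps, 1 - eps)
    o = np.asarray(original_bits, dtype=float)
    return float(-np.mean(o * np.log(p) + (1 - o) * np.log(1 - p)))

# Confident, mostly correct soft decisions yield a small loss
print(bit_cross_entropy([0.9, 0.2, 0.8], [1, 0, 1]))  # ≈ 0.184
```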

Exemplary method

FIG. 11 shows an example of a process of programming data to a memory device, according to aspects of the present disclosure. In some examples, the operations may be performed by a system including a processor executing a set of codes to control functional elements of a device. Additionally or alternatively, these processes may be performed using dedicated hardware. Generally, these operations may be performed in accordance with the methods and processes described in accordance with aspects of the present disclosure. For example, an operation may consist of various sub-steps or may be performed in conjunction with other operations described herein.

In operation 1100, a system receives a block of data. In some cases, the operations of this step may be performed by or with reference to a memory controller as described with reference to fig. 6.

In operation 1105, the system encodes the data block based on the ECC encoding scheme. In some cases, the operation of this step may be performed by or with reference to an ECC encoder as described with reference to fig. 6 and 10.

In operation 1110, the system encodes the data block based on the constrained coding scheme. In some cases, the operations of this step may be performed by or with reference to a constrained channel encoder as described with reference to fig. 6.

In operation 1115, the system programs the encoded data block to the memory device using RLPP. In some cases, the operations of this step may be performed by or with reference to an RLPP component as described with reference to fig. 6.
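The four operations above can be sketched end to end. Every stage below is a deliberately trivial stub (a single parity bit standing in for the ECC scheme, a guard zero standing in for the constraint, a plain append standing in for RLPP); only the order of the stages reflects the figure.

```python
# Hedged sketch of the programming flow of fig. 11; all stages are stubs.

def ecc_encode(block):
    # placeholder: append one parity bit (stand-in for the real ECC scheme)
    return block + [sum(block) % 2]

def constrained_encode(block):
    # placeholder constraint: append a guard zero
    return block + [0]

def rlpp_program(memory, block):
    # placeholder for reinforcement-learning pulse programming
    memory.extend(block)

memory = []
data_block = [1, 0, 1, 1]                    # operation 1100: receive block
encoded = ecc_encode(data_block)             # operation 1105: ECC encode
constrained = constrained_encode(encoded)    # operation 1110: constrained encode
rlpp_program(memory, constrained)            # operation 1115: program via RLPP
print(memory)  # [1, 0, 1, 1, 1, 0]
```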

FIG. 12 shows an example of a process of reading data from a memory device, according to aspects of the present disclosure. In some examples, the operations may be performed by a system including a processor executing a set of codes to control functional elements of a device. Additionally or alternatively, dedicated hardware may be used to perform these processes. Generally, these operations may be performed in accordance with the methods and processes described in accordance with aspects of the present disclosure. For example, an operation may consist of various sub-steps or may be performed in conjunction with other operations described herein.

In operation 1200, a system reads a block of data from a memory device. In some cases, the operations of this step may be performed by or with reference to a memory controller as described with reference to fig. 6.

In operation 1205, the system processes the data block using EM signal processing to classify individual bits of the data block with LLR values. In some cases, the operations of this step may be performed by or with reference to EM signal processing components as described with reference to fig. 6.
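The EM classification of operation 1205 can be illustrated with a toy one-dimensional expectation-maximization fit on read voltages. The two-level setup, the fixed sigma, and the equal cluster priors are simplifying assumptions; a real device fits many more clusters (e.g., 64 for 6BPC).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy read-voltage population: two levels (one bit per cell)
v = np.concatenate([rng.normal(1.0, 0.3, 500), rng.normal(3.0, 0.3, 500)])

def e_step(v, mu, sigma):
    # Responsibilities under equal-prior Gaussian likelihoods
    d = -(v[:, None] - mu[None, :]) ** 2 / (2 * sigma**2)
    r = np.exp(d - d.max(axis=1, keepdims=True))
    return r / r.sum(axis=1, keepdims=True)

mu, sigma = np.array([0.0, 4.0]), 0.3       # crude initial means
for _ in range(20):
    r = e_step(v, mu, sigma)                 # E-step
    mu = (r * v[:, None]).sum(axis=0) / r.sum(axis=0)  # M-step

r = e_step(v, mu, sigma)
# LLR that a cell stores level 0 rather than level 1
llr = np.log(np.maximum(r[:, 0], 1e-12)) - np.log(np.maximum(r[:, 1], 1e-12))
print(np.round(mu, 2))                       # converges near the true means
```

The per-cell LLR vector is what the subsequent constrained channel decoder and ECC decoder consume as soft information.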

In operation 1210, the system decodes the data block based on the constrained coding scheme. In some cases, the operations of this step may be performed by or with reference to a constrained channel decoder as described with reference to fig. 6.

In operation 1215, the system decodes the data block based on the ECC encoding scheme. In some cases, the operations of this step may be performed by or with reference to an ECC decoder as described with reference to fig. 6.

Fig. 13 illustrates an example of a process of performing interference cancellation according to aspects of the present disclosure. In some examples, the operations may be performed by a system including a processor executing a set of codes to control functional elements of a device. Additionally or alternatively, dedicated hardware may be used to perform these processes. Generally, these operations may be performed in accordance with the methods and processes described in accordance with aspects of the present disclosure. For example, an operation may consist of various sub-steps or may be performed in conjunction with other operations described herein.

In operation 1300, the system reads a block of data from a memory device. In some cases, the operations of this step may be performed by or with reference to a memory controller as described with reference to fig. 6.

In operation 1305, the system processes the data block using EM signal processing to classify individual bits of the data block with LLR values. In some cases, the operations of this step may be performed by or with reference to EM signal processing components as described with reference to fig. 6.

In operation 1310, the system decodes the data block based on the constrained coding scheme. In some cases, the operations of this step may be performed by or with reference to a constrained channel decoder as described with reference to fig. 6.

In operation 1315, the system decodes the data block based on the ECC encoding scheme. In some cases, the operations of this step may be performed by or with reference to an ECC decoder as described with reference to fig. 6.

In operation 1320, the system determines that decoding based on the ECC encoding scheme is insufficient. In some cases, the operations of this step may be performed by or with reference to a machine learning interference successive cancellation component as described with reference to fig. 6.

In operation 1325, the system performs machine learning interference successive cancellation based on the determination. In some cases, the operations of this step may be performed by or with reference to a machine learning interference successive cancellation component as described with reference to fig. 6.
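The fallback logic of operations 1315 through 1325 can be sketched as follows. The success check, the LLR values, and the confidence-boosting stand-in for the machine learning model are all illustrative assumptions, not the actual decoder logic.

```python
# Deliberately trivial stubs: a sign-based "decoder" with a confidence
# check standing in for ECC decoding, and an LLR rescaling standing in
# for the machine learning interference successive cancellation model.

def ecc_decode(llrs, threshold=1.0):
    bits = [0 if l >= 0 else 1 for l in llrs]
    ok = all(abs(l) >= threshold for l in llrs)    # stand-in success check
    return bits, ok

def ml_interference_cancellation(llrs):
    # placeholder: pretend interference removal boosts LLR confidence
    return [3.0 * l for l in llrs]

llrs = [0.4, -1.5, 0.9, -2.0]
bits, ok = ecc_decode(llrs)                        # operation 1315
if not ok:                                         # operation 1320
    llrs = ml_interference_cancellation(llrs)      # operation 1325
    bits, ok = ecc_decode(llrs)                    # retry decoding
print(bits, ok)  # [0, 1, 0, 1] True
```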

Fig. 14 illustrates an example of a process of performing neural network decoding, in accordance with aspects of the present disclosure. In some examples, the operations may be performed by a system including a processor executing a set of codes to control functional elements of a device. Additionally or alternatively, dedicated hardware may be used to perform these processes. Generally, these operations may be performed in accordance with the methods and processes described in accordance with aspects of the present disclosure. For example, an operation may consist of various sub-steps or may be performed in conjunction with other operations described herein.

In operation 1400, the system reads a block of data from the memory device. In some cases, the operations of this step may be performed by or with reference to a memory controller as described with reference to fig. 6.

In operation 1405, the system processes the data block using EM signal processing to classify individual bits of the data block with LLR values. In some cases, the operations of this step may be performed by or with reference to EM signal processing components as described with reference to fig. 6.

In operation 1410, the system decodes the data block based on the constrained coding scheme. In some cases, the operations of this step may be performed by or with reference to a constrained channel decoder as described with reference to fig. 6.

In operation 1415, the system decodes the data block based on the ECC encoding scheme. In some cases, the operations of this step may be performed by or with reference to an ECC decoder as described with reference to fig. 6.

In operation 1420, the system determines that decoding based on the ECC encoding scheme is insufficient. In some cases, the operations of this step may be performed by or with reference to a neural network decoder as described with reference to figs. 6 and 10.

In operation 1425, the system decodes the data block using a neural network decoder. In some cases, the operations of this step may be performed by or with reference to a neural network decoder as described with reference to figs. 6 and 10.

Accordingly, the present disclosure includes the following embodiments.

A mobile electronic device for data storage of a mobile device is described. Embodiments of a mobile electronic device may include: a memory device; a memory controller including a processor and an internal memory and configured to operate the memory device; an Error Correction Code (ECC) encoder configured to encode data for programming to a memory device; a constrained channel encoder configured to encode an output of the ECC encoder based on one or more constraints to facilitate programming to the memory device; a Reinforcement Learning Pulse Programming (RLPP) component configured to identify a programming algorithm for programming data to a memory device; an expectation-maximization (EM) signal processing component configured to receive a noisy multiple word line voltage vector from a memory device and classify individual bits of the vector with log-likelihood ratio (LLR) values; a constrained channel decoder configured to receive the constrained vector from the EM signal processing component and generate an unconstrained vector; and an ECC decoder configured to decode the unconstrained vector.

In some examples, the ECC encoder is configured to encode the data using an S-polar coding scheme that combines a Reed-Solomon (RS) coding scheme and a polar coding scheme. In some examples, the ECC encoder includes a reduced frame size and a reduced redundancy level configured for mobile architectures. In some examples, the constrained channel encoder is configured to identify data from a next word line of the memory device before encoding the output of the ECC encoder for a current word line of the memory device.

In some examples, the RLPP components include a wordline proxy, a level proxy, and a block proxy. In some examples, the EM signal processing component is configured to provide the LLR values to an ECC decoder. In some examples, EM signal processing components are configured for mobile architectures based on reduced sample sizes.

Some examples of the above-described mobile electronic devices and methods may also include a machine learning interference successive cancellation component configured to receive the noisy word line vector from the ECC decoder and provide the denoised word line vector to the ECC decoder.

Some examples of the above mobile electronic devices and methods may also include a neural network decoder configured to receive the word line data vector and the word line voltage vector and generate a recovered data vector. In some examples, the neural network decoder includes a reduced number of nodes, wherein the reduced number of nodes are selected for the mobile architecture.

In some examples, the individual cells of the memory device include 5-bit or 6-bit NAND flash memory cells. In some examples, a memory controller includes a simplified memory controller architecture configured for reduced power consumption.

A method of data storage for a mobile device is described. Embodiments of the method may include: receiving a data block; encoding the data block based on an ECC encoding scheme; encoding the data block based on a constrained coding scheme; and programming the encoded data block to the memory device using RLPP.

In some examples, the ECC encoding scheme includes an S-polar coding scheme that combines an RS coding scheme and a polar coding scheme. Some examples of the above method may further include identifying data from a next word line of the memory device, wherein the constrained encoding scheme is based on the data from the next word line.

A method of data storage for a mobile device is described. Embodiments of the method may include: reading a block of data from a memory device; processing the data block using EM signal processing to classify individual bits of the data block with LLR values; decoding the data block based on a constrained coding scheme; and decoding the data block based on the ECC encoding scheme.

In some examples, the ECC encoding scheme includes an S-polar coding scheme that combines an RS coding scheme and a polar coding scheme. In some examples, decoding of the data block based on the ECC encoding scheme is performed based at least in part on the LLR values.

Some examples of the above method may further include determining that decoding based on the ECC encoding scheme is insufficient. Some examples may also include performing machine learning interference successive cancellation based on the determination. Some examples of the above method may further include determining that decoding based on the ECC encoding scheme is insufficient. Some examples may also include decoding the data block using a neural network decoder.

The description and drawings described herein represent example configurations and are not intended to represent all implementations within the scope of the claims. For example, operations and steps may be rearranged, combined, or otherwise modified. Additionally, structures and devices may be shown in block diagram form in order to represent relationships between components and to avoid obscuring the described concepts. Similar components or features may have the same name, but may have different numbers corresponding to different figures.

Some modifications to the disclosure may be readily apparent to those skilled in the art, and the principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

The described methods may be implemented or performed by a device comprising a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof. A general purpose processor may be a microprocessor, a conventional processor, a controller, a microcontroller, or a state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). Thus, the functions described herein may be implemented in hardware or software and executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on a computer-readable medium in the form of instructions or code.

Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of code or data. A non-transitory storage medium may be any available medium that can be accessed by a computer. For example, a non-transitory computer-readable medium may include Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), Compact Disk (CD) or other optical disk storage, magnetic disk storage, or any other non-transitory medium for carrying or storing data or code.

Additionally, a connected component may be properly termed a computer-readable medium. For example, if the code or data is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technology such as infrared, radio, or microwave signals, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technology is included in the definition of medium. Combinations of the media are also included within the scope of computer-readable media.

In the present disclosure and appended claims, the word "or" indicates an inclusive list such that, for example, a list of X, Y or Z means X or Y or Z or XY or XZ or YZ or XYZ. In addition, the phrase "based on" is not used to denote a closed set of conditions. For example, a step described as "based on condition A" may be based on both condition A and condition B. In other words, the phrase "based on" should be interpreted to mean "based, at least in part, on." In addition, the words "a" or "an" indicate "at least one".
