Memory device and program operation thereof

Document No. 90989 | Published: 2021-10-08

Note: This technology, "Memory device and program operation thereof" (存储器器件及其编程操作), was created by 万维俊 (Wan Weijun) on 2021-06-02. Its abstract follows.

In certain aspects, a memory device includes an array of memory cells in columns and rows, word lines respectively coupled to the rows, bit lines respectively coupled to the columns, and peripheral circuitry coupled to the array of memory cells through the bit lines and word lines and configured to program a selected row based on a current page of data. Each memory cell is configured to store a segment of N bits of data at one of 2^N levels, where N is an integer greater than 1. The peripheral circuitry includes page buffer circuits respectively coupled to the bit lines. Each page buffer circuit includes one cache storage unit, one multi-purpose storage unit, and N-1 data storage units. The cache storage unit is configured to sequentially receive N bits of the current page of data and N bits of a next page of data, and to sequentially store one of the N bits of the current page of data and each of the N bits of the next page of data. The multi-purpose storage unit is configured to sequentially store non-data page information and one of the N bits of the next page of data. The data storage units are each configured to store a respective one of the N bits of the current page of data.

1. A memory device, comprising:

a memory cell array in a plurality of columns and a plurality of rows, each memory cell configured to store a segment of N bits of data at one of 2^N levels, where N is an integer greater than 1;

a plurality of word lines coupled to the rows of memory cells, respectively;

a plurality of bit lines respectively coupled to the columns of the memory cells; and

peripheral circuitry coupled to the array of memory cells through the bit lines and the word lines and configured to program a selected one of the rows of the memory cells based on a current page of data, the peripheral circuitry including a plurality of page buffer circuits respectively coupled to the bit lines, each page buffer circuit including:

one cache storage unit configured to sequentially receive N bits of the current page of data and N bits of a next page of data, and to sequentially store one of the N bits of the current page of data and each of the N bits of the next page of data when programming the selected row based on the current page of data;

one multi-purpose storage unit configured to sequentially store non-data page information and one of the N bits of the next page of data when programming the selected row based on the current page of data; and

N-1 data storage units, each configured to store a respective one of the N bits of the current page of data when the selected row is programmed based on the current page of data.

2. The memory device of claim 1, wherein the non-data page information includes a voltage level applied to a corresponding bit line.

3. The memory device of claim 1 or 2, wherein, to program the selected row based on the current page of data, the peripheral circuitry is configured to sequentially verify the selected row at 2^N - 1 levels of the 2^N levels.

4. The memory device of claim 3, wherein the cache storage unit is configured to:

store the one of the N bits of the current page of data before an Nth-to-last level of the 2^N levels is verified; and

sequentially store each of the N bits of the next page of data after a respective one of the last N levels of the 2^N levels is verified.

5. The memory device of claim 3 or 4, wherein the multi-purpose storage unit is configured to:

store the non-data page information before a last one of the 2^N levels is verified; and

store the one of the N bits of the next page of data after the last one of the 2^N levels is verified.

6. The memory device of any of claims 3-5, wherein at least one of the data storage units is configured to sequentially store the respective one of the N bits of the current page of data and a respective one of the N bits of the next page of data.

7. The memory device of claim 6, wherein one of the at least one of the data storage units is configured to:

store the respective one of the N bits of the current page of data before an (N-1)th-to-last level of the 2^N levels is verified; and

store the respective one of the N bits of the next page of data after the (N-1)th-to-last level of the 2^N levels is verified.

8. The memory device of any of claims 3-7, wherein the peripheral circuitry further comprises a word line driver coupled to the word lines and configured to:

apply a program voltage on a selected one of the word lines coupled to the selected row; and

sequentially apply 2^N - 1 verify voltages on the selected word line, the 2^N - 1 verify voltages corresponding to the 2^N - 1 levels of the 2^N levels.

9. The memory device of any of claims 1-8, wherein each of the cache storage unit, the multi-purpose storage unit, and the data storage units comprises a latch.

10. The memory device of any of claims 1-9, wherein the peripheral circuitry is further configured to program a next selected row of the rows of memory cells based on the next page of data after programming the selected row based on the current page of data.

11. The memory device of any one of claims 1-10, wherein the page buffer circuit includes one cache latch, two data latches, one 3-bit line (3-BL) latch, and one sense/program latch.

12. The memory device of claim 11, wherein the one cache storage unit comprises the one cache latch, the N-1 data storage units comprise the two data latches, and the multi-purpose storage unit comprises the 3-BL latch.

13. A system, comprising:

a memory device configured to store data, the memory device comprising:

a memory cell array in a plurality of columns and a plurality of rows, each memory cell configured to store a segment of N bits of data at one of 2^N levels, where N is an integer greater than 1;

a plurality of word lines coupled to the rows of memory cells, respectively;

a plurality of bit lines respectively coupled to the columns of the memory cells; and

peripheral circuitry coupled to the array of memory cells through the bit lines and the word lines and configured to program a selected one of the rows of the memory cells based on a current page of data, the peripheral circuitry including a plurality of page buffer circuits respectively coupled to the bit lines, each page buffer circuit including:

one cache storage unit configured to sequentially receive N bits of the current page of data and N bits of a next page of data, and to sequentially store one of the N bits of the current page of data and each of the N bits of the next page of data when programming the selected row based on the current page of data;

one multi-purpose storage unit configured to sequentially store non-data page information and one of the N bits of the next page of data when programming the selected row based on the current page of data; and

N-1 data storage units, each configured to store a respective one of the N bits of the current page of data when the selected row is programmed based on the current page of data; and

a memory controller coupled to the memory device and configured to control the memory device.

14. The system of claim 13, wherein the non-data page information includes a voltage level applied to a corresponding bit line.

15. The system of claim 13 or 14, wherein, to program the selected row based on the current page of data, the peripheral circuitry is configured to sequentially verify the selected row at 2^N - 1 levels of the 2^N levels.

16. The system of claim 15, wherein the cache storage unit is configured to:

store the one of the N bits of the current page of data before an Nth-to-last level of the 2^N levels is verified; and

sequentially store each of the N bits of the next page of data after a respective one of the last N levels of the 2^N levels is verified.

17. The system of claim 15 or 16, wherein the multipurpose storage unit is configured to:

store the non-data page information before a last one of the 2^N levels is verified; and

store the one of the N bits of the next page of data after the last one of the 2^N levels is verified.

18. The system of any of claims 15-17, wherein at least one of the data storage units is configured to sequentially store the respective one of the N bits of the current page of data and a respective one of the N bits of the next page of data.

19. The system of claim 18, wherein one of the at least one of the data storage units is configured to:

store the respective one of the N bits of the current page of data before an (N-1)th-to-last level of the 2^N levels is verified; and

store the respective one of the N bits of the next page of data after the (N-1)th-to-last level of the 2^N levels is verified.

20. The system of any of claims 15-19, wherein the peripheral circuitry further comprises a word line driver coupled to the word lines and configured to:

apply a program voltage on a selected one of the word lines coupled to the selected row; and

sequentially apply 2^N - 1 verify voltages on the selected word line, the 2^N - 1 verify voltages corresponding to the 2^N - 1 levels of the 2^N levels.

21. The system of any of claims 13-20, wherein each of the cache storage unit, the multi-purpose storage unit, and the data storage units comprises a latch.

22. The system of any of claims 13-21, wherein the peripheral circuitry is further configured to program a next selected one of the rows of memory cells based on the next page of data after programming the selected row based on the current page of data.

23. A method for operating a memory device comprising a plurality of rows of memory cells, the method comprising:

receiving N bits of a current page of data;

storing one of the N bits of the current page of data in one cache storage unit and a corresponding one of the N bits of the current page of data in each of N-1 data storage units;

storing non-data page information in one multi-purpose storage unit;

programming a selected one of the rows of memory cells based on the current page of data;

sequentially verifying the selected row until an Nth-to-last level of 2^N levels;

receiving N bits of a next page of data;

after a respective one of the last N levels of the 2^N levels is verified, sequentially storing each of the N bits of the next page of data in the cache storage unit; and

after a last one of the 2^N levels is verified, storing one of the N bits of the next page of data in the multi-purpose storage unit.

24. The method of claim 23, further comprising:

before an (N-1)th-to-last level of the 2^N levels is verified, storing the respective one of the N bits of the current page of data in one of the data storage units; and

after the (N-1)th-to-last level of the 2^N levels is verified, storing a respective one of the N bits of the next page of data in the data storage unit.

25. The method of claim 23 or 24, further comprising programming a next selected one of the rows of memory cells based on the next page of data.

26. The method of any of claims 23-25, wherein the non-data page information includes a voltage level applied to a corresponding bit line.

27. The method of any of claims 23-26, wherein each of the cache storage unit, the multi-purpose storage unit, and the data storage units comprises a latch.

Background

The present disclosure relates to a memory device and a method of operating the same.

Flash memory is a low-cost, high-density, non-volatile solid-state storage medium that can be electrically erased and reprogrammed. Flash memory includes NOR flash memory and NAND flash memory. Various operations, such as read, program (write), and erase, can be performed on a flash memory to change the threshold voltage of each memory cell to a desired level. For NAND flash memory, an erase operation can be performed at the block level, and program or read operations can be performed at the page level.

Disclosure of Invention

In one aspect, a memory device includes an array of memory cells in a plurality of columns and a plurality of rows, a plurality of word lines respectively coupled to the rows of memory cells, a plurality of bit lines respectively coupled to the columns of memory cells, and peripheral circuitry coupled to the array of memory cells through the bit lines and word lines and configured to program a selected one of the rows of memory cells based on a current page of data. Each memory cell is configured to store a segment of N bits of data at one of 2^N levels, where N is an integer greater than 1. The peripheral circuitry includes a plurality of page buffer circuits respectively coupled to the bit lines. Each page buffer circuit includes one cache storage unit, one multi-purpose storage unit, and N-1 data storage units. The cache storage unit is configured to sequentially receive the N bits of the current page of data and the N bits of a next page of data, and to sequentially store one of the N bits of the current page of data and each of the N bits of the next page of data when programming the selected row based on the current page of data. The multi-purpose storage unit is configured to sequentially store non-data page information and one of the N bits of the next page of data when programming the selected row based on the current page of data. The data storage units are each configured to store a respective one of the N bits of the current page of data when programming the selected row based on the current page of data.

In another aspect, a system includes a memory device configured to store data and a memory controller coupled to the memory device and configured to control the memory device. The memory device includes an array of memory cells in a plurality of columns and a plurality of rows, a plurality of word lines respectively coupled to the rows of memory cells, a plurality of bit lines respectively coupled to the columns of memory cells, and peripheral circuitry coupled to the array of memory cells through the bit lines and word lines and configured to program a selected one of the rows of memory cells based on a current page of data. Each memory cell is configured to store a segment of N bits of data at one of 2^N levels, where N is an integer greater than 1. The peripheral circuitry includes a plurality of page buffer circuits respectively coupled to the bit lines. Each page buffer circuit includes one cache storage unit, one multi-purpose storage unit, and N-1 data storage units. The cache storage unit is configured to sequentially receive the N bits of the current page of data and the N bits of a next page of data, and to sequentially store one of the N bits of the current page of data and each of the N bits of the next page of data when programming the selected row based on the current page of data. The multi-purpose storage unit is configured to sequentially store non-data page information and one of the N bits of the next page of data when programming the selected row based on the current page of data. The data storage units are each configured to store a respective one of the N bits of the current page of data when programming the selected row based on the current page of data.

In yet another aspect, a method for operating a memory device is provided. The memory device includes a plurality of rows of memory cells. N bits of a current page of data are received. One of the N bits of the current page of data is stored in one cache storage unit, and a respective one of the N bits of the current page of data is stored in each of N-1 data storage units. Non-data page information is stored in a multi-purpose storage unit. A selected one of the rows of memory cells is programmed based on the current page of data. The selected row is sequentially verified until an Nth-to-last level of 2^N levels. N bits of a next page of data are received. After a respective one of the last N levels of the 2^N levels is verified, each of the N bits of the next page of data is stored in the cache storage unit in turn. After the last one of the 2^N levels is verified, one of the N bits of the next page of data is stored in the multi-purpose storage unit.

Drawings

The accompanying drawings, which are incorporated herein and form a part of the specification, illustrate aspects of the present disclosure and, together with the description, further serve to explain the principles of the disclosure and to enable a person skilled in the pertinent art to make and use the disclosure.

FIG. 1 illustrates a block diagram of a system having a memory device in accordance with some aspects of the present disclosure.

FIG. 2A illustrates a diagram of a memory card having a memory device in accordance with some aspects of the present disclosure.

Fig. 2B illustrates a diagram of a Solid State Drive (SSD) with memory devices according to some aspects of the disclosure.

FIG. 3 illustrates a schematic diagram of a memory device including peripheral circuitry in accordance with some aspects of the present disclosure.

Fig. 4 illustrates a side view of a cross section of a memory cell array including NAND memory strings, in accordance with some aspects of the present disclosure.

Fig. 5 illustrates a block diagram of a memory device including an array of memory cells and peripheral circuitry, in accordance with some aspects of the present disclosure.

FIG. 6 illustrates threshold voltage distributions of memory cells in a programming operation, in accordance with some aspects of the present disclosure.

Fig. 7 illustrates a detailed block diagram of a page buffer in a programming operation, according to some aspects of the present disclosure.

FIG. 8 illustrates a timing diagram for a multi-cache data load in a program operation, in accordance with some aspects of the present disclosure.

Fig. 9A and 9B illustrate waveforms of word line voltages applied to a selected word line in a programming operation according to some aspects of the present disclosure.

FIG. 10 illustrates a schematic diagram of multi-cache data loading in a programming operation, in accordance with aspects of the present disclosure.

FIG. 11 illustrates a flow chart of a method for operating a memory device in accordance with some aspects of the present disclosure.

The present disclosure will be described with reference to the accompanying drawings.

Detailed Description

While specific configurations and arrangements are discussed, it should be understood that this is done for illustrative purposes only. As such, other configurations and arrangements may be used without departing from the scope of the present disclosure. Furthermore, the present disclosure may also be used in various other applications. The functional and structural features as described in this disclosure may be combined, adjusted and modified with each other, and in a manner not specifically depicted in the drawings, so that such combinations, adjustments and modifications are within the scope of the present disclosure.

In general, terms may be understood at least in part from the context of their use. For example, the term "one or more" as used herein may be used to describe any feature, structure, or characteristic in the singular or may be used to describe a combination of features, structures, or characteristics in the plural, depending, at least in part, on the context. Similarly, terms such as "a" or "the" may also be understood to convey a singular use or to convey a plural use, depending, at least in part, on the context. Additionally, the term "based on" may be understood as not necessarily intended to convey an exclusive set of factors, and may instead allow for the presence of additional factors not necessarily explicitly described, again depending at least in part on the context.

Memory devices (e.g., NAND flash memory devices) can store more than a single bit of information in each memory cell using multiple levels (also referred to as states) in order to increase storage capacity and reduce cost per bit. In a programming operation, data may be programmed (written) into xLCs (e.g., multi-level cells (MLCs), triple-level cells (TLCs), quad-level cells (QLCs), etc.). For some memory devices with xLCs, a cache program command may be used to allow data insertion for one page of data while programming of another page of data is being performed. To shrink the peripheral circuit size, memory devices typically include only one cache latch for each bit line (BL), which allows only one bit of data from the next page of data (e.g., the lower page, "LP") to be inserted while programming with the current page of data. After programming with the current page of data is complete, the other bits of the next page of data (e.g., the middle page "MP" and upper page "UP") still need to be inserted. As a result, additional windows are required between programming adjacent pages of data to load the remaining portions of the next page of data, which degrades the performance of sequential programming operations, e.g., reduces programming speed.
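The cost of that loading window can be sketched with a back-of-envelope model (a hedged illustration; the function name and all timing figures are assumptions for illustration, not values from this disclosure):

```python
# Hypothetical model: with only `cached_bits` of the next page loadable
# behind the current program, the remaining bits cost a visible loading
# window between adjacent page programs. Timing figures are illustrative.

def sequential_program_time_us(pages: int, t_prog_us: float,
                               t_load_us: float, cached_bits: int,
                               n: int) -> float:
    """Total time to sequentially program `pages` pages of n-bit xLC data."""
    window_us = (n - cached_bits) * t_load_us  # un-overlapped bit loads
    return pages * t_prog_us + (pages - 1) * window_us

# TLC (n = 3): caching only the LP bit leaves a 2-bit window per page;
# caching all 3 bits removes the window entirely.
t_one_bit = sequential_program_time_us(100, 600.0, 30.0, cached_bits=1, n=3)
t_all_bits = sequential_program_time_us(100, 600.0, 30.0, cached_bits=3, n=3)
```

Under these assumed figures, the single-bit cache leaves roughly a 10% sequential-programming overhead that full multi-cache loading eliminates.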

Although the data loading windows may be reduced or even avoided by some multi-cache data loading schemes that use not only the cache latch but also some data latches to cache more data bits from the next page of data, those schemes still require at least the same number of data latches as the number of data bits in each xLC (e.g., three data latches for TLC and four data latches for QLC) plus a dedicated cache latch. As the total number of data latches and cache latches scales with the number of bit lines, the size of the page buffer holding the latches becomes a major obstacle to shrinking the memory device as memory cell density increases.

To address one or more of the above issues, the present disclosure introduces a solution that reuses some latches in the page buffer for multi-cache data loading in a programming operation. As a result, the number of latches required per bit line can be further reduced to, for example, 5 latches, while still reducing or even avoiding the data loading window for sequential program operations. The cache latch may be used not only to cache the next page of data but also to store a portion of the current page of data, thereby replacing one of the dedicated data latches. In some embodiments, to avoid the data loading window, another latch in the page buffer (e.g., a latch used to store bit line voltage level information) is also reused to cache a portion of the next page of data at some stage while programming with the current page of data is being performed. Thus, sequential programming performance (e.g., programming speed) can be improved without any circuit size cost.
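The reuse schedule described above can be sketched as a small simulation for TLC (N = 3). This is a minimal sketch only: the latch names, the exact verify levels at which each next-page bit is cached, and which latch receives which bit are assumptions for illustration, not the definitive implementation:

```python
# Illustrative 5-latch page buffer for TLC (N = 3): one cache latch, two
# data latches, one 3-BL latch (bit line voltage info), one sense latch.
# The schedule below (which latch is freed at which verify level) is a
# hypothetical reading of the scheme, not taken verbatim from the claims.

def latch_states_tlc(current: list, nxt: list) -> dict:
    """Return latch contents after the last verify level of a TLC program."""
    n = 3
    latches = {
        "cache": current[0],   # cache latch also holds one current-page bit
        "data1": current[1],   # only N - 1 = 2 dedicated data latches
        "data2": current[2],
        "multi": "BL-info",    # 3-BL latch: non-data page information
        "sense": None,         # sense/program latch (not modeled here)
    }
    for level in range(1, 2 ** n):        # verify levels 1 .. 2**n - 1
        if level == 2 ** n - n:           # Nth-to-last level verified:
            latches["cache"] = nxt[0]     # cache latch freed for next page
        elif level == 2 ** n - n + 1:     # (N-1)th-to-last level verified:
            latches["data1"] = latches["cache"]  # data latch reused
            latches["cache"] = nxt[1]
        elif level == 2 ** n - 1:         # last level verified:
            latches["multi"] = latches["cache"]  # 3-BL latch reused
            latches["cache"] = nxt[2]
    return latches

state = latch_states_tlc(["c0", "c1", "c2"], ["n0", "n1", "n2"])
```

In this sketch, all three next-page bits reside in the cache, data, and 3-BL latches by the end of the current program, so the next program can start without a loading window.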

Fig. 1 illustrates a block diagram of a system 100 having a memory device in accordance with some aspects of the present disclosure. System 100 may be a mobile phone, desktop computer, laptop computer, tablet computer, vehicle computer, game console, printer, positioning device, wearable electronic device, smart sensor, Virtual Reality (VR) device, Augmented Reality (AR) device, or any other suitable electronic device having storage therein. As shown in fig. 1, system 100 may include a host 108 and a memory system 102, memory system 102 having one or more memory devices 104 and a memory controller 106. Host 108 may be a processor (e.g., a Central Processing Unit (CPU)) or a system on a chip (SoC) (e.g., an Application Processor (AP)) of an electronic device. Host 108 may be configured to send data to memory device 104 or receive data from memory device 104.

Memory device 104 may be any memory device disclosed in the present disclosure. As disclosed in detail below, memory device 104 (e.g., a NAND flash memory device) may perform a programming operation based on pages of data, each page having N-bit data per xLC (i.e., per memory cell configured to store a segment of N-bit data at one of 2^N levels, where N is an integer greater than 1). Consistent with the scope of the present disclosure, a multi-cache data loading scheme may be implemented with a page buffer (e.g., having a 5-latch configuration) of memory device 104, the page buffer having one cache storage unit (e.g., a cache latch) configured to sequentially store one bit of the current page of data and each bit of the next page of data, and one multi-purpose storage unit (e.g., a 3-bit line (BL) latch) configured to sequentially store non-data page information and one bit of the next page of data.

According to some embodiments, memory controller 106 is coupled to memory device 104 and host 108, and is configured to control memory device 104. Memory controller 106 may manage the data stored in memory device 104 and communicate with host 108. In some implementations, the memory controller 106 is designed for operation in a low duty cycle environment, such as a Secure Digital (SD) card, Compact Flash (CF) card, Universal Serial Bus (USB) flash drive, or other media for use in electronic devices such as personal computers, digital cameras, mobile phones, and the like. In some implementations, the memory controller 106 is designed for operation in a high duty cycle environment, such as an SSD or embedded multimedia card (eMMC), used as data storage for mobile devices (such as smartphones, tablet computers, laptop computers, and the like) and in enterprise storage arrays. The memory controller 106 may be configured to control operations of the memory device 104, such as read, erase, and program operations. The memory controller 106 may also be configured to manage various functions with respect to the data stored or to be stored in the memory device 104, including but not limited to bad block management, garbage collection, logical-to-physical address translation, wear leveling, and the like. In some implementations, the memory controller 106 is further configured to process error correction codes (ECCs) with respect to data read from or written to the memory device 104. The memory controller 106 may also perform any other suitable functions, such as formatting the memory device 104. The memory controller 106 may communicate with an external device (e.g., the host 108) according to a particular communication protocol.
For example, the memory controller 106 may communicate with the external device via at least one of various interface protocols, such as a USB protocol, a multi-media card (MMC) protocol, a Peripheral Component Interconnect (PCI) protocol, a PCI express (PCI-E) protocol, an Advanced Technology Attachment (ATA) protocol, a serial ATA protocol, a parallel ATA protocol, a Small Computer System Interface (SCSI) protocol, an Enhanced Small Disk Interface (ESDI) protocol, an Integrated Drive Electronics (IDE) protocol, a Firewire protocol, and so forth.

The memory controller 106 and the one or more memory devices 104 may be integrated into various types of storage devices, for example, included in the same package (e.g., a Universal Flash Storage (UFS) package or an eMMC package). That is, the memory system 102 may be implemented and packaged into different types of end electronic products. In one example as shown in FIG. 2A, the memory controller 106 and a single memory device 104 may be integrated into a memory card 202. The memory card 202 may include a PC card (PCMCIA), a CF card, a Smart Media (SM) card, a memory stick, a multimedia card (MMC, RS-MMC, MMCmicro), an SD card (SD, miniSD, microSD, SDHC), a UFS, and the like. The memory card 202 may also include a memory card connector 204 that couples the memory card 202 with a host (e.g., host 108 in FIG. 1). In another example as shown in FIG. 2B, the memory controller 106 and a plurality of memory devices 104 may be integrated into an SSD 206. The SSD 206 may also include an SSD connector 208 that couples the SSD 206 with a host (e.g., host 108 in FIG. 1). In some implementations, the storage capacity and/or operating speed of the SSD 206 is greater than that of the memory card 202.

Fig. 3 illustrates a schematic circuit diagram of a memory device 300 including peripheral circuitry in accordance with some aspects of the present disclosure. Memory device 300 may be an example of memory device 104 in FIG. 1. The memory device 300 may include a memory cell array 301 and peripheral circuitry 302 coupled to the memory cell array 301. The memory cell array 301 may be a NAND flash memory cell array in which the memory cells 306 are provided in the form of an array of NAND memory strings 308, each extending vertically above a substrate (not shown). In some implementations, each NAND memory string 308 includes a plurality of memory cells 306 coupled in series and stacked vertically. Each memory cell 306 can hold a continuous analog value, e.g., a voltage or charge, that depends on the number of electrons trapped within a region of the memory cell 306. Each memory cell 306 may be either a floating-gate type memory cell including a floating-gate transistor or a charge-trap type memory cell including a charge-trap transistor.

In some implementations, each memory cell 306 is a single-level cell (SLC) that has two possible memory states (levels) and thus can store one bit of data. For example, a first memory state "0" may correspond to a first threshold voltage range, and a second memory state "1" may correspond to a second threshold voltage range. In some implementations, each memory cell 306 is an xLC capable of storing more than a single bit of data in four or more memory states (levels). For example, an xLC may store two bits per cell (MLC), three bits per cell (TLC), or four bits per cell (QLC). Each xLC can be programmed to assume one of a range of possible nominal storage values (i.e., one of 2^N segments of N-bit data, e.g., a Gray code). In one example, an MLC may be programmed from an erased state to assume one of three possible programmed levels by writing one of three possible nominal storage values to the cell. A fourth nominal storage value may be used for the erased state.
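The mapping from an N-bit segment to a nominal storage value can be illustrated with a reflected Gray code (a common coding choice for xLC NAND; a hedged sketch with hypothetical function names, not necessarily the exact coding used by this disclosure):

```python
# Illustrative only: map an N-bit data segment to one of 2**N program
# levels via a reflected (binary) Gray code, so that adjacent
# threshold-voltage levels differ in exactly one bit.

def gray_codes(n: int) -> list[int]:
    """Reflected Gray code values in level order, level 0 .. 2**n - 1."""
    return [i ^ (i >> 1) for i in range(2 ** n)]

def level_for_segment(segment: int, n: int) -> int:
    """Program level whose Gray code equals the n-bit data segment."""
    return gray_codes(n).index(segment)

# TLC (N = 3): 8 levels; any two adjacent levels differ by a single bit,
# which limits a small threshold-voltage drift at read time to a
# single-bit error.
codes = gray_codes(3)
```

For example, `level_for_segment(0b010, 3)` returns level 3, because 0b010 is the fourth code in the reflected sequence 0, 1, 3, 2, 6, 7, 5, 4.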

As shown in fig. 3, each NAND memory string 308 may also include a Source Select Gate (SSG) transistor 310 at its source terminal and a Drain Select Gate (DSG) transistor 312 at its drain terminal. The SSG transistors 310 and the DSG transistors 312 may be configured to activate selected NAND memory strings 308 (columns of the array) during read and program operations. In some implementations, the sources of the NAND memory strings 308 in the same block 304 are coupled by the same Source Line (SL)314 (e.g., a common SL). In other words, according to some embodiments, all NAND memory strings 308 in the same block 304 have an Array Common Source (ACS). According to some embodiments, the drain of each NAND memory string 308 is coupled to a respective bit line 316, and data can be read from or written to the bit line 316 via an output bus (not shown). In some implementations, each NAND memory string 308 is configured to be selected or deselected by applying a select or deselect voltage to the gate of the corresponding DSG transistor 312 via one or more DSG lines 313 and/or by applying a select or deselect voltage to the gate of the corresponding SSG transistor 310 via one or more SSG lines 315.

As shown in fig. 3, the NAND memory strings 308 may be organized into a plurality of blocks 304, each of the plurality of blocks 304 may have a common source line 314 (e.g., coupled to ground). In some implementations, each block 304 is the basic unit of data for an erase operation, i.e., all memory cells 306 on the same block 304 are erased at the same time. To erase memory cells 306 in select block 304, a source line 314 coupled to select block 304 and unselected blocks 304 in the same plane as select block 304 may be biased with an erase voltage (Vers) (e.g., a high positive bias voltage (e.g., 20V or higher)). The memory cells 306 of adjacent NAND memory strings 308 may be coupled by word lines 318, the word lines 318 selecting which row of memory cells 306 is affected by read and program operations. In some embodiments, each word line 318 is coupled to a page 320 of memory cells 306, page 320 being the basic unit of data for read and program operations. The size of a page 320 in bits may be related to the number of NAND memory strings 308 coupled by a word line 318 in one block 304. Each word line 318 may include a plurality of control gates (gate electrodes) at each memory cell 306 in a respective page 320 and a gate line coupling the control gates.

As shown in fig. 3, the memory cell array 301 may include an array of memory cells 306 in a plurality of rows and a plurality of columns in each block 304. According to some implementations, a row of memory cells 306 corresponds to one or more pages 320 and a column of memory cells corresponds to one NAND memory string 308. Multiple rows of memory cells 306 may be respectively coupled to word lines 318, and multiple columns of memory cells 306 may be respectively coupled to bit lines 316. Peripheral circuitry 302 may be coupled to memory cell array 301 through bit lines 316 and word lines 318.

Fig. 4 illustrates a side view of a cross section of a memory cell array 301 including NAND memory strings 308, in accordance with some aspects of the present disclosure. As shown in fig. 4, NAND memory strings 308 may extend vertically through memory stack layer 404 above substrate 402. Substrate 402 may include silicon (e.g., single crystal silicon), silicon germanium (SiGe), gallium arsenide (GaAs), germanium (Ge), silicon-on-insulator (SOI), germanium-on-insulator (GOI), or any other suitable material.

Memory stack layer 404 may include interleaved gate conductive layers 406 and gate-to-gate dielectric layers 408. The number of pairs of gate conductive layers 406 and gate-to-gate dielectric layers 408 in memory stack layer 404 may determine the number of memory cells 306 in memory cell array 301. The gate conductive layer 406 may include a conductive material including, but not limited to, tungsten (W), cobalt (Co), copper (Cu), aluminum (Al), polysilicon, doped silicon, silicide, or any combination thereof. In some embodiments, each gate conductive layer 406 includes a metal layer, for example, a tungsten layer. In some embodiments, each gate conductive layer 406 comprises a doped polysilicon layer. Each gate conductive layer 406 may include a control gate surrounding a memory cell 306, a gate of a DSG transistor 312, or a gate of an SSG transistor 310, and may extend laterally at the top of the memory stack 404 as a DSG line 313, laterally at the bottom of the memory stack 404 as an SSG line 315, or laterally between the DSG line 313 and the SSG line 315 as a word line 318.

As shown in FIG. 4, NAND memory string 308 includes a channel structure 412 that extends vertically through memory stack layer 404. In some implementations, the channel structure 412 includes a channel hole filled with semiconductor material(s) (e.g., as a semiconductor channel 420) and dielectric material(s) (e.g., as a memory film 418). In some embodiments, the semiconductor channel 420 comprises silicon, e.g., polysilicon. In some embodiments, the memory film 418 is a composite dielectric layer that includes a tunneling layer 426, a storage layer 424 (also referred to as a "charge trapping/storage layer"), and a blocking layer 422. The channel structure 412 may have a cylindrical shape (e.g., a pillar shape). According to some embodiments, the semiconductor channel 420, the tunneling layer 426, the storage layer 424, and the blocking layer 422 are radially arranged in this order from the center of the pillar toward the outer surface of the pillar. The tunneling layer 426 may include silicon oxide, silicon oxynitride, or any combination thereof. The storage layer 424 may include silicon nitride, silicon oxynitride, or any combination thereof. The blocking layer 422 may include silicon oxide, silicon oxynitride, a high dielectric constant (high-k) dielectric, or any combination thereof. In one example, memory film 418 may include a composite layer of silicon oxide/silicon oxynitride/silicon oxide (ONO).

According to some embodiments, as shown in fig. 4, a well 414 (e.g., a P-well and/or an N-well) is formed in the substrate 402 and the source terminal of the NAND memory string 308 is in contact with the well 414. For example, the source line 314 may be coupled to the well 414 to apply an erase voltage to the well 414 (i.e., the source of the NAND memory string 308) during an erase operation. In some implementations, the NAND memory string 308 also includes a channel plug 416 at the drain end of the NAND memory string 308. It should be understood that although not shown in fig. 4, additional components of memory cell array 301 may be formed, including but not limited to gate line apertures/source contacts, local contacts, interconnect layers, and the like.

Referring back to fig. 3, peripheral circuitry 302 may be coupled to memory cell array 301 through bit line 316, word line 318, source line 314, SSG line 315, and DSG line 313. Peripheral circuitry 302 may include any suitable analog, digital, and mixed-signal circuitry for facilitating operation of memory cell array 301 by applying voltage and/or current signals to each target memory cell 306 and sensing voltage and/or current signals from each selected memory cell 306 via bit line 316, word line 318, source line 314, SSG line 315, and DSG line 313. The peripheral circuitry 302 may include various types of peripheral circuits formed using metal-oxide-semiconductor (MOS) technology. For example, fig. 5 shows some exemplary peripheral circuits: peripheral circuitry 302 includes page buffer/sense amplifiers 504, a column decoder/bit line driver 506, a row decoder/word line driver 508, a voltage generator 510, control logic 512, registers 514, an interface 516, and a data bus 518. It should be understood that additional peripheral circuits not shown in fig. 5 may also be included in some examples.

The page buffer/sense amplifier 504 may be configured to read data from the memory cell array 301 and program (write) data to the memory cell array 301 according to control signals from the control logic 512. In one example, the page buffer/sense amplifier 504 may store a page of program data (write data, also referred to herein as a "data page") to be programmed into one page 320 of the memory cell array 301. In another example, the page buffer/sense amplifier 504 may verify the programmed select memory cells 306 in each program/verify cycle of a program operation to ensure that the data has been properly programmed into the memory cells 306 coupled to the selected word line 318. In yet another example, the page buffer/sense amplifier 504 may also sense the low-power signals from the bit lines 316 that represent data bits stored in the memory cells 306 and amplify small voltage swings to recognizable logic levels in a read operation. As described in detail below and consistent with the scope of the present disclosure, the page buffer/sense amplifier 504 may include a plurality of page buffer circuits respectively coupled to the bit lines 316, and each page buffer circuit includes a set of storage units (e.g., latches) for temporarily storing a segment of N-bit data (e.g., in the form of gray code) received from the data bus 518 and providing the segment of N-bit data to the corresponding select memory cell 306 through the corresponding bit line 316 in a programming operation using a multi-cache loading scheme.

The column decoder/bit line driver 506 may be configured to be controlled by the control logic 512 and select one or more NAND memory strings 308 by applying a bit line voltage generated from the voltage generator 510. The row decoder/word line drivers 508 may be configured to be controlled by the control logic 512 and to select/deselect the blocks 304 of the memory cell array 301 and to select/deselect the word lines 318 of the blocks 304. The row decoder/word line driver 508 may also be configured to drive the word line 318 using the word line voltage generated from the voltage generator 510. In some implementations, the row decoder/word line drivers 508 can also select/deselect and drive the SSG lines 315 and DSG lines 313. The voltage generator 510 may be configured to be controlled by the control logic 512 and to generate a word line voltage (e.g., a read voltage, a program voltage, a channel pass voltage, a local voltage, a verify voltage, etc.), a bit line voltage, and a source line voltage to be supplied to the memory cell array 301.

Control logic 512 may be coupled to each of the peripheral circuits described above and configured to control the operation of each peripheral circuit. Registers 514 may be coupled to control logic 512 and include status, command and address registers for storing status information, command operation codes (OP codes) and command addresses for controlling the operation of each peripheral circuit. Interface 516 may be coupled to control logic 512 and act as a control buffer to buffer and relay control commands received from a memory controller (e.g., 106 in FIG. 1) and/or a host (e.g., 108 in FIG. 1) to control logic 512 and to buffer and relay status information received from control logic 512 to the memory controller and/or host. The interface 516 may also be coupled to the column decoder/bit line drivers 506 via a data bus 518 and act as a data input/output (I/O) interface and data buffer to buffer data and relay data to or from the memory cell array 301.

FIG. 6 illustrates example threshold voltage distributions of memory cells in a programming operation, according to some aspects of the present disclosure. As described above, each memory cell 306 may be configured to store a segment of N-bit data at one of 2^N levels, where N is an integer greater than 1 (e.g., N = 2 for MLC, N = 3 for TLC, N = 4 for QLC, etc.). Each level may correspond to one of 2^N threshold voltage (Vth) ranges of memory cell 306. Taking TLC (where N = 3) as an example, as shown in fig. 6, memory cell 306 can be programmed to one of 8 levels, including one level for the erased state and 7 levels for the programmed states. Each level may correspond to a respective threshold voltage (Vth) range of the memory cell 306. For example, the level corresponding to the lowest threshold voltage range (leftmost threshold voltage distribution in FIG. 6) may be considered level 0, the level corresponding to the second lowest threshold voltage range (the second threshold voltage distribution from the left in FIG. 6) may be considered level 1, and so on, up to level 7, corresponding to the highest threshold voltage range (rightmost threshold voltage distribution in fig. 6).
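The level-to-threshold-range mapping above can be sketched as a toy classifier (the seven boundary voltages are invented placeholders, not device values):

```python
import bisect

# Hypothetical read boundaries separating the 8 TLC threshold voltage
# ranges (volts); a real device derives these from the Vth distributions.
READ_BOUNDARIES = [0.0, 0.7, 1.4, 2.1, 2.8, 3.5, 4.2]

def vth_to_level(vth: float) -> int:
    # Count how many boundaries lie at or below vth: that index is the level.
    return bisect.bisect_right(READ_BOUNDARIES, vth)

assert vth_to_level(-1.0) == 0  # lowest range -> level 0 (erased state)
assert vth_to_level(5.0) == 7   # highest range -> level 7
```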

On the other hand, each level may correspond to one of the 2^N segments of the N-bit data to be stored in the selected memory cell 306. In some embodiments, the 2^N segments of N-bit data are represented by gray codes (in the form of gray code). A gray code (also known as reflected binary code (RBC) or reflected binary) is an ordering of the binary numeral system such that two successive values differ in only one bit (binary digit). For example, table 1 below shows an example of a binary code representing the one-to-one mapping between the 8 levels (LV0 to LV7) and the 8 segments of 3-bit data used in the example of fig. 6. As shown in table 1, each segment of 3-bit data may consist of three binary values (b1, b2, and b3). In one example, level 1 may correspond to a segment of 3-bit data having a value of 000. In another example, level 7 may correspond to another segment of 3-bit data having a value of 101.

TABLE 1

LV  0  1  2  3  4  5  6  7
b1  1  0  1  0  0  1  0  1
b2  1  0  0  1  0  1  1  0
b3  1  0  0  0  1  0  1  1
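The mapping of Table 1 can be written out as a lookup table (an illustrative sketch; the tuple order (b1, b2, b3) follows the table rows):

```python
# One-to-one mapping between the 8 levels (LV0-LV7) and the 8 segments of
# 3-bit data from Table 1, each segment given as (b1, b2, b3).
LEVEL_TO_BITS = {
    0: (1, 1, 1), 1: (0, 0, 0), 2: (1, 0, 0), 3: (0, 1, 0),
    4: (0, 0, 1), 5: (1, 1, 0), 6: (0, 1, 1), 7: (1, 0, 1),
}

# Inverse direction: which level a given 3-bit segment selects.
BITS_TO_LEVEL = {bits: lv for lv, bits in LEVEL_TO_BITS.items()}

assert LEVEL_TO_BITS[1] == (0, 0, 0)  # level 1 <-> 000, as in the text
assert BITS_TO_LEVEL[(1, 0, 1)] == 7  # 101 <-> level 7
```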

Referring also to fig. 5, in a programming operation, a current data page having N pages (also referred to as portions) of the N-bit data may be used to program the memory cells 306 of a selected row coupled to a selected word line 318. In other words, the peripheral circuitry 302 may be configured to program the memory cells 306 of the selected row based on the current data page (N-bit data with N pages). In some implementations, the user data is transmitted over the data bus 518 to the page buffer/sense amplifiers 504, and the page buffer/sense amplifiers 504 are configured to convert the user data to each data page to be programmed into the respective row of memory cells 306 based on a preset gray code. According to some embodiments, based on a preset gray code defining a mapping of each programming level to a corresponding segment of N-bit data, control logic 512 is configured to send a control signal (e.g., an enable signal) to page buffer/sense amplifier 504 to allow page buffer/sense amplifier 504 to generate sequential data pages for sequential programming operations. Depending on the number N (e.g., whether the memory cell 306 is MLC, TLC, QLC, etc.), each data page may include N pages (also referred to as portions) that may be loaded into the page buffer/sense amplifier 504 separately and moved around within the page buffer/sense amplifier 504, as described in detail below. During an ongoing programming operation, the current data page may be temporarily stored in the page buffer/sense amplifier 504, and the page buffer/sense amplifier 504 may be configured to provide a corresponding segment of N-bit data to each memory cell 306 coupled to the selected word line 318 through the corresponding bit line 316.

For example, fig. 7 shows a detailed block diagram of a page buffer/sense amplifier 504 in a programming operation, according to some aspects of the present disclosure. In some implementations, the page buffer/sense amplifier 504 includes a plurality of page buffer circuits 702, each page buffer circuit 702 coupled to a respective one of the bit lines 316. In other words, each page buffer circuit 702 may be coupled to a respective column of memory cells 306 (e.g., NAND memory strings 308) by a corresponding bit line 316 and configured to temporarily store a segment of N-bit data (i.e., N bits of a current page of data) for programming a respective selected memory cell 306 (coupled to the selected word line 318 and the corresponding bit line 316) in a programming operation. All of the page buffer circuits 702 together may temporarily store the N pages of an entire current page of data for programming memory cells 306 (e.g., page 320 of memory cells 306) of a selected row coupled to the selected word line 318 in a programming operation. As described above, in some embodiments, page buffer circuit 702 is further configured to preprocess a respective portion of user data received from data bus 518 and convert it to the corresponding N bits of the current data page based on a preset gray code. For example, for TLC (where N = 3), each page buffer circuit 702 may be configured to temporarily store a respective one of the 8 sets of 3 bits of the current data page as shown in table 1 above, with the 8 sets of 3 bits corresponding to 8 levels, respectively.
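The conversion from per-cell 3-bit segments into the three pages of a data page can be sketched as follows (a hypothetical helper over the Table 1 mapping; b1, b2, and b3 are taken as LP, MP, and UP, as the later discussion of fig. 10 suggests):

```python
# Table 1 mapping: level -> (b1, b2, b3), interpreted here as (LP, MP, UP).
TABLE1 = {0: (1, 1, 1), 1: (0, 0, 0), 2: (1, 0, 0), 3: (0, 1, 0),
          4: (0, 0, 1), 5: (1, 1, 0), 6: (0, 1, 1), 7: (1, 0, 1)}

def levels_to_pages(levels):
    # Each cell contributes one bit to each of the three pages (LP/MP/UP).
    lp = [TABLE1[lv][0] for lv in levels]
    mp = [TABLE1[lv][1] for lv in levels]
    up = [TABLE1[lv][2] for lv in levels]
    return lp, mp, up

lp, mp, up = levels_to_pages([0, 7, 4])
assert (lp[1], mp[1], up[1]) == (1, 0, 1)  # the cell targeting LV7 gets 101
```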

In order to reduce or even avoid data loading windows between adjacent pages of data for programming different rows of memory cells 306 in sequential programming operations, each page buffer circuit 702 may also be configured to cache a portion or all of a segment of N-bit data (i.e., N bits of a next page of data) for programming a next selected memory cell 306 in a next programming operation while a currently selected memory cell 306 is being programmed in an ongoing programming operation. All of the page buffer circuits 702 together may follow a multi-cache data loading scheme to cache one or more of the N pages of the entire next page of data for programming the next selected row of memory cells 306 coupled to the next selected word line 318 (e.g., the memory cells 306 of the next page 320) in the current programming operation.

For example, FIG. 8 illustrates a timing diagram for a multi-cache data load in a program operation, according to some aspects of the present disclosure. As shown in fig. 8, again taking TLC (where N = 3) as an example, 3 pages (PG 0, PG 1, and PG 2) of the 1st data page may be loaded and stored in page buffer/sense amplifiers 504 and used to program memory cells 306 of row 1. During the time period (tPROG 1) that the memory cells 306 of row 1 are programmed, the 3 pages (PG 3, PG 4, and PG 5) of the 2nd data page may also be loaded and cached in the page buffer/sense amplifiers 504. In other words, the 2nd data page may be ready before tPROG 1 ends, so that programming of the memory cells 306 of row 2 may begin immediately after programming of the memory cells 306 of row 1 without any window for loading the 2nd data page. Similarly, during the time period (tPROG 2) that the memory cells 306 of row 2 are programmed, the 3 pages of the 3rd data page (PG 6, PG 7, and PG 8) may also be loaded and cached in the page buffer/sense amplifiers 504. As a result, the performance of sequential program operations may be improved by a multi-cache data loading scheme.
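The benefit shown in Fig. 8 can be sketched with a toy timing model (the formula and all numbers are illustrative assumptions, not device specifications):

```python
# With caching, every load after the first hides under the previous row's
# tPROG, so the load window between rows disappears whenever the total
# load time fits inside tPROG.
def total_time(num_rows: int, t_load: float, t_prog: float, cached: bool) -> float:
    if cached:
        return t_load + num_rows * max(t_prog, t_load)  # only 1st load exposed
    return num_rows * (t_load + t_prog)                 # load before every row

# Example: 3 rows, 50 us to load a data page, 600 us per tPROG.
assert total_time(3, 50, 600, cached=True) < total_time(3, 50, 600, cached=False)
```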

Referring back to fig. 7, to implement a multi-cache data loading scheme for sequential programming operations, each page buffer circuit 702 may include a set of data storage units 706 and cache storage units (DCs) 708. During a currently ongoing programming operation that programs the memory cells 306 of the selected row based on the current page of data, each data storage unit 706 may be configured to store a respective one of the N bits of the current page of data, and the cache storage unit 708 may be configured to store each of the N bits of the next page of data in turn (i.e., cache the N bits of the next page of data). According to some embodiments, to reduce the number of memory cells and the size of the page buffer circuit 702, the number of cache memory cells 708 is limited to one, i.e., a single cache memory cell 708 that can only store a single bit of data at the same time. Thus, the single cache storage unit 708 is configured to store each of the N bits of the next page of data in turn at different time periods during the current programming operation, as described in detail below. Further, due to the limited number (e.g., one) of the cache storage units 708, one or more of the data storage units 706 may be configured to also store one of the N bits of the next page of data (i.e., also perform a cache function) during the current programming operation when the storage bits of the current page of data are no longer needed. For example, the at least one data storage unit 706 may be configured to sequentially store a respective one of the N bits of the current page of data and a respective one of the N bits of the next page of data.

Existing multi-cache data loading schemes require the number of data storage units in each page buffer circuit 702 to be at least the same as the number of bits of the data segment used to program the corresponding select memory cell 306, i.e., N data storage units, because a single cache storage unit is dedicated to caching the data of the next data page. Unlike prior approaches and consistent with the scope of the present disclosure, the single cache storage unit 708 in the page buffer circuit 702 in fig. 7 may also be configured to store one of the N bits of the current data page. That is, according to some embodiments, cache storage unit 708 is configured to store one of the N bits of the current data page and each of the N bits of the next data page in sequence. In other words, the cache storage unit 708 may function as both a data storage unit and a cache storage unit in a time-division manner, in place of one of the data storage units 706 in each page buffer circuit 702. In some embodiments, as shown in FIG. 7, the number of data storage units 706 in each page buffer circuit 702 thus becomes N-1 (D1 to DN-1). Thus, the total number of data storage units 706 and cache storage units 708 may be reduced from N+1 to N compared to existing multi-cache data loading schemes.

It should be appreciated that the N storage units in total (the N-1 data storage units 706 plus the cache storage unit 708) can reduce the data loading window by caching N-1 of the N bits of the next data page while the memory cells of the currently selected row are programmed based on the current data page, but may not completely avoid the data loading window. Thus, consistent with the scope of the present disclosure, in some embodiments, another storage unit in each page buffer circuit 702 that stores non-data page information is configured to store the non-data page information and one of the N bits of the next data page in turn, thereby enabling all N bits of the next data page to be cached during the current programming operation to avoid the data loading window. That is, the page buffer circuit 702 may include a multi-purpose storage unit that, in a time-division manner, stores non-data page information and caches data of the next data page.

Each page buffer circuit 702 may include a plurality of storage units for storing non-data page information (i.e., any information other than the data bits in a data page). As shown in fig. 7, in some embodiments, the page buffer circuit 702 includes a sense/program storage unit (DS) 712 configured to store information indicating whether the current operation performed by the page buffer/sense amplifier 504 is a read operation or a program operation, and a 3BL storage unit (DL) 710 configured to store bias information of the respective bit line 316 coupled to the page buffer circuit 702. In some embodiments, 3BL storage unit 710 is a multi-purpose storage unit that acts as both a 3BL storage unit and a cache storage unit in a time-division manner. As shown in fig. 7, each page buffer circuit 702 may further include a bias circuit 704, the bias circuit 704 coupled to the respective bit line 316 and configured to apply a bit line voltage to the corresponding select memory cell 306 coupled to the respective bit line 316 in a programming operation. For example, a high voltage level and a low voltage level may be used as the bit line voltage to bias the respective bit line 316, depending on whether the corresponding select memory cell 306 passes verification at the respective level according to the N-bit data used to program the select memory cell 306. In some embodiments, to optimize the threshold voltage distributions (e.g., as shown in fig. 6), for example, to expand the read margin between adjacent levels and reduce the width of each level, a third voltage level may also be used to bias the bit line. That is, three voltage levels (e.g., high, medium, and low) may be applied to the respective bit line 316 (referred to herein as 3BL). In some implementations, the voltage level (e.g., the 3BL bias) applied to the corresponding bit line 316 is the non-data page information stored in the 3BL storage unit 710.

It should be appreciated that although 3BL memory cell 710 is described herein as an example of a multipurpose memory cell for implementing the multi-cache data loading scheme disclosed in the present disclosure, any suitable non-data page memory cell in page buffer circuit 702 (e.g., sense/program memory cell 712), or any other non-data page memory cell not shown in fig. 7, may be used as a multipurpose memory cell in some examples without adding additional memory cells into page buffer circuit 702. It should also be understood that each memory cell in page buffer circuit 702 (including each data memory cell 706, cache memory cell 708, 3BL memory cell 710, and sense/program memory cell 712) may be any circuit having two stable states for storing a single bit of data, e.g., a latch or flip-flop. In one example, each of data storage unit 706, cache storage unit 708, 3BL storage unit 710, and sense/program storage unit 712 includes a latch. In some embodiments, page buffer circuit 702 has a 5-latch configuration that includes one cache latch, two data latches, one 3BL latch, and one sense/program latch. In some embodiments, the cache storage unit 708 includes one cache latch, the data storage unit 706 includes two data latches, and the multipurpose storage unit includes one 3BL latch.
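The 5-latch configuration described above can be sketched as a plain data structure (the names DC/D1/D2/DL/DS follow fig. 7; the single-bit integer encoding is an assumption for illustration):

```python
from dataclasses import dataclass

# For TLC (N = 3): one cache latch, N - 1 = 2 data latches, one 3BL
# (multi-purpose) latch, and one sense/program latch, each holding one bit.
@dataclass
class PageBufferLatches:
    DC: int = 0  # cache latch: a current-page bit, then next-page bits in turn
    D1: int = 0  # data latch 1 (e.g., the current LP)
    D2: int = 0  # data latch 2 (e.g., the current MP)
    DL: int = 0  # 3BL latch: bit line bias info, later a next-page bit
    DS: int = 0  # sense/program latch: read vs. program flag

buf = PageBufferLatches(DC=1, D1=0, D2=1, DL=0, DS=1)
assert len(vars(buf)) == 5  # N + 2 latches in total for N = 3
```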

To perform a program operation, in addition to the page buffer/sense amplifier 504 providing each select memory cell 306 with the corresponding segment of N-bit data, the row decoder/word line driver 508 may be configured to apply a program voltage and verify voltages to the selected word line 318 coupled to the selected row of memory cells 306 in one or more program/verify cycles, to raise the threshold voltage of each select memory cell 306 into the desired level (into the desired threshold voltage range) based on the corresponding segment of N-bit data. For example, fig. 9A and 9B show waveforms of word line voltages applied to a selected word line in a program operation. As shown in fig. 9A, the program operation includes one or more program/verify cycles 902. As shown in FIG. 9B, in each program/verify cycle 902, the row decoder/word line driver 508 may be configured to apply a program voltage (Vpgm) on the selected word line 318 and then, in turn, 2^N - 1 verify voltages (Vvf) with incremental changes in voltage level. The 2^N - 1 verify voltages may correspond to the 2^N - 1 levels (e.g., the 2^N - 1 program levels, excluding the one erase level) of the 2^N levels. That is, the peripheral circuits 302 may be configured to sequentially verify the memory cells 306 of the selected row at the 2^N - 1 levels of the 2^N levels. Each select memory cell 306 may be programmed to one of the 2^N levels based on the corresponding N-bit data to be stored in the select memory cell 306 (i.e., the N bits of the current data page stored in the corresponding page buffer circuit 702). Still taking TLC (where N = 3) as an example, a select memory cell 306 may be sequentially programmed to one of the 8 levels (e.g., as shown in fig. 6) by applying 7 verify voltages, each corresponding to one of the 7 program levels.
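The cycle structure of figs. 9A and 9B can be sketched as a pulse schedule (the starting voltage, step size, and cycle count are invented placeholders; an incremental-step program scheme is assumed):

```python
# Each program/verify cycle: one Vpgm pulse on the selected word line,
# followed by the 2**N - 1 verify voltages in increasing order (LV1..).
def program_verify_schedule(n_bits, vpgm_start, vpgm_step, num_cycles):
    num_verify = 2 ** n_bits - 1  # one verify voltage per program level
    schedule = []
    vpgm = vpgm_start
    for _ in range(num_cycles):
        schedule.append(("Vpgm", round(vpgm, 2)))
        for lv in range(1, num_verify + 1):
            schedule.append(("Vvf", lv))
        vpgm += vpgm_step  # incremental change across cycles
    return schedule

s = program_verify_schedule(3, vpgm_start=16.0, vpgm_step=0.5, num_cycles=2)
assert len(s) == 2 * (1 + 7)  # TLC: 1 Vpgm pulse + 7 verifies per cycle
```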

A multi-cache data loading scheme implemented based on the memory device disclosed herein (e.g., memory device 300 including page buffer circuit 702) is described in detail below. For example, fig. 11 shows a flow diagram of a method 1100 for operating a memory device, in accordance with some aspects of the present disclosure. The memory device may be any suitable memory device disclosed herein, for example, memory device 300. Method 1100 may be implemented by peripheral circuitry 302 (e.g., row decoder/word line drivers 508 and page buffer/sense amplifiers 504). It should be understood that the operations shown in method 1100 are not exhaustive, and that other operations may be performed before, after, or in between any of the operations shown. Further, some operations may be performed concurrently, or in a different order than shown in FIG. 11.

Referring to FIG. 11, the method 1100 begins at operation 1102, where N bits of a current page of data are received while programming memory cells of a selected row in operation 1102. For example, as shown in fig. 7, the control logic 512 may send control signals to the cache memory cells 708 of each page buffer circuit 702 to control the cache memory cells 708 to receive the data pages each having N-bit data in sequence in sequential programming operations. In a current programming operation, i.e., when programming the memory cells 306 of the selected row coupled to the selected word line 318 based on the current page of data, the cache storage units 708 may be configured to receive, in order, the N bits of the current page of data and the N bits of the next page of data immediately following the current page of data.

The method 1100 proceeds to operation 1104, as shown in FIG. 11, in which one of the N bits of the current data page is stored in one cache storage unit and a corresponding one of the N bits of the current data page is stored in each of the N-1 data storage units. For example, as shown in FIG. 7, the control logic 512 may send control signals to the single cache storage unit 708 and the set of N-1 data storage units 706 of each page buffer circuit 702 to control the single cache storage unit 708 and the N-1 data storage units 706 to store the N bits of the current data page, respectively. In other words, the cache storage unit 708 may first also act as a data storage unit, so that a total of N storage units may store the N bits of the current data page, respectively. In one example, the cache storage unit 708 may be configured to store one of the N bits of the current data page before verifying at the Nth-to-last of the 2^N levels, and the at least one data storage unit 706 may be configured to store the corresponding one of the N bits of the current data page before verifying at the last N-1 of the 2^N levels.

The method 1100 proceeds to operation 1106, as shown in fig. 11, in which the non-data page information is stored in one multi-purpose storage unit. The non-data page information may include the voltage level applied to the corresponding bit line. For example, as shown in fig. 7, the control logic 512 may send a control signal to the single 3BL storage unit 710 to control the single 3BL storage unit 710 to store bit line bias information, e.g., one of the three voltage levels applied to the corresponding bit line 316. In one example, the 3BL storage unit 710 may be configured to store the non-data page information before verifying at the last of the 2^N levels.

The method 1100 proceeds to operation 1108, as shown in FIG. 11, in operation 1108 memory cells of the selected row are programmed based on the current page of data. For example, as shown in fig. 5 and 9B, the control logic 512 may send control signals to the row decoder/word line driver 508 to apply a program voltage (Vpgm) to the selected word line 318 coupled to the memory cell 306 of the selected row.

The method 1100 proceeds to operation 1110. As shown in FIG. 11, in operation 1110, the selected row is sequentially verified up to the Nth-to-last of the 2^N levels. For example, as shown in fig. 5 and 9B, control logic 512 may send control signals to the row decoder/word line driver 508 to sequentially apply the 2^N - 1 verify voltages (Vvf) on the selected word line 318. The 2^N - 1 verify voltages may correspond to the 2^N - 1 levels of the 2^N levels. For example, for TLC (where N = 3), the 7 verify voltages may correspond respectively to 7 segments of 3-bit data, each segment corresponding to a respective one of the 7 program levels (LV1 to LV7) of the 8 levels.

Still taking TLC (where N = 3) as an example, as shown in fig. 10, before verifying at the 3rd-to-last of the 8 levels (i.e., the 6th level (LV5)), the cache storage unit (DC) may store one bit (the current UP) of the 3 bits of the current data page, the 1st data storage unit (D1) may store the corresponding bit (the current LP) of the current data page, and the 2nd data storage unit (D2) may store the corresponding bit (the current MP) of the current data page. Verification at each of the 1st level (LV0) through the 5th level (LV4) may be based on the 3-bit data stored in DC, D1, and D2, following the gray code shown in table 1, e.g., 111 for LV0 and 001 for LV4, where b1, b2, and b3 may correspond to LP, MP, and UP, respectively. The other storage units may store non-data page information. For example, the 3BL storage unit (DL) may store the voltage level (3BL bias) applied to the corresponding bit line, and the sense/program storage unit (DS) may store program or read operation information (e.g., indicating that the current operation is a program operation).

The method 1100 proceeds to operation 1112, as shown in FIG. 11, in which the N bits of a next page of data are received. For example, during the current program operation, the page buffer circuit 702 may begin caching the next page of data for the next program operation by sequentially receiving the N bits of the next page of data at the cache storage unit 708. The method 1100 proceeds to operation 1114, as shown in FIG. 11, in which each of the N bits of the next page of data is stored in the cache storage unit in turn after verification at a corresponding one of the last N of the 2^N levels. In some embodiments, a respective one of the N bits of the current page of data is stored in one of the data storage units before verification at the (N-1)th-to-last of the 2^N levels. In some embodiments, a respective one of the N bits of the next page of data is stored in the data storage unit after verification at the (N-1)th-to-last of the 2^N levels.

For example, as shown in FIG. 10, DC may store each of the 3 bits of the next page of data (next LP, next MP, and next UP) in turn after verification at a corresponding one of the last 3 levels (LV5, LV6, and LV7). D1 may store the corresponding bit of the current page of data (current LP) before verification at the 2nd-to-last level (i.e., the 7th level (LV6)), and then store the corresponding bit of the next page of data (next LP) after verification at LV6. D2 may store the corresponding bit of the current page of data (current MP) throughout verification at each level.

After verification at the 3rd-to-last of the 8 levels (i.e., after the 6th level (LV5) has been verified), the binary codes of Table 1 may be updated as shown in Table 2 below, where all data bits in LV0 through LV5 may be updated to 1 because they are no longer needed in the current program operation. As shown in Table 2, since b3 is 1 in both of the last two levels LV6 and LV7, DC for storing the bit of b3 (current UP) may no longer be needed, and thus DC may be reused for caching a data bit of the next data page. For example, as shown in FIG. 10, after verification at LV5, DC can be released to replace the current UP with the first bit of the next page of data (next LP).

TABLE 2

LV 0 1 2 3 4 5 6 7
b1 1 1 1 1 1 1 0 1
b2 1 1 1 1 1 1 1 0
b3 1 1 1 1 1 1 1 1
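A minimal sketch of this code update, using the LV6 and LV7 codes shown in Table 2 (the remaining levels are the all-1 placeholders the update produces):

```python
# Hedged sketch: after LV5 is verified, the codes for LV0..LV5 are set to
# all-1 (Table 2). Since b3 is then 1 for every remaining level, the latch
# holding b3 (DC) can be released for the next page of data.
codes = {lv: [1, 1, 1] for lv in range(6)}  # LV0..LV5: done, all 1s
codes[6] = [0, 1, 1]                        # LV6: b1 b2 b3 (Table 2)
codes[7] = [1, 0, 1]                        # LV7: b1 b2 b3 (Table 2)

b3_still_needed = any(codes[lv][2] == 0 for lv in (6, 7))
print(b3_still_needed)  # False: DC is free to cache the next page's bits
```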

After verification at the 2nd-to-last of the 8 levels (i.e., after the 7th level (LV6) has been verified), the binary codes of Table 2 may be updated as shown in Table 3 below, where all data bits in LV6 may be updated to 1 because they are no longer needed in the current program operation. As shown in Table 3, since only b2 is 0 in the last level LV7, D1 for storing the bit of b1 (current LP) may no longer be needed, and thus D1 may be reused for caching a data bit of the next data page. For example, as shown in FIG. 10, after verification at LV6, D1 may be released to replace the current LP with the first bit of the next page of data (next LP), so that DC may be released again to cache the second bit of the next page of data (next MP). That is, after verification at LV6, the next LP may be passed from DC to D1, and the next MP may be cached in DC.

TABLE 3

LV 0 1 2 3 4 5 6 7
b1 1 1 1 1 1 1 1 1
b2 1 1 1 1 1 1 1 0
b3 1 1 1 1 1 1 1 1

The method 1100 proceeds to operation 1116, as shown in FIG. 11, in which one of the N bits of the next page of data is stored in the multi-purpose storage unit after verification at the last of the 2^N levels. For example, as shown in FIG. 10, DL may store the 3BL bias before verification at the last level (i.e., the 8th level (LV7)), and then store one bit of the next page of data (next MP) after verification at LV7. In some implementations, the 3BL bias may no longer be needed after verification at the last level, for example, because the read margin and distribution width of the last level may be less critical than those of the other levels. Thus, DL can be released to replace the 3BL bias with the second bit of the next data page (next MP), so that DC can be released again to cache the third bit of the next data page (next UP). That is, after verification at LV7, the next MP may be passed from DC to DL, and the next UP may be cached in DC.
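The cascaded latch releases of FIG. 10, as described above, can be simulated in a short sketch (latch names follow the text; the hand-off order is the point, not the implementation):

```python
# Illustrative simulation of latch reuse during verification of the last
# three TLC levels, per the description above.
latches = {"DC": "current UP", "D1": "current LP",
           "D2": "current MP", "DL": "3BL bias"}

def after_lv5(l):
    l["DC"] = "next LP"     # b3 is 1 in LV6/LV7, so DC is freed

def after_lv6(l):
    l["D1"] = l["DC"]       # next LP passes from DC to D1
    l["DC"] = "next MP"     # DC caches the next MP

def after_lv7(l):
    l["DL"] = l["DC"]       # next MP passes from DC to DL (3BL bias dropped)
    l["DC"] = "next UP"     # DC caches the next UP

for step in (after_lv5, after_lv6, after_lv7):
    step(latches)
print(latches["DC"], latches["D1"], latches["DL"])  # next UP next LP next MP
```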

The method 1100 proceeds to operation 1118, as shown in FIG. 11, in which a next selected one of the rows of memory cells is programmed based on the next page of data. As shown in FIG. 10, since all 3 bits of the next page of data (next LP, next MP, and next UP) can be cached after verification at LV7, the next page of data can become ready during the current program operation. Thus, at the end of the current program operation, the next program operation based on the next page of data can be seamlessly triggered without a data loading window. For example, during the transition, the next MP may be passed from DL to D2, so that the 3 bits of the next page of data (next LP, next MP, and next UP) may be stored in D1, D2, and DC, respectively, for the next program operation, and DL may be released again to store the 3BL bias for the next program operation.
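The transition into the next program operation can be sketched the same way (latch names from the text, values illustrative only):

```python
# Illustrative sketch of the hand-off at the end of the current program
# operation: the next MP passes from DL to D2, and DL is re-armed with the
# 3BL bias for the next program operation.
latches = {"DC": "next UP", "D1": "next LP",
           "D2": "current MP", "DL": "next MP"}
latches["D2"] = latches["DL"]   # next MP: DL -> D2
latches["DL"] = "3BL bias"      # DL released for the next operation
print(latches)
```

At this point D1, D2, and DC hold the next LP, MP, and UP, matching the initial latch assignment of the current operation, so programming can restart immediately.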

According to one aspect of the present disclosure, a memory device includes an array of memory cells in a plurality of columns and a plurality of rows, a plurality of word lines respectively coupled to the rows of memory cells, a plurality of bit lines respectively coupled to the columns of memory cells, and peripheral circuitry coupled to the array of memory cells through the bit lines and word lines and configured to program a selected one of the rows of memory cells based on a current page of data. Each memory cell is configured to store a segment of N bits of data at one of 2^N levels, where N is an integer greater than 1. The peripheral circuitry includes a plurality of page buffer circuits respectively coupled to the bit lines. Each page buffer circuit includes one cache storage unit, one multi-purpose storage unit, and N-1 data storage units. The cache storage unit is configured to sequentially receive the N bits of the current page of data and the N bits of a next page of data, and to sequentially store one of the N bits of the current page of data and each of the N bits of the next page of data when programming the selected row based on the current page of data. The multi-purpose storage unit is configured to sequentially store non-data page information and one of the N bits of the next page of data when programming the selected row based on the current page of data. The data storage units are each configured to store a respective one of the N bits of the current page of data when programming the selected row based on the current page of data.

In some implementations, the non-data page information includes a voltage level applied to a corresponding bit line.

In some embodiments, to program the selected row based on the current page of data, the peripheral circuitry is configured to sequentially verify the selected row at 2^N - 1 of the 2^N levels.

In some embodiments, the cache storage unit is configured to store the one of the N bits of the current page of data before verification at the Nth-to-last of the 2^N levels, and to sequentially store each of the N bits of the next page of data after verification at a corresponding one of the last N of the 2^N levels.

In some embodiments, the multi-purpose storage unit is configured to store the non-data page information before verification at the last of the 2^N levels, and to store the one of the N bits of the next page of data after verification at the last of the 2^N levels.

In some embodiments, at least one of the data storage units is configured to sequentially store a respective one of the N bits of the current page of data and a respective one of the N bits of the next page of data.

In some embodiments, one of the at least one of the data storage units is configured to store the respective one of the N bits of the current page of data before verification at the (N-1)th-to-last of the 2^N levels, and to store the respective one of the N bits of the next page of data after verification at the (N-1)th-to-last of the 2^N levels.

In some embodiments, the peripheral circuitry further includes a word line driver coupled to the word lines and configured to apply a program voltage on a selected one of the word lines coupled to the selected row, and to sequentially apply 2^N - 1 verify voltages on the selected word line, the 2^N - 1 verify voltages respectively corresponding to 2^N - 1 of the 2^N levels.

In some embodiments, each of the cache storage unit, the multi-purpose storage unit, and the data storage units includes a latch.

In some embodiments, the peripheral circuitry is further configured to program a next selected one of the rows of memory cells based on the next page of data after programming the selected row based on the current page of data.

In some embodiments, the page buffer circuit includes one cache latch, two data latches, one 3-bit-line latch, and one sense/program latch.

In some embodiments, the one cache storage unit includes the one cache latch, the N-1 data storage units include the two data latches, and the multi-purpose storage unit includes the one 3-bit-line latch.

In accordance with another aspect of the present disclosure, a system includes a memory device configured to store data and a memory controller coupled to the memory device and configured to control the memory device. The memory device includes an array of memory cells in a plurality of columns and a plurality of rows, a plurality of word lines respectively coupled to the rows of memory cells, a plurality of bit lines respectively coupled to the columns of memory cells, and peripheral circuitry coupled to the array of memory cells through the bit lines and word lines and configured to program a selected one of the rows of memory cells based on a current page of data. Each memory cell is configured to store a segment of N bits of data at one of 2^N levels, where N is an integer greater than 1. The peripheral circuitry includes a plurality of page buffer circuits respectively coupled to the bit lines. Each page buffer circuit includes one cache storage unit, one multi-purpose storage unit, and N-1 data storage units. The cache storage unit is configured to sequentially receive the N bits of the current page of data and the N bits of a next page of data, and to sequentially store one of the N bits of the current page of data and each of the N bits of the next page of data when programming the selected row based on the current page of data. The multi-purpose storage unit is configured to sequentially store non-data page information and one of the N bits of the next page of data when programming the selected row based on the current page of data. The data storage units are each configured to store a respective one of the N bits of the current page of data when programming the selected row based on the current page of data.

In some implementations, the non-data page information includes a voltage level applied to a corresponding bit line.

In some embodiments, to program the selected row based on the current page of data, the peripheral circuitry is configured to sequentially verify the selected row at 2^N - 1 of the 2^N levels.

In some embodiments, the cache storage unit is configured to store the one of the N bits of the current page of data before verification at the Nth-to-last of the 2^N levels, and to sequentially store each of the N bits of the next page of data after verification at a corresponding one of the last N of the 2^N levels.

In some embodiments, the multi-purpose storage unit is configured to store the non-data page information before verification at the last of the 2^N levels, and to store the one of the N bits of the next page of data after verification at the last of the 2^N levels.

In some embodiments, at least one of the data storage units is configured to sequentially store a respective one of the N bits of the current page of data and a respective one of the N bits of the next page of data.

In some embodiments, one of the at least one of the data storage units is configured to store the respective one of the N bits of the current page of data before verification at the (N-1)th-to-last of the 2^N levels, and to store the respective one of the N bits of the next page of data after verification at the (N-1)th-to-last of the 2^N levels.

In some embodiments, the peripheral circuitry further includes a word line driver coupled to the word lines and configured to apply a program voltage on a selected one of the word lines coupled to the selected row, and to sequentially apply 2^N - 1 verify voltages on the selected word line, the 2^N - 1 verify voltages respectively corresponding to 2^N - 1 of the 2^N levels.

In some embodiments, each of the cache storage unit, the multi-purpose storage unit, and the data storage units includes a latch.

In some embodiments, the peripheral circuitry is further configured to program a next selected one of the rows of memory cells based on the next page of data after programming the selected row based on the current page of data.

According to yet another aspect of the present disclosure, a method for operating a memory device is provided. The memory device includes a plurality of rows of memory cells. N bits of a current page of data are received. One of the N bits of the current page of data is stored in one cache storage unit, and a respective one of the N bits of the current page of data is stored in each of N-1 data storage units. Non-data page information is stored in a multi-purpose storage unit. A selected one of the rows of memory cells is programmed based on the current page of data. The selected row is sequentially verified up to the Nth-to-last of the 2^N levels. N bits of a next page of data are received. Each of the N bits of the next page of data is stored in the cache storage unit in turn after verification at a corresponding one of the last N of the 2^N levels. One of the N bits of the next page of data is stored in the multi-purpose storage unit after verification at the last of the 2^N levels.

In some embodiments, a respective one of the N bits of the current page of data is stored in one of the data storage units before verification at the (N-1)th-to-last of the 2^N levels. In some embodiments, a respective one of the N bits of the next page of data is stored in the data storage unit after verification at the (N-1)th-to-last of the 2^N levels.

In some implementations, a next selected one of the rows of memory cells is programmed based on the next page of data.

In some implementations, the non-data page information includes a voltage level applied to a corresponding bit line.

In some embodiments, each of the cache storage unit, the multi-purpose storage unit, and the data storage units includes a latch.

The foregoing description of the specific embodiments may be readily modified and/or adapted for various applications. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein.

The breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
