Page buffer circuit in three-dimensional memory device

Document No.: 1866360    Publication date: 2021-11-19

Reading note: This technology, "Page buffer circuit in three-dimensional memory device," was designed and created by Chen Teng, Wang Yan, and Masao Kuriyama on 2021-06-29. Its main content is summarized as follows: The present disclosure provides a page buffer circuit of a 3D NAND device. In some embodiments, the page buffer circuit includes: a first bitline segment sensing branch connected to the first bitline segment of the bitline, and a second bitline segment sensing branch connected to the second bitline segment of the bitline. The first bitline segment sensing branch and the second bitline segment sensing branch are connected in parallel to a sensing node of the page buffer circuit. In some embodiments, the first bitline segment sensing branch includes a first sensing latch and a first bitline precharge path, and the second bitline segment sensing branch includes a second sensing latch and a second bitline precharge path.

1. A page buffer circuit of a memory device, comprising:

a first bitline segment sensing branch connected to a first bitline segment of a bitline; and

a second bitline segment sensing branch connected to a second bitline segment of the bitline;

wherein the first bitline segment sensing branch and the second bitline segment sensing branch are connected in parallel to a sensing node of the page buffer circuit.

2. The page buffer circuit according to claim 1, wherein:

the first bitline segment sensing branch comprises a first sensing latch and a first bitline precharge path; and

the second bitline segment sensing branch comprises a second sensing latch and a second bitline precharge path.

3. The page buffer circuit according to claim 1, wherein:

the first bitline segment sensing branch is connected to the sensing node through a first switch; and

the second bitline segment sensing branch is connected to the sensing node through a second switch.

4. The page buffer circuit of claim 1, wherein the first bitline segment is aligned with the second bitline segment along a bitline direction.

5. The page buffer circuit of claim 1, wherein the first and second bitline segments are separately connected with the same memory cell string.

6. The page buffer circuit of claim 5, wherein the memory device is a three-dimensional NAND memory device and the memory cell string is a vertically stacked memory cell string.

7. The page buffer circuit of claim 1, further comprising a low voltage latch and a cache latch.

8. The page buffer circuit of claim 7, wherein the first and second bitline segment sense branches are commonly connected to the low voltage latch and the cache latch.

9. The page buffer circuit according to claim 1, further comprising:

a third bitline segment sensing branch connected to a third bitline segment of the bitline;

wherein the first, second, and third bitline segment sensing branches are connected in parallel to the sense node of the page buffer circuit.

10. The page buffer circuit of claim 9, wherein the third bitline segment sensing branch comprises a third sense latch and a third bitline precharge path.

11. The page buffer circuit of claim 9, wherein the third bitline segment sensing branch is connected to the sensing node through a third switch.

12. The page buffer circuit of claim 9, wherein the first, second, and third bitline segments are aligned with one another along a bitline direction.

13. The page buffer circuit of claim 9, wherein the first, second, and third bitline segments are separately connected with the same memory cell string.

14. The page buffer circuit of claim 9, wherein the first, second, and third bitline segment sense branches are commonly connected to a low voltage latch and a cache latch.

15. A memory device, comprising:

a plurality of bit lines extending in parallel along a bit line direction, each bit line including at least two bit line segments; and

a plurality of page buffers, each page buffer corresponding to one of the plurality of bit lines;

wherein the at least two bit line segments of each bit line are commonly connected to the same corresponding page buffer.

16. The memory device according to claim 15, wherein each page buffer comprises:

a first bitline segment sensing branch connected to the first bitline segment; and

a second bitline segment sensing branch connected to the second bitline segment;

wherein the first bitline segment sensing branch and the second bitline segment sensing branch are connected in parallel to a sensing node of the page buffer circuit.

17. The memory device of claim 16, wherein:

the first bitline segment sensing branch comprises a first sensing latch and a first bitline precharge path; and

the second bitline segment sensing branch comprises a second sensing latch and a second bitline precharge path.

18. The memory device of claim 16, wherein:

the first bitline segment sensing branch is connected to the sensing node through a first switch; and

the second bitline segment sensing branch is connected to the sensing node through a second switch.

19. The memory device of claim 16, further comprising:

a plurality of memory cell strings; wherein the first and second bit line segments are separately connected with the same memory cell string.

20. The memory device of claim 19, wherein the memory device is a three-dimensional NAND memory device and the plurality of memory cell strings are vertically stacked memory cell strings.

21. The memory device of claim 15, wherein each page buffer further comprises a low voltage latch and a cache latch.

22. The memory device of claim 21 wherein the first bitline segment sensing branch and the second bitline segment sensing branch are commonly connected to the low voltage latch and the cache latch.

23. The memory device according to claim 16, wherein each page buffer further comprises:

a third bitline segment sensing branch connected to the third bitline segment;

wherein the first, second, and third bitline segment sensing branches are connected in parallel to the sense node of the page buffer circuit.

24. The memory device of claim 23, wherein the third bitline segment sensing branch comprises a third sense latch and a third bitline precharge path.

25. The memory device of claim 23 wherein the third bitline segment sensing branch is connected to the sensing node through a third switch.

26. The memory device of claim 23, wherein the first, second, and third bitline segments are separately connected with the same string of memory cells.

27. The memory device of claim 23 wherein the first, second, and third bitline segment sense branches are commonly connected to a low voltage latch and a cache latch.

28. A method of performing a read operation by a memory device, comprising:

simultaneously performing a precharge operation, a setup operation, and a sensing operation on at least two bit line segments aligned with each other along a bit line direction through at least two bit line segment sensing branches in a page buffer circuit;

wherein the at least two bit line segments are respectively connected to the at least two bit line segment sensing branches in the same page buffer circuit.

29. A memory system, comprising:

a memory device, comprising:

a plurality of bit lines extending in parallel along a bit line direction, each bit line including at least two bit line segments; and

a plurality of page buffers, each page buffer corresponding to one of the plurality of bit lines;

wherein the at least two bit line segments of each bit line are commonly connected to the same corresponding page buffer; and

a memory controller configured to simultaneously perform a precharge operation, a setup operation, and a sensing operation on at least two bit line segments of a corresponding bit line through at least two bit line segment sensing branches in one page buffer circuit.

Technical Field

The present disclosure relates generally to the field of semiconductor technology and, more particularly, to page buffer circuits in three-dimensional (3D) memories.

Background

As memory devices shrink to smaller die sizes to reduce manufacturing costs and increase memory density, the shrinking of planar memory cells presents challenges due to process technology limitations and reliability issues. Three-dimensional (3D) memory architectures can address density and performance limitations in planar memory cells. In a 3D NAND memory, one chip may include a plurality of dies that can independently perform NAND operations (e.g., read, write, and erase). Each die may include a plurality of memory planes, and each memory plane may include a plurality of blocks, each block including a plurality of memory cells stacked vertically to increase storage capacity per unit area, wherein the memory cells may be addressed from a shared word line. A page buffer circuit may be arranged for each bit line to perform a sensing operation and a data transfer operation.

Disclosure of Invention

Embodiments of three-dimensional (3D) memory devices are described in this disclosure.

One aspect of the present disclosure provides a page buffer circuit of a memory device, including: a first bitline segment sensing branch connected to a first bitline segment of a bitline; and a second bitline segment sensing branch connected to a second bitline segment of the bitline; wherein the first bitline segment sensing branch and the second bitline segment sensing branch are connected in parallel to a sensing node of the page buffer circuit.

In some embodiments, the first bitline segment sensing branch comprises a first sensing latch and a first bitline precharge path; and the second bitline segment sensing branch includes a second sensing latch and a second bitline precharge path.

In some embodiments, the first bitline segment sensing branch is connected to the sensing node through a first switch; and the second bitline segment sensing branch is connected to the sensing node through a second switch.

In some embodiments, the first bitline segment is aligned with the second bitline segment along the bitline direction.

In some embodiments, the first bitline segment and the second bitline segment are separately connected with the same memory cell string.

In some embodiments, the memory device is a three-dimensional NAND memory device and the memory cell strings are vertically stacked memory cell strings.

In some embodiments, the page buffer circuit further includes a low voltage latch and a cache latch.

In some embodiments, the first bitline segment sensing branch and the second bitline segment sensing branch are commonly connected to a low voltage latch and a cache latch.

In some embodiments, the page buffer circuit further includes: a third bitline segment sensing branch connected to a third bitline segment of the bitline; wherein the first, second, and third bitline segment sensing branches are connected in parallel to a sensing node of the page buffer circuit.

In some embodiments, the third bitline segment sensing branch includes a third sensing latch and a third bitline precharge path.

In some embodiments, the third bitline segment sensing branch is connected to the sensing node through a third switch.

In some embodiments, the first, second, and third bitline segments are aligned with one another along the bitline direction.

In some embodiments, the first bitline segment, the second bitline segment, and the third bitline segment are separately connected with the same memory cell string.

In some embodiments, the first, second, and third bitline segment sensing branches are commonly connected to a low voltage latch and a cache latch.

Another aspect of the present disclosure provides a memory device including: a plurality of bit lines extending in parallel along a bit line direction, each bit line including at least two bit line segments; and a plurality of page buffers, each page buffer corresponding to one of the plurality of bit lines; wherein at least two bit line segments of each bit line are commonly connected to the same corresponding page buffer.

In some embodiments, each page buffer includes: a first bitline segment sensing branch connected to the first bitline segment; and a second bitline segment sensing branch connected to the second bitline segment; wherein the first bitline segment sensing branch and the second bitline segment sensing branch are connected in parallel to a sensing node of the page buffer circuit.

Another aspect of the present disclosure provides a method of performing a read operation by a memory device, comprising: simultaneously performing a precharge operation, a setup operation (also referred to as a develop operation), and a sensing operation on at least two bit line segments aligned with each other along a bit line direction through at least two bit line segment sensing branches in a page buffer circuit; wherein the at least two bit line segments are respectively connected to the at least two bit line segment sensing branches in the same page buffer circuit.

Another aspect of the present disclosure provides a memory system including: a memory device, comprising: a plurality of bit lines extending in parallel along a bit line direction, each bit line including at least two bit line segments, and a plurality of page buffers, each page buffer corresponding to one of the plurality of bit lines, wherein the at least two bit line segments of each bit line are commonly connected to the same corresponding page buffer; and a memory controller configured to simultaneously perform a precharge operation, a setup operation, and a sensing operation on the at least two bit line segments of a corresponding bit line through at least two bit line segment sensing branches in one page buffer circuit.

Other aspects of the disclosure will become apparent to those skilled in the art from the description, claims and drawings of the disclosure.

Drawings

The accompanying drawings, which are incorporated herein and form a part of the specification, illustrate embodiments of the present disclosure and, together with the description, further serve to explain the principles of the disclosure and to enable a person skilled in the pertinent art to make and use the disclosure.

FIG. 1A illustrates a block diagram of an exemplary system having a memory device, in accordance with some embodiments.

FIG. 1B illustrates a diagram of an exemplary memory card with a memory device, according to some embodiments.

Fig. 1C illustrates a diagram of an exemplary Solid State Drive (SSD) with memory, according to some embodiments.

FIG. 2 illustrates a schematic block diagram of an example hardware module configuration of a memory system in accordance with some embodiments.

Fig. 3 illustrates a schematic circuit diagram of an example memory device including peripheral circuitry in accordance with some aspects of the present disclosure.

FIG. 4A illustrates a perspective view of a portion of an exemplary three-dimensional (3D) memory array structure, in accordance with some embodiments.

FIG. 4B illustrates a schematic diagram of an example 3D memory device in plan view, according to some embodiments.

FIG. 5 illustrates a schematic diagram of an example memory block and corresponding page buffer of a 3D NAND device, according to some embodiments.

Fig. 6A illustrates a schematic block diagram of an example page buffer of a 3D NAND device, in accordance with some embodiments.

FIGS. 6B and 6C illustrate schematic logic circuit diagrams of example page buffers of a 3D NAND device according to some embodiments.

FIG. 7 illustrates a schematic block diagram of an example page buffer operational timing sequence for a read operation, in accordance with some embodiments.

The features and advantages of the present invention will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the corresponding reference number.

Embodiments of the present disclosure will be described with reference to the accompanying drawings.

Detailed Description

While specific configurations and arrangements are discussed, it should be understood that this is done for illustrative purposes only. A person skilled in the relevant art will recognize that other configurations and arrangements can be used without departing from the spirit and scope of the disclosure. It will be apparent to those skilled in the relevant art that the present disclosure may also be used in a variety of other applications.

It should be noted that references in the specification to "one embodiment," "an example embodiment," "some embodiments," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the relevant art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.

Generally, terms may be understood at least in part from their usage in context. For example, the term "one or more" as used herein may be used to describe any feature, structure, or characteristic in the singular or may be used to describe a feature, structure, or combination of features in the plural, depending, at least in part, on the context. Similarly, terms such as "a," "an," or "the" may also be understood to convey singular or plural usage, depending, at least in part, on the context. Additionally, the term "based on" may be understood as not necessarily intended to convey an exclusive set of factors, but may allow for the presence of other factors not necessarily expressly described, again depending at least in part on the context.

It should be readily understood that the meaning of "on," "above," and "over" in this disclosure should be interpreted in the broadest manner, such that "on" not only means "directly on" something, but also includes the meaning of "on" something with intervening features or layers therebetween. Further, "above" or "over" not only means "above" or "over" something, but can also include the meaning of "above" or "over" something with no intervening features or layers therebetween (i.e., directly on something).

Further, spatially relative terms, such as "beneath," "below," "lower," "above," "upper," and the like, may be used herein for ease of description to describe one element or feature's relationship to another element or feature as illustrated in the figures. Spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein may be interpreted accordingly.

As used herein, the term "substrate" refers to a material on which a subsequent layer of material is added. The substrate includes a "top" surface and a "bottom" surface. The top surface of the substrate is typically where the semiconductor devices are formed, and thus, unless otherwise specified, the semiconductor devices are formed on the top side of the substrate. The bottom surface is opposite the top surface, and thus the bottom side of the substrate is opposite the top side of the substrate. The substrate itself may be patterned. The material added on top of the substrate may be patterned or may remain unpatterned. In addition, the substrate may include a variety of semiconductor materials, such as silicon, germanium, gallium arsenide, indium phosphide, and the like. Alternatively, the substrate may be made of a non-conductive material, such as glass, plastic, or sapphire wafers.

As used herein, the term "layer" refers to a portion of material that includes a region having a thickness. The layer has a top side and a bottom side, wherein the bottom side of the layer is relatively close to the substrate and the top side is relatively far from the substrate. The layer may extend over the entire underlying or overlying structure, or may have an extent less than the extent of the underlying or overlying structure. Furthermore, the layer may be a region of uniform or non-uniform continuous structure having a thickness less than the thickness of the continuous structure. For example, a layer may be located between any set of horizontal planes at the top and bottom surfaces or between the top and bottom surfaces of the continuous structure. The layers may extend horizontally, vertically and/or along a tapered surface. The substrate may be a layer, which may include one or more layers therein, and/or may have one or more layers thereon, above, and/or below. The layer may comprise a plurality of layers. For example, the interconnect layer may include one or more conductive and contact layers in which contacts, interconnect lines, and/or Vertical Interconnect Access (VIA) layers are formed, and one or more dielectric layers.

In the present disclosure, for convenience of description, the term "tier" is used to refer to elements of substantially the same height along the vertical direction. For example, a word line and the underlying gate dielectric layer can be referred to as "a tier," a word line and the underlying insulating layer can together be referred to as "a tier," word lines of substantially the same height can be referred to as "a tier of word lines," or the like, and so on.

As used herein, the term "nominal" refers to a desired or target value, and a range of values above and/or below the desired value, of a characteristic or parameter of a component or process step that is set during a design phase of a product or process. The range of values may be due to slight variations in manufacturing processes or tolerances. As used herein, the term "about" means that the value of a given quantity can vary based on the particular technology node associated with the subject semiconductor device. The term "about" can mean that a given amount of a value varies, for example, within 10-30% of the value (e.g., ± 10%, ± 20% or ± 30% of the value), based on the particular technology node.

In the present disclosure, the term "horizontal/horizontally/laterally" means nominally parallel to the lateral surface of a substrate, and the term "vertical" or "vertically" means nominally perpendicular to the lateral surface of a substrate.

As used herein, the term "3D memory" refers to a three-dimensional (3D) semiconductor device having vertically oriented strings of memory cell transistors (referred to herein as "memory strings," e.g., NAND strings) on a laterally oriented substrate such that the memory strings extend in a vertical direction relative to the substrate.

Fig. 1A illustrates a block diagram of an example system 100 having a memory device in accordance with some aspects of the present disclosure. System 100 may be a mobile phone, desktop computer, laptop computer, tablet computer, vehicle computer, game console, printer, positioning device, wearable electronic device, smart sensor, Virtual Reality (VR) device, Augmented Reality (AR) device, or any other suitable electronic device having a storage device therein. As shown in fig. 1A, the system 100 may include a host 108 and a memory system 102 having one or more memory devices 104 and a memory controller 106. Host 108 may be a processor of an electronic device, such as a Central Processing Unit (CPU), or a system on a chip (SoC), such as an Application Processor (AP). Host 108 may be configured to send data to memory device 104 or receive data from memory device 104.

Memory device 104 may be any memory device disclosed herein, such as a NAND flash memory device. Consistent with the scope of the present disclosure, memory controller 106 may control multi-pass programming of memory device 104 such that NGS operations are enabled for all memory cells (even those that pass a respective verify operation) in a non-final programming pass of the multi-pass programming. Peripheral circuits (e.g., word line drivers) may apply a low voltage (e.g., a Ground (GND) voltage) to the DSG of each memory string coupled to the selected word line, and may apply a low or negative voltage to the selected word line to enable NGS operations on all memory cells coupled to the selected word line during a non-final programming pass.

According to some embodiments, the memory controller 106 is coupled to the memory devices 104 and the host 108, and is configured to control the memory devices 104. Memory controller 106 may manage data stored in memory devices 104 and communicate with host 108. In some implementations, the memory controller 106 is designed to operate in a low duty cycle environment, such as a Secure Digital (SD) card, Compact Flash (CF) card, Universal Serial Bus (USB) flash drive, or other medium for use in electronic devices such as personal computers, digital cameras, mobile phones, and the like. In some implementations, the memory controller 106 is designed to operate in a high duty cycle environment, such as an SSD or an embedded multimedia card (eMMC), that serves as a data storage device for mobile devices, such as smart phones, tablets, laptops, etc., as well as enterprise storage arrays. The memory controller 106 may be configured to control operations of the memory device 104, such as read, erase, and program operations. The memory controller 106 may also be configured to manage various functions for data stored or to be stored in the memory devices 104, including (but not limited to) bad block management, garbage collection, logical to physical address translation, wear leveling, and the like. In some implementations, the memory controller 106 is also configured to process Error Correction Codes (ECC) for data read from the memory device 104 or written to the memory device 104. Any other suitable function may also be performed by the memory controller 106, such as programming the memory device 104. The memory controller 106 may communicate with external devices (e.g., the host 108) according to a particular communication protocol. For example, the memory controller 106 may communicate with the external device through at least one of various interface protocols, such as a USB protocol, an MMC protocol, a Peripheral Component Interconnect (PCI) protocol, a PCI-Express (PCI-E) protocol, an Advanced Technology Attachment (ATA) protocol, a serial-ATA protocol, a parallel-ATA protocol, a Small Computer System Interface (SCSI) protocol, an Enhanced Small Disk Interface (ESDI) protocol, an Integrated Drive Electronics (IDE) protocol, a Firewire protocol, and so forth.

The memory controller 106 and the one or more memory devices 104 may be integrated into various types of storage devices, for example, included in the same package (e.g., a universal flash storage (UFS) package or an eMMC package). That is, the memory system 102 may be implemented and packaged into different types of end electronic products. In one example as shown in FIG. 1B, the memory controller 106 and a single memory device 104 may be integrated into the memory card 112. The memory card 112 may include a PC card (PCMCIA), a CF card, a Smart Media (SM) card, a memory stick, a multimedia card (MMC, RS-MMC, MMCmicro), an SD card (SD, miniSD, microSD, SDHC), UFS, and the like. The memory card 112 may further include a memory card connector 114 that couples the memory card 112 with a host (e.g., host 108 in FIG. 1A). In another example as shown in FIG. 1C, the memory controller 106 and the plurality of memory devices 104 may be integrated into the SSD 116. SSD 116 can also include an SSD connector 118 that couples SSD 116 with a host (e.g., host 108 in FIG. 1A). In some implementations, the storage capacity and/or operating speed of the SSD 116 is greater than the storage capacity and/or operating speed of the memory card 112.

Fig. 2 shows a diagram of an exemplary memory device 104 (e.g., a NAND flash memory), the memory device 104 having an array of memory cells 202 and peripheral circuitry including a page buffer 204, column decoder/bit line drivers 206, row decoder/word line drivers 208, a voltage generator 210, a control logic unit 212, a register 214, and an interface 216. Fig. 3 shows a schematic circuit diagram of an exemplary memory device 104 that includes a memory cell array 202 and peripheral circuitry 302 coupled to the memory cell array 202. For ease of illustration, some of the components in fig. 2 and 3 are described together. Peripheral circuitry 302 may include page buffer 204, column decoder/bit line drivers 206, row decoder/word line drivers 208, voltage generator 210, control logic 212, registers 214, and interface 216 in fig. 2. It should be understood that in some examples, additional peripheral circuitry may also be included.

In some embodiments, voltage generator 210 may include a plurality of charge pumps and linear regulators. In some embodiments, the memory cell array may include a plurality of planes. In some embodiments, the NAND die may be divided into four planes (i.e., plane 0, plane 1, plane 2, and plane 3) or fewer or more than four planes (e.g., 1, 2, 6, 8 planes, etc.). A plane includes a plurality of memory cells that may be grouped into memory blocks. The memory block is typically the smallest erasable entity in a NAND flash memory die. In one example, a memory block includes a plurality of cells coupled to the same bit line. The memory blocks include one or more pages of cells. The size of the page may vary depending on the implementation. In one example, the page has a size of 16 kB. Page sizes less than or greater than 16kB are also possible (e.g., 512B, 2kB, 4kB, etc.).
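
For illustration only, the following Python sketch works through the capacity arithmetic implied by the plane/block/page hierarchy above; the plane, block, and page counts are assumed example values rather than figures taken from this disclosure, and only the 16 kB page size comes from the example above.

PLANES_PER_DIE = 4            # e.g., plane 0 through plane 3
BLOCKS_PER_PLANE = 1024       # assumed value for illustration
PAGES_PER_BLOCK = 256         # assumed value for illustration
PAGE_SIZE_BYTES = 16 * 1024   # 16 kB page, as in the example above

die_capacity = (PLANES_PER_DIE * BLOCKS_PER_PLANE *
                PAGES_PER_BLOCK * PAGE_SIZE_BYTES)
print(f"Die capacity: {die_capacity / 2**30:.1f} GiB")  # 16.0 GiB with these example numbers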

In some embodiments, the row decoder/word line driver 208 may select one of the memory blocks in the memory cell array 202 in response to an Address (ADD). The row decoder/word line driver 208 may select one of the word lines of the selected memory block in response to the address ADD. The row decoder/word line driver 208 may transfer a voltage corresponding to the mode of operation to the word lines of the selected memory block. During a program operation, the row decoder/word line driver 208 may transfer a program voltage and a verify voltage to the selected word line and a pass voltage to the unselected word lines. During a read operation, the row decoder/word line driver 208 may transfer a select read voltage to the selected word line and a non-select read voltage to the unselected word lines.
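
As a rough behavioral sketch of the word line biasing just described (the voltage values are placeholders, not values specified in this disclosure), a word line driver can be modeled as a function that returns a per-word-line bias map for one phase of an operation:

def wordline_bias(operation, selected_wl, num_wordlines):
    """Return an assumed bias (in volts) for each word line for one operation phase."""
    V_PGM, V_VFY, V_PASS = 20.0, 0.5, 10.0    # assumed program/verify/pass voltages
    V_READ_SEL, V_READ_UNSEL = 0.0, 6.0       # assumed select/non-select read voltages
    sel = {"program": V_PGM, "verify": V_VFY, "read": V_READ_SEL}[operation]
    unsel = V_PASS if operation in ("program", "verify") else V_READ_UNSEL
    return {wl: (sel if wl == selected_wl else unsel) for wl in range(num_wordlines)}

For example, wordline_bias("read", 5, 64) applies the select read voltage to word line 5 and the non-select read voltage to the remaining 63 word lines.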

NAND memory devices are capable of performing read operations on one plane at a time. Such NAND memory devices have a single state machine for the entire die. If one plane is being read, the other planes are idle. Therefore, such a read (referred to as a single-plane read) does not utilize all planes at the same time. The lack of concurrency results in high latency due to, for example, reads "blocking" behind other reads.

Another type of operation is a multi-plane operation (e.g., a four-plane read where the read is performed on four planes at a time). For multi-plane operations, there are a number of limitations on commands. For array commands, the array operations must be the same (e.g., program, erase, or read, but not combined), and the page types for those array operations must also be the same. The voltage biases to access different page types (e.g., lower page, upper page, etc.) are different, and a single state machine on the die applies the same voltage bias to all planes. For random workloads, it is difficult for read commands to meet this requirement. For random workloads, the probability of receiving reads of the same page type on all four planes is low. Thus, for random workloads, the improvement in read latency with four-plane reads is minimal. Therefore, this feature is not typically used for random read workloads, which are typically considered as critical workloads of SSDs (solid state drives).

Another solution attempted was to combine reads of different page types on different planes into a single command. However, all of these reads are handled by the NAND as a single command, meaning that there is a single start and finish for the read. Thus, with this technique, the read duration is governed by the worst (e.g., slowest) page type, and asynchronous reading is not possible. Therefore, combining different page types on different planes into a single command also results in minimal increases in performance and quality of service (QoS).

Compared to conventional NAND operation, independent multi-plane operation enables independent and concurrent operation per plane. A separate state machine for each plane can apply different bias voltages for each plane to service requests independently and concurrently. All NAND array commands are allowed independently on the plane level, achieving significant performance improvements. An array command is a command that causes an array operation, such as programming data to the array, reading data from the array, erasing a block, or other operation for the array.

In one example, each plane may receive and service different array commands (e.g., read commands, program commands, erase commands, etc.) and may send and complete the commands at different times. Non-array commands (e.g., reset commands, timing mode change commands, etc.) may remain die-level commands. In an alternative example, the read operation is allowed independently on a plane level. Other operations, such as program commands and erase commands, are die-level operations. Furthermore, some read support commands, such as read column enhanced (read column) and read status, may also be plane level commands.
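
The difference between die-level and plane-level command handling described above can be illustrated with a small behavioral sketch (hypothetical class and method names, not an actual NAND controller interface), in which each plane keeps its own command queue and advances independently:

from collections import deque
from dataclasses import dataclass, field

@dataclass
class Plane:
    queue: deque = field(default_factory=deque)   # independent per-plane state

    def submit(self, cmd):      # array commands: read, program, erase
        self.queue.append(cmd)

    def step(self):             # each plane advances on its own schedule
        return self.queue.popleft() if self.queue else None

planes = [Plane() for _ in range(4)]
planes[0].submit(("read", "lower_page"))
planes[1].submit(("read", "upper_page"))   # a different page type is allowed
planes[2].submit(("program", "page_x"))    # a different array command is allowed
completed = [p.step() for p in planes]     # planes may start and finish at different times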

As shown in fig. 3, the memory cell array 202 may be a NAND flash memory cell array, in which memory cells 306 are provided in the form of an array of NAND memory strings 308 each extending vertically above a substrate (not shown). In some implementations, each NAND memory string 308 includes multiple memory cells 306 coupled in series and stacked vertically. Each memory cell 306 may hold a continuous analog value, such as a voltage or charge, depending on the number of electrons trapped within the area of the memory cell 306. Each memory cell 306 may be a floating gate type memory cell including a floating gate transistor or a charge trapping type memory cell including a charge trapping transistor. In one example, memory cell 306 includes a transistor having a replacement gate. Memory cells 306 having a replacement gate typically have a low resistance gate (e.g., a tungsten gate) and a charge trapping layer between the gate and the channel where charge is trapped or stored to represent one or more bit values. In another example, the memory cell 306 may include a transistor having a floating gate (e.g., a high resistance polysilicon gate) that stores charge indicative of one or more bit values. Other architectures are possible.

In some implementations, each memory cell 306 is a single-level cell (SLC) that has two possible memory states and thus can store one bit of data. For example, a first memory state "0" may correspond to a first voltage range, while a second memory state "1" may correspond to a second voltage range. In some implementations, each memory cell 306 is a multi-level cell (MLC) capable of storing more than a single bit of data in four or more memory states. For example, an MLC can store two bits per cell, three bits per cell (also known as a triple-level cell (TLC)), or four bits per cell (also known as a quad-level cell (QLC)). Each MLC may be programmed to assume a range of possible nominal stored values. In one example, if each MLC stores two bits of data, the MLC may be programmed to assume one of three possible programming levels from the erased state by writing one of three possible nominal storage values to the cell. The fourth nominal storage value may be used for the erased state.
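
The relationship between bits per cell and memory states mentioned above can be made concrete with a short worked example: an n-bit cell needs 2^n distinguishable threshold-voltage states, that is, one erased state plus 2^n - 1 programmed levels.

for name, bits in (("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)):
    states = 2 ** bits
    print(f"{name}: {bits} bit(s)/cell -> {states} states "
          f"({states - 1} program levels + erased state)")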

As shown in fig. 3, each NAND memory string 308 may include a Source Select Gate (SSG) 310 at its source end and a Drain Select Gate (DSG) 312 at its drain end. The SSG 310 and the DSG 312 are the gate electrodes of the SSG transistor and the DSG transistor, respectively, and may be configured to activate selected NAND memory strings 308 (columns of the array) during read and program operations. In some implementations, for example, the SSGs 310 of the NAND memory strings 308 in the same block 304 are coupled to ground through the same Source Line (SL) 314 (e.g., a common SL). According to some embodiments, the DSG 312 of each NAND memory string 308 is coupled to a respective bit line 316 from which data can be read via an output bus (not shown). In some implementations, each NAND memory string 308 is configured to be selected or deselected by applying a select voltage (e.g., higher than the threshold voltage of the transistor having the DSG 312) or a deselect voltage (e.g., 0 V) to the respective DSG 312 via one or more DSG lines 313 and/or by applying a select voltage (e.g., higher than the threshold voltage of the transistor having the SSG 310) or a deselect voltage (e.g., 0 V) to the respective SSG 310 via one or more SSG lines 315.

As shown in FIG. 3, the NAND memory strings 308 may be organized into a plurality of blocks 304, each of which may have a common source line 314. In some implementations, each block 304 is the basic unit of data for an erase operation, i.e., all memory cells 306 on the same block 304 are erased at the same time. The memory cells 306 of adjacent NAND memory strings 308 may be coupled by a word line 318 that selects which row of memory cells 306 is affected by read and program operations. In some embodiments, each word line 318 is coupled to a page 320 of memory cells 306, which is the basic unit of data for a programming operation. The size of a page 320 in bits may correspond to the number of NAND memory strings 308 coupled by word lines 318 in one block 304. Each word line 318 may include a plurality of control gates (gate electrodes) at each memory cell 306 in a respective page 320 and a gate line coupled to the control gates. In some cases, dummy word lines that do not contain user data may also be used in the memory array adjacent to the select gate transistors. Such dummy word lines may shield the edge data word lines from certain edge effects.

Peripheral circuitry 302 may be coupled to the memory cell array 202 by the bit lines 316, word lines 318, source lines 314, SSG lines 315, and DSG lines 313. The peripheral circuitry 302 may apply voltages on the bit lines 316, word lines 318, source lines 314, SSG lines 315, and DSG lines 313 to perform multi-pass programming including the proposed NGS scheme in a non-final programming pass. As described above, peripheral circuitry 302 may include any suitable circuitry to facilitate operation of the memory cell array 202 by applying and sensing voltage signals and/or current signals to and from each target memory cell 306 via the bit lines 316, word lines 318, source lines 314, SSG lines 315, and DSG lines 313. The peripheral circuitry 302 may include various types of peripheral circuits formed using MOS technology.

In some embodiments, the peripheral circuitry 302 may include the page buffer 204 as shown in fig. 2. The page buffer 204 is connected to the memory cell array 202 through the bit lines 316, and is configured to store sensing data of the memory cell array 202 in a sensing operation. Page buffer 204 may include a plurality of latch circuits 324, each configured to sense data from a selected one of the memory cells 306 via a bit line 316. The latch circuits 324 are each configured to perform multiple read operations to determine one data state, and each configured to store the result of a read operation. The page buffer 204 is controlled by the control logic unit 212 such that the latch circuits 324 sequentially and respectively store the results of the read operations, compare the data stored in the latch circuits with each other, and select one latch circuit among the latch circuits 324 based on the comparison result.

The programming sequence for a group of memory cells 306 may include programming all desired pages into the group of memory cells 306. The programming sequence may include one or more programming passes. A programming pass (which may include one or more programming cycles) may program one or more pages. A programming pass may include applying one or more effective programming voltages to the cells to be programmed and then applying one or more verify voltages to the cells in order to determine which cells have completed programming (subsequent programming passes typically do not apply effective programming voltages and/or verify voltages to cells that have completed programming). Applying the effective programming voltage to a cell may include changing the voltage difference between the control gate and the channel of the cell in order to change the threshold voltage of the cell. Thus, the voltage of the word line (coupled to the control gate of the target cell) and/or the channel of the cell can be set in order to achieve application of the effective programming voltage. Since a program voltage is typically used to refer to a voltage applied to a word line, the effective program voltage may be the voltage difference between the control gate and the channel of the cell (in the case where the channel is held at 0 V, the effective program voltage may be synonymous with the program voltage).
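
A minimal, hypothetical sketch of the program-and-verify loop described above is given below; the threshold-voltage model, step size, and loop limit are simplifications for illustration, not parameters of the disclosed device.

def program_pass(cells, targets, v_step=0.5, max_loops=10):
    """cells: dict cell_id -> threshold voltage; targets: dict cell_id -> target Vt."""
    pending = set(cells)
    for _ in range(max_loops):
        for c in pending:
            cells[c] += v_step    # each pulse shifts Vt by roughly one step (simplified model)
        pending = {c for c in pending if cells[c] < targets[c]}   # verify step
        if not pending:           # all cells reached their target level
            break
    return pending                # cells still below target after max_loops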

Fig. 4A illustrates a perspective view of a portion of an exemplary three-dimensional (3D) memory cell array structure 400, according to some embodiments. The memory cell array structure 400 includes a substrate 430, an insulating film 431 over the substrate 430, a one-level Bottom Select Gate (BSG) 432 over the insulating film 431, and multi-level control gates 433 (also referred to as "word lines" (WL)) stacked on top of the BSG 432 to form a film stack 435 of alternating conductive and dielectric layers. For clarity, the dielectric layers adjacent to the various levels of control gates are not shown in FIG. 4A.

The control gates of each level are separated by slit structures 416-1 and 416-2 through the film stack 435. The memory cell array structure 400 also includes a one-level Top Select Gate (TSG) 434 located over the stack of control gates 433. The stack of TSG 434, control gates 433, and BSG 432 is also referred to as a "gate electrode". The memory cell array structure 400 further includes memory strings 412 and doped source line regions 444 in portions of the substrate 430 between adjacent BSGs 432. Each memory string 412 includes a channel hole 436 extending through the insulating film 431 and the film stack 435 of alternating conductive and dielectric layers. The memory string 412 also includes a memory film 437 on the sidewalls of the channel hole 436, a channel layer 438 over the memory film 437, and a core fill film 439 surrounded by the channel layer 438. A memory cell 440 may be formed at the intersection of the control gate 433 and the memory string 412. The portion of the channel layer 438 under the control gate 433 is also referred to as the channel of the memory cell 440. The memory cell array structure 400 also includes a plurality of Bit Lines (BL) 441 connected with the memory strings 412 above the TSG 434. The memory cell array structure 400 also includes a plurality of metal interconnect lines 443 connected to the gate electrodes through a plurality of contact structures 414. The edges of the film stack 435 are configured in a stepped shape to allow electrical connection to the gate electrode of each level.

In FIG. 4A, three levels of control gates 433-1, 433-2, and 433-3, as well as a level of TSG 434 and a level of BSG 432 are shown for illustrative purposes. In this example, each memory string 412 may include three memory cells 440-1, 440-2, and 440-3, which correspond to control gates 433-1, 433-2, and 433-3, respectively. The number of control gates and the number of memory cells may be greater than three to increase storage capacity. The memory cell array structure 400 may also include other structures, such as TSG cut structures, common source contacts, dummy memory strings, and the like. For simplicity, these structures are not shown in fig. 4A.

Fig. 4B illustrates a schematic diagram of an example 3D memory device 450 in plan view, according to some embodiments of the present disclosure. The 3D memory device 450 may include multiple channel structure regions, such as memory planes, memory blocks, memory fingers, and the like. Alternatively, the 3D memory device 450 may include one or more Through Array Contact (TAC) structures formed between two adjacent channel structure regions. In some embodiments, as shown in fig. 4B, the 3D memory device 450 may include four or more memory planes 460, each of which may include a plurality of memory blocks 465. It should be noted that the arrangement of the memory planes 460 in the 3D memory device 450 and the arrangement of the memory blocks 465 in each memory plane 460 shown in fig. 4B are merely examples, which do not limit the scope of the present disclosure.

The TAC structures may include one or more Bit Line (BL) TAC regions 471 sandwiched between two adjacent memory blocks 465 in a bit line direction (labeled "BL" in the figures) of the 3D memory device and extending along a word line direction (labeled "WL" in the figures) of the 3D memory device, one or more Word Line (WL) TAC regions 473 sandwiched between two adjacent memory blocks 465 in the word line direction (WL) and extending along the bit line direction (BL), and one or more ladder structure (SS) TAC regions 480 located at edges of each memory plane 460.

In some embodiments, the 3D memory device 450 may include a plurality of contact pads 490 arranged in a line at an edge of the 3D memory device 450. The contact pads 490 may be used to electrically interconnect the 3D memory device 450 to any suitable device and/or interface that provides drive power, receives control signals, transmits response signals, and the like.

FIG. 5 illustrates a schematic diagram of an example memory block and corresponding page buffer of a 3D NAND device, according to some embodiments.

As shown in fig. 5, in each memory block 500, a plurality of word lines 51-1 to 51-m extending in the Word Line (WL) direction may be arranged parallel to each other to be distributed along the Bit Line (BL) direction. Each of the plurality of word lines 51-1 through 51-m may be connected to a corresponding row of memory cells 306 of an adjacent NAND memory string 308 (see FIG. 3).

Each memory block 500 may further include a plurality of bit lines (e.g., 52-1 to 52-n) extending in the BL direction and arranged in parallel with each other to be distributed along the WL direction. Along the BL direction, each NAND memory string (e.g., NAND memory string 308 as described above with reference to FIG. 3) can be coupled to two or more bit line segments (e.g., 52-1A and 52-1B, 52-3A and 52-3B, 52-nA and 52-nB, etc.) that are not directly connected to each other. Note that FIG. 5 shows an example embodiment in which each bit line of a corresponding NAND memory string includes two bit line segments. In some other embodiments, each bit line may include more than two bit line segments that are not directly connected to each other. The bit line segments corresponding to a given NAND memory string may be commonly connected to a corresponding page buffer. For example, as shown in FIG. 5, the bit line segments 52-1A and 52-1B are commonly connected to the page buffer 53-1, the bit line segments 52-3A and 52-3B are commonly connected to the page buffer 53-3, and the bit line segments 52-nA and 52-nB are commonly connected to the page buffer 53-n.
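
The sharing scheme of FIG. 5 can be summarized as a simple mapping (hypothetical identifiers mirroring the reference numerals above): both segments of a bit line resolve to the same page buffer, rather than to two separate ones.

segments = {
    "52-1": ["52-1A", "52-1B"],
    "52-3": ["52-3A", "52-3B"],
    "52-n": ["52-nA", "52-nB"],
}
page_buffer_of = {}
for bitline, segs in segments.items():
    pb = "53-" + bitline.split("-")[1]   # one page buffer per bit line
    for s in segs:
        page_buffer_of[s] = pb           # both segments map to the same page buffer
# e.g., page_buffer_of["52-1A"] == page_buffer_of["52-1B"] == "53-1"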

As described above, the page buffers 53-1 to 53-n may operate as write drivers or sense amplifiers. To determine a specific data state and perform a read operation on a selected memory cell, each page buffer may perform a sensing operation at its sensing node ("SO", also referred to as "sense output") a plurality of times during different setup periods, select data from among the sensing results, and output the selected data. Specifically, during a program operation, each page buffer may transfer a bit line voltage corresponding to the data to be programmed to the corresponding bit line segment of the memory cell array. During a read operation or a sensing operation, each page buffer may sense the data stored in a selected memory cell through the corresponding bit line segment.

Note that in some prior designs, each NAND memory string corresponds to one bit line that is not divided into two or more bit line segments as shown in FIG. 5. In such prior designs, one block of data is used as a unit in a programming operation or a read operation, which means that data is read block by block. In order to read multiple blocks of data simultaneously to save operating time, in some other existing designs, each bit line may be divided into two or more bit line segments, and each bit line segment is connected to a separate page buffer. However, this prior design increases the number of page buffers, thereby increasing the chip area.

Unlike prior designs, the present disclosure provides page buffers 53-1 to 53-n, each corresponding to two or more bit line segments. Each of the page buffers 53-1 to 53-n can read two or more data blocks simultaneously. That is, in order to determine the data state stored in one of the memory cells selected according to the control of the control logic unit 212, each of the page buffers 53-1 to 53-n may simultaneously perform a plurality of sensing operations.

Fig. 6A illustrates a schematic block diagram of an example page buffer of a 3D NAND device, in accordance with some embodiments. Fig. 6B and 6C show schematic logic circuit diagrams of example page buffers of a 3D NAND device, according to some embodiments.

As shown in fig. 6A, the page buffer 600-1 may include a cache latch (C-latch) 610, a Low Voltage (LVT) latch (L-latch) 620, and at least two bit line segment sensing branches 630 and 640 connected in parallel with the L-latch 620. Each of the two bit line segment sensing branches 630 and 640 may be connected to a corresponding bit line segment (e.g., 52-1A, 52-1B, etc., as shown in FIG. 5). In some embodiments, each bit line segment sensing branch (e.g., 630, 640) may include a sense latch (S-latch, e.g., 633, 643) and a bit line precharge path (e.g., 631, 641). In some other embodiments not shown in fig. 6A, when the page buffer is coupled to drive more than two bit line segments, the page buffer may include more than two bit line segment sensing branches connected in parallel with the L-latch 620, with each bit line segment sensing branch corresponding to a separate bit line segment.
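
As a structural sketch only (the field names below are assumptions; the reference designators noted in comments are the ones from FIG. 6A), the arrangement above can be modeled as one cache latch and one low-voltage latch shared by two or more parallel sensing branches, each branch holding its own sense latch and precharge path:

from dataclasses import dataclass, field
from typing import List

@dataclass
class SenseBranch:
    s_latch: int = 0                 # sense latch (e.g., S-latch 633 or 643)
    precharge_on: bool = False       # bit line precharge path (e.g., 631 or 641)

@dataclass
class PageBuffer:
    c_latch: int = 0                 # cache latch (C-latch 610)
    l_latch: int = 0                 # low voltage latch (L-latch 620)
    branches: List[SenseBranch] = field(
        default_factory=lambda: [SenseBranch(), SenseBranch()])

pb = PageBuffer()                    # two branches, one per bit line segment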

Fig. 6B and 6C show a detailed circuit diagram of an exemplary page buffer 600. Note that the first section 600-1 of the page buffer circuit is connected to the second section 600-2 of the page buffer circuit through the sensing node SO and the command signal node COM_S.

Referring to FIG. 6B, the first bitline segment sensing branch 630 is connected to the sensing node SO through a first switch 691, and the second bitline segment sensing branch 640 is connected to the sensing node SO through a second switch 692. Each bitline segment sensing branch (e.g., 630, 640) may include a separate bitline precharge path (e.g., 631, 641) configured to precharge the corresponding bitline segment. Each bitline segment sensing branch (e.g., 630, 640) may further include a separate S-latch (e.g., 633, 643) configured to sense the state established at the sensing node SO. Each bitline segment sensing branch (e.g., 630, 640) may further include a separate bit line voltage supply and selection circuit (e.g., 635, 645) configured to supply a bit line voltage to the corresponding bitline segment and select the corresponding bitline segment for program and read operations. Referring to fig. 6C, note that one or more additional latch circuits (not shown) may be connected between the C-latch 610 and the L-latch 620. It should also be noted that, although not shown in the figures, each bit line may be divided into a plurality of (e.g., 3, 4, or more) bit line segments along the bit line direction. In this case, the page buffer may include the same number of bit line segment sensing branches, each corresponding to one bit line segment.

Referring to fig. 7, a schematic block diagram of an example page buffer operational timing sequence for a read operation is shown, in accordance with some embodiments. Since the first bitline segment sensing branch 630 and the second bitline segment sensing branch 640 are connected in parallel to the sensing node SO, the precharge operation, the SO setup operation, and the SO sensing operation of the two bitline segments can be performed in parallel.

As shown in fig. 7, during a first period 710, a precharge operation may be performed by the page buffer to precharge the SO and the first and second bitline segments simultaneously. For example, the first and second bit line segments 52-1A and 52-1B and the sensing node SO connected to the two parallel bit line precharge paths 631 and 641 in the page buffer 53-1, respectively, may be simultaneously precharged to a certain level during the first period 710.

Similarly, during the second period (setup time) 720 and the third period (sensing time) 730, the page buffer may perform the setup operation and the sensing operation on the first and second bit line segments simultaneously. For example, for each bit line segment, the voltage of the sensing node SO may be controlled during the second time period (setup time) 720 based on the corresponding bit line segment connection control signal and the sensing node voltage control signal. In addition, the page buffer may determine the logic level of the sensing node SO during the third time period (sensing time) 730, storing the sensing data obtained by sensing the voltage level of the sensing node SO in the two parallel S-latches 633 and 643.

Since the parallel first and second bitline segment sensing branches 630 and 640 share a common L-latch 620 and C-latch 610, the caching functions for the two bitline segments are performed in sequence. For example, during the fourth time period 740, stored data from the first bit line segment may be transferred from the first S-latch 633 to the C-latch 610 for subsequent output. After the data transfer of the first bitline segment is complete, during a fifth time period 750, stored data from the second bitline segment may be transferred from the second S-latch 643 to the C-latch 610 for subsequent output.
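
For illustration, the timing of FIG. 7 can be compared to reading the two segments one after another with a simple arithmetic sketch; the durations below are arbitrary units chosen for the example, not measured values.

T_PRECHARGE, T_SETUP, T_SENSE, T_XFER = 3, 4, 2, 1   # assumed per-phase durations

parallel_read = T_PRECHARGE + T_SETUP + T_SENSE          # both segments sensed together
total_shared_pb = parallel_read + 2 * T_XFER             # periods 710 through 750
total_two_separate_reads = 2 * (parallel_read + T_XFER)  # reading the segments sequentially
print(total_shared_pb, total_two_separate_reads)         # 11 vs. 20 units in this example

The point of the comparison is only that the sequential part of the flow shrinks to the two cache transfers through the shared C-latch, while the longer precharge, setup, and sensing phases are shared in parallel.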

Note that in some embodiments, the time periods 710, 720, and 730 for each bit line segment may be different. For example, the precharge operation of one of the first bitline segment and the second bitline segment may be completed earlier than the precharge operation of the other bitline segment. In this case, one method is to start the setup operations of the first and second bit line segments simultaneously after the precharge operations of both bit line segments are completed, and to start the sensing operations of the first and second bit line segments simultaneously after the setup operations of both bit line segments are completed. Alternatively, the setup operation for a bit line segment may begin immediately after the precharge operation for that bit line segment is completed. Similarly, the sensing operation for a bit line segment may begin immediately after the setup operation for that bit line segment is completed. A bit line segment that completes its sensing operation first may enter the data transfer operation directly. Both methods allow the precharge, setup, and sensing operations of different bit line segments to be performed simultaneously on a time-parallel basis.
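
The two scheduling options above can be contrasted with a small sketch (the per-segment phase durations are assumed inputs): the first option inserts a barrier after each phase, while the second lets each segment advance as soon as its own previous phase finishes.

def sense_finish_times(durations, synchronized=True):
    """durations: list of (precharge, setup, sense) tuples, one tuple per bit line segment."""
    if synchronized:
        t = 0
        for phase in range(3):                     # barrier after each phase
            t += max(d[phase] for d in durations)
        return [t] * len(durations)
    return [sum(d) for d in durations]             # each segment pipelines on its own

# Example: the second segment precharges more slowly than the first.
print(sense_finish_times([(3, 4, 2), (5, 4, 2)], synchronized=True))    # [11, 11]
print(sense_finish_times([(3, 4, 2), (5, 4, 2)], synchronized=False))   # [9, 11]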

It should be noted that the above description in connection with figs. 5, 6A, 6B, and 7 is based on an exemplary page buffer including two parallel bitline segment sensing branches. In some other embodiments, a bit line may be divided into three or more bit line segments sharing one page buffer, with the page buffer including three or more parallel bit line segment sensing branches. Each of the three or more parallel bit line segment sensing branches may be connected to the sensing node SO through a separate switch to independently perform the precharge operation, the setup operation, and the sensing operation for the corresponding bit line segment.
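Continuing the behavioral sketch given after the fig. 7 discussion (hypothetical code, assuming the PageBuffer and SegmentBranch classes defined there are available in the same module), extending the model to three or more segments only requires instantiating additional branches, since each branch has its own switch to the shared sensing node:

    # Assumes the PageBuffer and SegmentBranch classes from the earlier sketch.
    pb3 = PageBuffer([SegmentBranch(f"segment {i}") for i in range(3)])
    print(pb3.read([True, False, True]))  # e.g. -> [0, 1, 0]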

Accordingly, the present disclosure provides a page buffer circuit for a 3D NAND device that allows two or more sensing operations to be performed simultaneously on two or more bit line segments using the same control signals and without increasing the number of page buffers. The read speed of the 3D NAND device can therefore be increased without increasing the number of MOS transistors in the peripheral circuits, improving product performance while maintaining a compact size of the 3D NAND device.

One aspect of the present disclosure provides a page buffer circuit of a memory device, including: a first bitline segment sensing branch connected to a first bitline segment of a bitline; and a second bitline segment sensing branch connected to a second bitline segment of the bitline; wherein the first bitline segment sensing branch and the second bitline segment sensing branch are connected in parallel to a sensing node of the page buffer circuit.

In some embodiments, the first bitline segment sensing branch comprises a first sensing latch and a first bitline precharge path; and the second bitline segment sensing branch includes a second sensing latch and a second bitline precharge path.

In some embodiments, the first bitline segment sensing branch is connected to the sensing node through a first switch; and the second bitline segment sensing branch is connected to the sensing node through a second switch.

In some embodiments, the first bitline segment is aligned with the second bitline segment along the bitline direction.

In some embodiments, the first bitline segment and the second bitline segment are separately connected with the same memory cell string.

In some embodiments, the memory device is a three-dimensional NAND memory device and the memory cell string is a vertically stacked string of memory cells.

In some embodiments, the page buffer circuit further includes a low voltage latch and a cache latch.

In some embodiments, the first bitline segment sensing branch and the second bitline segment sensing branch are commonly connected to a low voltage latch and a cache latch.

In some embodiments, the page buffer circuit further includes: a third bitline segment sensing branch connected to a third bitline segment of the bitline; wherein the first, second, and third bitline segment sensing branches are connected in parallel to a sensing node of the page buffer circuit.

In some embodiments, the third bitline segment sensing branch includes a third sensing latch and a third bitline precharge path.

In some embodiments, the third bitline segment sensing branch is connected to the sensing node through a third switch.

In some embodiments, the first, second, and third bitline segments are aligned with one another along the bitline direction.

In some embodiments, the first bitline segment, the second bitline segment, and the third bitline segment are separately connected with the same memory cell string.

In some embodiments, the first, second, and third bitline segment sensing branches are commonly connected to a low voltage latch and a cache latch.

Another aspect of the present disclosure provides a memory device including: a plurality of bit lines extending in parallel along a bit line direction, each bit line including at least two bit line segments; and a plurality of page buffers, each page buffer corresponding to one of the plurality of bit lines; wherein at least two bit line segments of each bit line are commonly connected to the same corresponding page buffer.

In some embodiments, each page buffer includes: a first bitline segment sensing branch connected to the first bitline segment; and a second bitline segment sensing branch connected to the second bitline segment; wherein the first bitline segment sensing branch and the second bitline segment sensing branch are connected in parallel to a sensing node of the page buffer circuit.

Another aspect of the present disclosure provides a method of performing a read operation by a memory device, comprising: simultaneously performing a precharge operation, a setup operation, and a sensing operation on at least two bit line segments aligned with each other along a bit line direction through at least two bit line segment sensing branches in a page buffer circuit; wherein the at least two bit line segments are respectively connected to the at least two bit line segment sensing branches in the same page buffer circuit.

Another aspect of the present disclosure provides a memory system including: a memory device including a plurality of bit lines extending in parallel along a bit line direction, each bit line including at least two bit line segments, and a plurality of page buffers, each page buffer corresponding to one of the plurality of bit lines, wherein the at least two bit line segments of each bit line are commonly connected to the same corresponding page buffer; and a memory controller configured to simultaneously perform a precharge operation, a setup operation, and a sensing operation on the at least two bit line segments of a corresponding bit line through at least two bit line segment sensing branches in one page buffer circuit.

The foregoing description of the specific embodiments will reveal the general nature of the invention so that others can, by applying knowledge within the skill of the art, readily modify and/or adapt for various applications such specific embodiments without undue experimentation and without departing from the general concept of the present invention. Therefore, such changes and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the disclosure and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the present disclosure and guidance.

Embodiments of the present disclosure have been described above with the aid of functional building blocks illustrating the implementation of specific functions and their relationships. The boundaries of these functional building blocks have been arbitrarily defined herein for convenience of description. Alternate boundaries may be defined so long as the specified functions and relationships thereof are appropriately performed.

The summary and abstract sections may set forth one or more, but not all, exemplary embodiments of the disclosure as contemplated by one or more inventors, and are therefore not intended to limit the disclosure and the appended claims in any way.

The breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
