Memory system and operating method thereof

Document No.: 1627680 | Publication date: 2020-01-14

Note: This technology, Memory system and operating method thereof, was designed and created by 罗炯柱 and 李宗珉 on 2019-02-18. Abstract: The invention discloses a memory system, comprising: a memory device having an open block and closed memory blocks; a page counting unit that counts the number of programmed pages in the open block whenever data is programmed in the open block, and counts the number of valid pages of the closed memory blocks; a valid page reduction amount counting unit that calculates the sum of the valid pages reduced in the closed memory blocks before and after a map update operation; and a garbage collection unit that performs a garbage collection operation on a victim block when the number of free blocks included in the memory device is less than a first threshold and greater than a second threshold, and the ratio of the number of programmed pages in the open block to the sum of the reduced valid pages is greater than or equal to a fourth threshold.

1. A memory system, comprising:

a memory device including an open block and a closed memory block;

a page counting unit counting the number of program pages in the open block every time data is programmed in the open block and counting the number of valid pages of the closed memory block;

a valid page reduction amount counting unit which calculates the sum of the valid pages reduced in the closed memory block before and after a map update operation; and

a garbage collection unit performing a garbage collection operation on a victim block when the number of free blocks included in the memory device is less than a first threshold and greater than a second threshold, and a ratio of the number of program pages in the open block to a sum of reduced valid pages is greater than or equal to a fourth threshold.

2. The memory system of claim 1, further comprising:

an emergency sensing unit to count the number of free blocks and compare the number of free blocks with the first threshold and the second threshold, respectively.

3. The memory system of claim 2, wherein the garbage collection unit is to perform the garbage collection operation on the victim block when the number of free blocks is less than the second threshold.

4. The memory system according to claim 2, wherein the page counting unit counts the number of valid pages before an initial map update operation when the number of free blocks is smaller than the first threshold and larger than the second threshold, and counts the number of valid pages after a final map update operation when the number of programmed pages of the open block exceeds a third threshold.

5. The memory system according to claim 1, wherein the valid page reduction amount counting unit counts the number of valid pages reduced before and after the map update operation for each closed memory block.

6. The memory system of claim 5, wherein the sum of the number of reduced valid pages in the closed memory block comprises a sum of the valid page reduction amounts counted for each closed memory block.

7. The memory system of claim 2, wherein the free blocks are blocks for which a number of empty pages is greater than or equal to a predetermined threshold.

8. The memory system of claim 1, wherein the open block comprises a memory block that is performing a programming operation.

9. The memory system of claim 8, wherein the closed memory blocks comprise memory blocks that have a non-zero number of valid pages and are not open blocks.

10. The memory system of claim 1, wherein the garbage collection unit performs the garbage collection operation by copying valid data of the victim block into an empty page of a target block.

11. A method for operating a memory system, comprising:

counting a number of program pages in an open block and counting a number of valid pages of a closed memory block each time data is programmed in the open block;

calculating the sum of the valid pages reduced in the closed memory block before and after a map update operation; and

performing a garbage collection operation on a victim block when a number of free blocks included in the memory device is less than a first threshold and greater than a second threshold, and a ratio of the number of program pages in the open block to a sum of reduced valid pages is greater than or equal to a fourth threshold.

12. The method of claim 11, further comprising:

counting the number of free blocks and comparing the number of free blocks with the first threshold and the second threshold, respectively.

13. The method of claim 12, wherein performing the garbage collection operation comprises:

performing the garbage collection operation on the victim block when the number of free blocks is less than the second threshold.

14. The method of claim 12, wherein counting the number of programming pages comprises:

counting a number of valid pages before an initial map update operation when the number of free blocks is less than the first threshold and greater than the second threshold, and counting a number of valid pages after a final map update operation when the number of programmed pages of the open block exceeds a third threshold.

15. The method of claim 11, wherein calculating a sum of reduced valid pages in the closed memory block before and after the map update operation comprises:

counting the number of valid pages that decrease before and after the map update operation for each closed memory block.

16. The method of claim 15, wherein the sum of the number of valid pages reduced in the closed memory block comprises a sum of the valid page reduction amounts counted for each closed memory block.

17. The method of claim 12, wherein the free blocks are blocks for which a number of empty pages is greater than or equal to a predetermined threshold.

18. The method of claim 11, wherein the open block comprises a memory block that is performing a programming operation.

19. The method of claim 18, wherein the closed memory block comprises a memory block whose number of valid pages is not zero and which is not the open block.

20. The method of claim 11, wherein performing the garbage collection operation comprises:

performing the garbage collection operation by copying valid data of the victim block into an empty page of a target block.

Technical Field

Exemplary embodiments of the present invention relate to a memory system, and more particularly, to a memory system that can efficiently perform a garbage collection operation and a method for operating the same.

Background

Computer environment paradigms have turned into pervasive computing that enables computing systems to be used anytime and anywhere. Therefore, the use of portable electronic devices such as mobile phones, digital cameras, and laptop computers has been rapidly increasing. These portable electronic devices typically use a memory system having one or more memory devices to store data. The memory system may be used as a primary memory device or a secondary memory device of the portable electronic device.

The memory system provides superior stability, durability, high information access speed, and low power consumption, compared to a hard disk device, because the memory system has no moving parts. Examples of memory systems having these advantages include Universal Serial Bus (USB) memory devices, memory cards with various interfaces, and Solid State Drives (SSDs).

Disclosure of Invention

Embodiments of the present invention relate to a memory system that can efficiently perform garbage collection operations.

According to an embodiment of the present invention, a memory system includes: a memory device including an open block and a closed memory block; a page counting unit adapted to count the number of program pages in the open block and count the number of valid pages of the closed memory block whenever data is programmed in the open block; a valid page reduction amount counting unit adapted to calculate the sum of the valid pages reduced in the closed memory blocks before and after a map update operation; and a garbage collection unit adapted to perform a garbage collection operation on a victim block when the number of free blocks included in the memory device is less than a first threshold and greater than a second threshold, and a ratio of the number of program pages in the open block to the sum of the reduced valid pages is greater than or equal to a fourth threshold.

According to another embodiment of the present invention, a method for operating a memory system includes: counting the number of program pages in an open block and counting the number of valid pages of a closed memory block each time data is programmed in the open block; calculating the sum of the valid pages reduced in the closed memory blocks before and after a map update operation; and performing a garbage collection operation on a victim block when the number of free blocks included in the memory device is less than a first threshold and greater than a second threshold, and a ratio of the number of program pages in the open block to the sum of the reduced valid pages is greater than or equal to a fourth threshold.

According to another embodiment of the present invention, a memory system includes: a memory device including a plurality of memory blocks including an open block and closed memory blocks; and a controller adapted to control the memory device, wherein the controller: programs pages of the open block and reduces the number of valid pages in the closed memory blocks; and selectively performs a garbage collection operation based on the number of free blocks and on a ratio of the number of programmed pages of the open block to the decrease in the number of valid pages in the closed memory blocks.

Drawings

FIG. 1 is a block diagram illustrating a data processing system including a memory system according to an embodiment of the present invention.

Fig. 2 is a diagram illustrating a memory device employed in the memory system shown in fig. 1.

Fig. 3 is a circuit diagram illustrating a memory cell array of a memory block in the memory device shown in fig. 1.

Fig. 4 is a block diagram illustrating a structure of a memory device of a memory system according to an embodiment of the present invention.

Fig. 5 is a block diagram illustrating the structure of a memory system according to an embodiment of the present invention.

FIG. 6 is a flow chart illustrating operation of a memory system according to an embodiment of the present invention.

Fig. 7 illustrates a garbage collection operation based on a first threshold and a second threshold.

Fig. 8 illustrates a garbage collection operation based on the third threshold and the fourth threshold.

Fig. 9-17 are diagrams illustrating exemplary applications of data processing systems according to various embodiments of the present invention.

Detailed Description

Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. This invention may, however, be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. Throughout the disclosure, like reference numerals refer to like parts throughout the various figures and embodiments of the present invention.

It will be understood that, although the terms first, second, third, etc. may be used herein to identify various elements, these elements are not limited by these terms. These terms are used to distinguish one element from another element, which may or may not have the same or similar designation. Therefore, a first element described below may also be referred to as a second element or a third element without departing from the spirit and scope of the present invention.

The drawings are not necessarily to scale and, in some instances, may be exaggerated in scale to clearly illustrate features of embodiments. When an element is referred to as being connected or coupled to another element, it will be understood that the former may be directly connected or coupled to the latter, or electrically connected or coupled to the latter through one or more intervening elements.

It will be further understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly on, connected or coupled to the other element or one or more intervening elements may be present. In addition, it will also be understood that when an element is referred to as being "between" two elements, it can be the only element between the two elements, or one or more intervening elements may also be present.

The terminology used herein is for the purpose of describing particular embodiments and is not intended to be limiting of the invention.

As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms and vice versa, unless the context clearly indicates otherwise.

It will be further understood that the terms "comprises," "comprising," "includes" and "including," when used in this specification, specify the presence of stated elements, but do not preclude the presence or addition of one or more other elements. As used herein, the term "and/or" includes any and all combinations of one or more of the listed items.

Unless otherwise defined, all terms including technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs in light of this disclosure. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of this disclosure and the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.

In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. The present invention may be practiced without some or all of these specific details. In other instances, well known process structures and/or processes have not been described in detail in order to not unnecessarily obscure the present invention.

It should also be noted that in some instances, features or elements described in connection with one embodiment may be used alone or in combination with other features or elements of another embodiment unless explicitly stated otherwise, as will be apparent to one of ordinary skill in the relevant art.

FIG. 1 is a block diagram illustrating a data processing system 100 including a memory system 110 according to an embodiment of the invention.

Referring to FIG. 1, data processing system 100 may include a host 102 operably coupled to a memory system 110.

The host 102 may include any of a variety of portable electronic devices, such as a mobile phone, an MP3 player, and a laptop computer, or any of a variety of non-portable electronic devices, such as a desktop computer, a game console, a Television (TV), and a projector.

The host 102 may include at least one Operating System (OS) or a plurality of operating systems. The host 102 may execute the OS to perform an operation corresponding to the user request on the memory system 110. Here, the host 102 may provide a plurality of instructions corresponding to the user request to the memory system 110. Thus, the memory system 110 may perform certain operations corresponding to a plurality of instructions, i.e., corresponding to user requests. The OS may manage and control the overall functions and operations of the host 102. The OS may support operations between host 102 and a user using data processing system 100 or memory system 110.

The memory system 110 may operate or perform particular functions or operations in response to requests from the host 102, and in particular, may store data to be accessed by the host 102. The memory system 110 may be used as a primary memory system or a secondary memory system for the host 102. The memory system 110 may be implemented using any of various types of storage devices that may electrically couple with the host 102 according to a protocol of the host interface. Non-limiting examples of the memory system 110 include a Solid State Drive (SSD), a Multi Media Card (MMC), and an embedded MMC (eMMC).

The memory system 110 may include various types of storage devices. Non-limiting examples of such storage devices include volatile memory devices such as Dynamic Random Access Memory (DRAM) and Static RAM (SRAM), and non-volatile memory devices such as the following: Read-Only Memory (ROM), Mask ROM (MROM), Programmable ROM (PROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), Ferroelectric RAM (FRAM), Phase-change RAM (PRAM), Magnetoresistive RAM (MRAM), Resistive RAM (RRAM), and flash memory.

Memory system 110 may include a controller 130 and a memory device 150.

The controller 130 and the memory device 150 may be integrated into a single semiconductor device, which may be included in any of various types of memory systems as described above. For example, the controller 130 and the memory device 150 may be integrated into a single semiconductor device to constitute an SSD, a Personal Computer Memory Card International Association (PCMCIA) card, an SD card including mini-SD, micro-SD, and SDHC, or a UFS device. The memory system 110 may be configured as part of a computer, a smart phone, a portable game player, or as one of various components constituting a computing system.

Memory device 150 may be a non-volatile memory device that retains stored data even when power is not supplied. The memory device 150 may store data provided from the host 102 through a write operation and output data stored in the memory device 150 to the host 102 through a read operation. In an embodiment, memory device 150 may include a plurality of memory dies (not shown), and each memory die may include a plurality of planes (not shown). Each plane may include a plurality of memory blocks 152-156, each memory block may include a plurality of pages, and each page may include a plurality of memory cells coupled to a wordline. In an embodiment, the memory device 150 may be a flash memory having a three-dimensional (3D) stack structure.

The structure of the memory device 150 and the 3D stack structure of the memory device 150 will be described in detail below with reference to fig. 2 to 4.

The controller 130 may control the memory device 150 in response to a request from the host 102. For example, the controller 130 may provide data read from the memory device 150 to the host 102 and store the data provided from the host 102 into the memory device 150. For this operation, the controller 130 may control a read operation, a write operation, a program operation, and an erase operation of the memory device 150.

More specifically, controller 130 may include a host interface (I/F) 132, a processor 134, an Error Correction Code (ECC) unit 138, a Power Management Unit (PMU) 140, a memory interface 142, and a memory 144, all operatively coupled or interfaced via an internal bus.

The host interface 132 may process instructions and data for the host 102. The host interface 132 may communicate with the host 102 through one or more of a variety of interface protocols, such as: Universal Serial Bus (USB), Multi-Media Card (MMC), Peripheral Component Interconnect Express (PCI-e or PCIe), Small Computer System Interface (SCSI), Serial Attached SCSI (SAS), Serial Advanced Technology Attachment (SATA), Parallel Advanced Technology Attachment (PATA), Enhanced Small Disk Interface (ESDI), and Integrated Drive Electronics (IDE). The host interface 132 may be driven by firmware, i.e., a Host Interface Layer (HIL), to exchange data with the host 102.

Further, the ECC unit 138 may correct erroneous bits of data processed by the memory device 150 and may include an ECC encoder and an ECC decoder. The ECC encoder may perform error correction encoding on data to be programmed into the memory device 150 to generate data to which parity bits are added. Data including parity bits may be stored in the memory device 150. The ECC decoder may detect and correct errors included in data read from the memory device 150. The ECC unit 138 may perform error correction using coded modulation such as: Low Density Parity Check (LDPC) codes, Bose-Chaudhuri-Hocquenghem (BCH) codes, turbo codes, Reed-Solomon (RS) codes, convolutional codes, Recursive Systematic Codes (RSC), Trellis Coded Modulation (TCM), and Block Coded Modulation (BCM). However, the ECC unit 138 is not limited to these error correction techniques. Thus, the ECC unit 138 may include any and all circuits, modules, systems, or devices for performing suitable error correction.

PMU 140 may manage the power used and provided in controller 130.

The memory interface 142 may serve as a memory interface or storage interface between the controller 130 and the memory device 150 so that the controller 130 may control the memory device 150 in response to requests from the host 102.

The memory 144 may serve as a working memory for the memory system 110 and the controller 130, and store data for driving the memory system 110 and the controller 130.

The memory 144 may be a volatile memory. For example, the memory 144 may be a Static Random Access Memory (SRAM) or a Dynamic Random Access Memory (DRAM). The memory 144 may be provided internal or external to the controller 130. Fig. 1 shows the memory 144 disposed within the controller 130. In an embodiment, the memory 144 may be an external volatile memory having a memory interface for transferring data between the memory 144 and the controller 130.

As described above, memory 144 may include program memory, data memory, write buffers/caches, read buffers/caches, data buffers/caches, and map buffers/caches to store some of the data needed to perform data write and read operations between host 102 and memory device 150, as well as other data needed by controller 130 and memory device 150 to perform these operations.

Processor 134 may control the overall operation of memory system 110. The processor 134 may use firmware to control the overall operation of the memory system 110. The firmware may be referred to as a Flash Translation Layer (FTL). Processor 134 may be implemented using a microprocessor or Central Processing Unit (CPU).

For example, the controller 130 may perform operations requested by the host 102 in the memory device 150 through the processor 134 implemented as a microprocessor, CPU, or the like. Also, the controller 130 may perform background operations on the memory device 150 through the processor 134. Background operations performed on the memory device 150 may include: an operation of copying data stored in some of the memory blocks 152 to 156 of the memory device 150 into other memory blocks, for example, a Garbage Collection (GC) operation; an operation of swapping the memory blocks 152 to 156 or the data of the memory blocks 152 to 156, for example, a Wear Leveling (WL) operation; an operation of storing the map data held in the controller 130 into the memory blocks 152 to 156, for example, a map flush operation; or an operation of managing bad blocks of the memory device 150, for example, a bad block management operation of detecting and processing bad blocks among the memory blocks 152 to 156 of the memory device 150.

A memory device of a memory system according to an embodiment of the present invention is described in detail with reference to fig. 2 to 4.

Fig. 2 is a diagram illustrating a memory device 150 of the memory system 110 in fig. 1. Fig. 3 is a circuit diagram showing a memory cell array of the memory block 330 in the memory device 150. Fig. 4 is a diagram illustrating a three-dimensional (3D) structure of the memory device 150.

Referring to fig. 2, the memory device 150 may include a plurality of memory blocks BLK0 through BLKN-1, where N is an integer greater than 1. Each of the memory blocks BLK0 through BLKN-1 may include a plurality of pages, for example, 2^M or M pages, the number of which may vary depending on the circuit design, where M is an integer greater than 1. Each of the pages may include a plurality of memory cells coupled to a plurality of word lines WL.

The memory cells in each of the memory blocks BLK0 through BLKN-1 may be single-level cells (SLC) storing 1-bit data or multi-level cells (MLC) storing 2-bit data. Thus, the memory device 150 may include SLC memory blocks or MLC memory blocks, depending on the number of bits that can be expressed or stored in each memory cell of a memory block. An SLC memory block may include a plurality of pages implemented with memory cells that each store 1 bit of data. SLC memory blocks generally have higher data processing performance and higher endurance than MLC memory blocks. An MLC memory block may include a plurality of pages implemented with memory cells that each store multiple bits (e.g., 2 or more bits) of data. MLC memory blocks generally have more data storage space, that is, higher integration density, than SLC memory blocks. In another embodiment, the memory device 150 may include a plurality of triple-level cell (TLC) memory blocks. In yet another embodiment, the memory device 150 may include a plurality of quadruple-level cell (QLC) memory blocks. A TLC memory block may include a plurality of pages implemented with memory cells each capable of storing 3 bits of data. A QLC memory block may include a plurality of pages implemented with memory cells each capable of storing 4 bits of data.

The memory device 150 may be implemented by any one of the following non-volatile memories: Phase Change Random Access Memory (PCRAM), Resistive Random Access Memory (RRAM or ReRAM), Ferroelectric Random Access Memory (FRAM), and Spin Transfer Torque Magnetic Random Access Memory (STT-RAM or STT-MRAM).

The memory blocks BLK0 through BLKN-1 may store data transferred from the host 102 through a program operation, and may transfer data stored in the memory blocks to the host 102 through a read operation.

Referring to fig. 3, the memory block 330 may include a plurality of cell strings 340 coupled to a plurality of respective bit lines BL0 through BLm-1. The cell string 340 of each column may include one or more ground selection transistors GST and one or more string selection transistors SST. The plurality of memory cells MC0 through MCn-1 may be coupled in series between the ground selection transistor GST and the string selection transistor SST. In an embodiment, each of the memory cell transistors MC0 through MCn-1 may be implemented by an MLC capable of storing multi-bit data information. Each of the cell strings 340 may be electrically coupled to a respective bit line among a plurality of bit lines BL0 through BLm-1. For example, as shown in FIG. 3, the first cell string is coupled to first bit line BL0, and the last cell string is coupled to last bit line BLm-1.

Although fig. 3 shows a NAND flash memory cell, the present disclosure is not limited thereto. It should be noted that the memory cells may be NOR flash memory cells or hybrid flash memory cells in which two or more types of memory cells are combined. Also, it should be noted that the memory device 150 may be a flash memory device including a conductive floating gate as a charge storage layer, or a charge trap flash (CTF) memory device including an insulating layer as a charge storage layer.

The memory device 150 may further include a voltage supply device 310, the voltage supply device 310 generating different wordline voltages including a program voltage, a read voltage, and a pass voltage to be supplied to the wordlines according to an operation mode. The voltage generating operation of the voltage supply device 310 may be controlled by a control circuit (not shown). Under the control of the control circuit, the voltage supply device 310 may select at least one of the memory blocks (or sectors) of the memory cell array, select at least one of the word lines of the selected memory block, and supply word line voltages to the selected word line and unselected word lines as needed.

The memory device 150 may include read and write (read/write) circuitry 320 controlled by control circuitry. During verify/normal read operations, read/write circuits 320 may operate as sense amplifiers for reading (sensing and amplifying) data from the memory cell array. During a programming operation, the read/write circuits 320 may operate as write drivers for providing voltages or currents to the bit lines according to data to be stored in the memory cell array. During a program operation, the read/write circuits 320 may receive data to be stored into the memory cell array from a buffer (not shown) and drive the bit lines according to the received data. The read/write circuit 320 may include a plurality of page buffers 322 to 326 corresponding to columns (or bit lines) or column pairs (or bit line pairs), respectively. Each of the page buffers 322-326 may include a plurality of latches (not shown).

Memory device 150 may be implemented by a 2D or 3D memory device. In particular, as shown in fig. 4, the memory device 150 may be implemented by a nonvolatile memory device having a 3D stack structure. When the memory device 150 has a 3D structure, the memory device 150 may include a plurality of memory blocks BLK0 through BLKN-1. Here, fig. 4 is a block diagram illustrating storage blocks 152, 154, and 156 of the memory device 150 illustrated in fig. 1. Each of the memory blocks 152, 154, and 156 may be implemented in a 3D structure (or a vertical structure). For example, memory blocks 152, 154, and 156 may include a three-dimensional structure extending in a first direction to a third direction, e.g., an x-axis direction, a y-axis direction, and a z-axis direction.

Each of the memory blocks 330 included in the memory device 150 may include a plurality of NAND strings NS extending in the second direction and a plurality of NAND strings NS extending in the first and third directions. Here, each of the NAND strings NS may be coupled to a bit line BL, at least one string select line SSL, at least one ground select line GSL, a plurality of word lines WL, at least one dummy word line DWL, and a common source line CSL, and each of the NAND strings NS may include a plurality of transistor structures TS.

Briefly, each memory block 330 among the memory blocks 152, 154, and 156 of the memory device 150 may be coupled to a plurality of bit lines BL, a plurality of string select lines SSL, a plurality of ground select lines GSL, a plurality of word lines WL, a plurality of dummy word lines DWL, and a plurality of common source lines CSL, and each memory block 330 may include a plurality of NAND strings NS. Also, in each memory block 330, one bit line BL may be coupled to a plurality of NAND strings NS to implement a plurality of transistors in one NAND string NS. Also, the string selection transistor SST of each NAND string NS may be coupled to the corresponding bit line BL, and the ground selection transistor GST of each NAND string NS may be coupled to the common source line CSL. Here, the memory cell MC may be disposed between the string selection transistor SST and the ground selection transistor GST of each NAND string NS. In other words, multiple memory cells may be implemented in each of memory blocks 330 of memory blocks 152, 154, and 156 of memory device 150.

Unlike a hard disk, flash memory performs program operations and read operations on a page basis, performs erase operations on a block basis, and does not support in-place overwrites. Therefore, to correct original data programmed into a page, the flash memory programs the corrected data into a new page and invalidates the page holding the original data.

A garbage collection operation may refer to an operation of periodically converting invalid pages into empty pages in order to prevent flash memory space from being wasted by the invalid pages generated while data is corrected. The garbage collection operation may include copying the data programmed in the valid pages of a victim block into the empty pages of a target block. Although a garbage collection operation reclaims memory space, it may degrade the performance of foreground operations executed in response to requests from the host 102 of FIG. 1.

When execution of foreground operations is given priority over reclaiming memory space, degradation of foreground performance can be avoided by reducing the frequency of garbage collection operations. However, the frequency of garbage collection can be reduced only when the need to reclaim memory space is low, that is, a workload that generates victim blocks slowly needs to be detected.

Under a workload in which user data is programmed predominantly into empty pages that hold no data, the number of valid pages in the closed memory blocks increases with each program operation, but the number of invalid pages does not. Since such program operations do not generate candidates for garbage collection, i.e., victim blocks, the performance of foreground operations can be maintained by reducing the frequency of garbage collection even when memory space is insufficient. In various embodiments, when the decrease in the number of valid pages of the closed memory blocks within a predetermined time is less than a predetermined threshold, the controller 130 may identify the current workload as one in which user data is programmed predominantly into empty pages.
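
For illustration only, the window-based check described in the preceding paragraph might be sketched in C as follows; the function name, the array-based valid page counts, and the concrete threshold value are assumptions made for this example and are not part of the patent disclosure.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical illustration: sum the per-block decrease in valid pages
     * observed over a fixed observation window and compare it against a
     * threshold.  A small total decrease suggests that the host is writing
     * mostly into empty pages, so few new victim blocks are being created. */
    static bool is_empty_page_workload(const unsigned vpc_start[],
                                       const unsigned vpc_end[],
                                       unsigned num_closed_blocks,
                                       unsigned decrease_threshold)
    {
        unsigned total_decrease = 0;

        for (unsigned i = 0; i < num_closed_blocks; i++) {
            if (vpc_start[i] > vpc_end[i])       /* valid pages only shrink or stay */
                total_decrease += vpc_start[i] - vpc_end[i];
        }
        return total_decrease < decrease_threshold;
    }

    int main(void)
    {
        /* Valid page counts of three closed blocks at the start and end of the window. */
        const unsigned before[] = { 120, 80, 200 };
        const unsigned after[]  = { 119, 80, 200 };  /* only one valid page was invalidated */

        printf("empty-page workload: %s\n",
               is_empty_page_workload(before, after, 3, 10) ? "yes" : "no");
        return 0;
    }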

In various embodiments, even when the number of free blocks is insufficient, the controller 130 may maintain the performance of foreground operations by detecting workload information based on the ratio of the number of programmed pages (i.e., ΔPGM) to the sum of the valid page reduction amounts (i.e., ΣΔVPC) and dynamically changing the frequency of garbage collection operations.

FIG. 5 is a block diagram illustrating a memory system 110 according to an embodiment of the invention. It should be noted that fig. 5 shows only the constituent elements of data processing system 100 of fig. 1 that are relevant to the present invention.

As described above, the memory system 110 may include the memory device 150 and the controller 130. The controller 130 may control the program operations of the memory device 150 and perform garbage collection operations to reclaim memory space.

Referring to fig. 5, the controller 130 may include an emergency sensing unit 502, a page counting unit 504, a map update unit 506, a valid page reduction amount counting unit 508, a workload detection unit 510, and a garbage collection unit 512. The emergency sensing unit 502, the page counting unit 504, the map update unit 506, the valid page reduction amount counting unit 508, the workload detection unit 510, and the garbage collection unit 512 include all circuits, systems, software, firmware, and devices required for their respective operations and functions.

The emergency sensing unit 502 may count the number of free blocks, that is, blocks for which the number of empty pages is greater than or equal to a predetermined threshold. When the number of free blocks is less than a first threshold TH1 and greater than or equal to a second threshold TH2, the emergency sensing unit 502 may provide a trigger signal Signal_trig to the page counting unit 504. When the counted number of free blocks is less than the second threshold TH2, the emergency sensing unit 502 may provide the trigger signal Signal_trig to the garbage collection unit 512. When the number of free blocks is less than the second threshold TH2, the garbage collection unit 512 may perform an unconditional garbage collection operation so that reclaiming memory space takes priority. This will be described later.
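
A minimal sketch, under assumed function and enum names, of how the routing decision of the emergency sensing unit 502 could be expressed; the threshold values reuse the numbers of FIG. 7 purely as an example.

    #include <stdio.h>

    /* Hypothetical sketch of the routing decision of the emergency sensing
     * unit 502: below TH2 the trigger goes straight to the garbage collection
     * unit; between TH2 and TH1 it goes to the page counting unit so the
     * workload can be examined first; otherwise nothing happens. */
    enum trigger_target { TRIGGER_NONE, TRIGGER_PAGE_COUNTING_UNIT, TRIGGER_GC_UNIT };

    static enum trigger_target emergency_sensing(unsigned free_blocks,
                                                 unsigned th1, unsigned th2)
    {
        if (free_blocks < th2)
            return TRIGGER_GC_UNIT;            /* unconditional garbage collection */
        if (free_blocks < th1)
            return TRIGGER_PAGE_COUNTING_UNIT; /* start workload detection */
        return TRIGGER_NONE;                   /* enough free blocks, do nothing */
    }

    int main(void)
    {
        printf("%d\n", emergency_sensing(50, 100, 20));  /* case 1 of FIG. 7 */
        printf("%d\n", emergency_sensing(10, 100, 20));  /* case 2 of FIG. 7 */
        return 0;
    }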

In response to the received trigger signal Signal_trig, the page counting unit 504 may count the number of valid pages VPC_before of each of the closed memory blocks included in the memory device 150. The page counting unit 504 may store the number of valid pages VPC_before counted for each of the closed memory blocks as a first valid page count. Each of the closed memory blocks may refer to a memory block in which the number of valid pages is not "0" and which is not an open block on which a program operation is currently being performed.

Furthermore, in response to the received trigger signal Signal_trig, the page counting unit 504 may count the number of pages programmed in the open block, hereinafter referred to as the number of programmed pages ΔPGM. From the time the trigger signal Signal_trig is provided, the page counting unit 504 may increase the number of programmed pages ΔPGM whenever user data is programmed in a page of the open block.

The map update unit 506 may update address information of the user data programmed in the memory blocks. The map update unit 506 may periodically update addresses that change as the original data programmed in a page is corrected and replaced. When the map update operation has been performed, the map update unit 506 may provide a completion signal Signal_complete to the page counting unit 504.

In response to the completion signal Signal_complete, the page counting unit 504 may compare the number of programmed pages ΔPGM counted up to the time the completion signal Signal_complete is provided with a third threshold TH3. When the counted number of programmed pages ΔPGM is greater than the third threshold TH3, the page counting unit 504 may count the number of valid pages VPC_after of each of the closed memory blocks included in the memory device 150.

The page counting unit 504 may store the number of valid pages VPC_after counted for each of the closed memory blocks as a second valid page count. The page counting unit 504 may transfer information info_ΔPGM on the counted number of programmed pages, hereinafter referred to as programmed page count information, to the workload detection unit 510. In addition, the page counting unit 504 may transfer information info_VPC on the stored first and second valid page counts, hereinafter referred to as valid page information, to the valid page reduction amount counting unit 508.

According to an embodiment of the present invention, the page counting unit 504 may count the first valid page count VPC_before in order to determine whether the workload from the host 102 of fig. 1, hereinafter the host workload, is a workload in which user data is programmed predominantly into empty pages, and may count the second valid page count VPC_after only after a predetermined number of program operations or more have been performed. If the second valid page count VPC_after is counted before a sufficient number of program operations have been performed after the first valid page count VPC_before was counted, it is difficult to determine whether the current workload is one in which user data is programmed predominantly into empty pages, even if the sum ΣΔVPC of the valid page reduction amounts is small. For example, suppose the first valid page count VPC_before is counted and the second valid page count VPC_after is counted after only five program operations. Even if, as a result, all the closed memory blocks together lose only one valid page, it is difficult to conclude that the current workload programs user data predominantly into empty pages, because all the user data received after those five program operations may be programmed over valid pages rather than into empty pages.
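
The counting behavior described above might be sketched as follows; the struct layout, the callback names, and the fixed number of closed blocks are illustrative assumptions rather than the patent's actual firmware interfaces.

    #include <stdbool.h>
    #include <string.h>

    #define MAX_CLOSED_BLOCKS 4   /* hypothetical, for illustration only */

    /* Hypothetical sketch of the page counting unit 504: VPC_before is captured
     * when the trigger signal arrives, delta_pgm grows with every programmed
     * page, and VPC_after is captured only once delta_pgm has exceeded TH3 at
     * a map update completion. */
    struct page_counter {
        unsigned vpc_before[MAX_CLOSED_BLOCKS];
        unsigned vpc_after[MAX_CLOSED_BLOCKS];
        unsigned delta_pgm;
        bool     after_captured;
    };

    static void on_trigger(struct page_counter *pc, const unsigned vpc_now[])
    {
        memcpy(pc->vpc_before, vpc_now, sizeof(pc->vpc_before));
        pc->delta_pgm = 0;
        pc->after_captured = false;
    }

    static void on_page_programmed(struct page_counter *pc)
    {
        pc->delta_pgm++;                      /* one more page in the open block */
    }

    static void on_map_update_complete(struct page_counter *pc,
                                       const unsigned vpc_now[], unsigned th3)
    {
        if (pc->delta_pgm > th3) {            /* enough program operations observed */
            memcpy(pc->vpc_after, vpc_now, sizeof(pc->vpc_after));
            pc->after_captured = true;
        }
    }

    int main(void)
    {
        struct page_counter pc;
        unsigned vpc0[MAX_CLOSED_BLOCKS] = { 100, 90, 80, 70 };
        unsigned vpc1[MAX_CLOSED_BLOCKS] = {  98, 90, 75, 70 };

        on_trigger(&pc, vpc0);
        for (int i = 0; i < 600; i++)
            on_page_programmed(&pc);          /* 600 programmed pages > TH3 = 500 */
        on_map_update_complete(&pc, vpc1, 500);
        return pc.after_captured ? 0 : 1;
    }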

Based on the provided valid page information info_VPC, the valid page reduction amount counting unit 508 may calculate a valid page reduction amount ΔVPC for each closed memory block. The valid page reduction amount counting unit 508 may take the difference between the second valid page count VPC_after and the first valid page count VPC_before as the valid page reduction amount ΔVPC.

The valid page reduction amount counting unit 508 may obtain the sum ΣΔVPC of the valid page reduction amounts based on the valid page reduction amount ΔVPC calculated for each of the closed memory blocks; that is, it may obtain the sum of the individual valid page reduction amounts ΔVPC as the sum ΣΔVPC of the valid page reduction amounts. The valid page reduction amount counting unit 508 may transfer information info_ΣΔVPC on the sum of the valid page reduction amounts to the workload detection unit 510.
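
A small sketch of how the per-block reduction ΔVPC and its sum ΣΔVPC could be computed; the array representation and example values are assumptions made for illustration.

    #include <stdio.h>

    /* Hypothetical sketch of the valid page reduction amount counting unit 508:
     * the per-block reduction is VPC_before minus VPC_after, and the sum of
     * those reductions over all closed blocks is sigma_delta_vpc. */
    static unsigned sum_valid_page_reduction(const unsigned vpc_before[],
                                             const unsigned vpc_after[],
                                             unsigned num_closed_blocks)
    {
        unsigned sigma_delta_vpc = 0;

        for (unsigned i = 0; i < num_closed_blocks; i++) {
            unsigned delta_vpc = (vpc_before[i] > vpc_after[i])
                               ? vpc_before[i] - vpc_after[i]
                               : 0;           /* valid pages are never added to a closed block */
            sigma_delta_vpc += delta_vpc;
        }
        return sigma_delta_vpc;
    }

    int main(void)
    {
        const unsigned before[] = { 100, 90, 80 };
        const unsigned after[]  = {  70, 90, 50 };

        printf("sum of valid page reductions: %u\n",
               sum_valid_page_reduction(before, after, 3));  /* prints 60 */
        return 0;
    }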

Based on the provided programmed page count information info_ΔPGM and the information info_ΣΔVPC on the sum of the valid page reduction amounts, the workload detection unit 510 may calculate the ratio of the number of programmed pages ΔPGM to the sum of the valid page reduction amounts ΣΔVPC. The workload detection unit 510 may obtain this ratio by dividing the sum of the valid page reduction amounts ΣΔVPC by the number of programmed pages ΔPGM.

When the ratio of the number of programmed pages ΔPGM to the sum of the valid page reduction amounts ΣΔVPC is greater than or equal to a fourth threshold TH4, the workload detection unit 510 may provide the trigger signal Signal_trig to the garbage collection unit 512. When the ratio is less than the fourth threshold TH4, the workload detection unit 510 may provide the trigger signal Signal_trig to the emergency sensing unit 502. When the number of free blocks is less than the second threshold TH2, the emergency sensing unit 502 may provide the trigger signal Signal_trig to the garbage collection unit 512.
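
A minimal sketch of the decision made by the workload detection unit 510, using the numerical examples of FIG. 8; the function name and the floating-point ratio computation are assumptions introduced for this example.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical sketch of the workload detection unit 510: the document's
     * examples compute the ratio as the sum of valid page reductions divided
     * by the number of programmed pages, and garbage collection is triggered
     * when the ratio reaches the fourth threshold TH4. */
    static bool should_trigger_gc(unsigned delta_pgm, unsigned sigma_delta_vpc,
                                  double th4)
    {
        if (delta_pgm == 0)
            return false;                     /* nothing was programmed, nothing to decide */

        double ratio = (double)sigma_delta_vpc / (double)delta_pgm;
        return ratio >= th4;
    }

    int main(void)
    {
        printf("case 1: %s\n", should_trigger_gc(1000,  50, 0.1) ? "GC" : "skip"); /* 0.05 -> skip */
        printf("case 2: %s\n", should_trigger_gc(1000, 200, 0.1) ? "GC" : "skip"); /* 0.20 -> GC  */
        return 0;
    }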

The garbage collection unit 512 may perform a garbage collection operation on a victim block based on the received trigger signal Signal_trig. According to an embodiment of the present invention, the garbage collection unit 512 may select a memory block whose number of valid pages is less than or equal to a predetermined threshold as the victim block. The garbage collection unit 512 may copy the data programmed in the valid pages of the victim block into the empty pages of a target block.
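
A simplified, purely illustrative sketch of victim selection and the valid-page copy performed during garbage collection; the page-level block representation, the block size, and the victim threshold of two valid pages are hypothetical.

    #include <stdbool.h>
    #include <stdio.h>

    #define PAGES_PER_BLOCK 8   /* hypothetical, for illustration only */

    /* Hypothetical sketch of the garbage collection unit 512: a block whose
     * valid page count is at or below a threshold is chosen as the victim,
     * its valid pages are copied into empty pages of the target block, and
     * the victim pages are invalidated so the block can be erased later. */
    struct block {
        int  data[PAGES_PER_BLOCK];
        bool valid[PAGES_PER_BLOCK];
        bool empty[PAGES_PER_BLOCK];
    };

    static unsigned valid_page_count(const struct block *b)
    {
        unsigned n = 0;
        for (int i = 0; i < PAGES_PER_BLOCK; i++)
            n += b->valid[i] ? 1 : 0;
        return n;
    }

    static void collect_garbage(struct block *victim, struct block *target)
    {
        int t = 0;

        for (int v = 0; v < PAGES_PER_BLOCK; v++) {
            if (!victim->valid[v])
                continue;                      /* skip invalid and empty pages */
            while (t < PAGES_PER_BLOCK && !target->empty[t])
                t++;                           /* find the next empty target page */
            if (t == PAGES_PER_BLOCK)
                break;                         /* target block is full */
            target->data[t]  = victim->data[v]; /* copy the valid data */
            target->valid[t] = true;
            target->empty[t] = false;
            victim->valid[v] = false;          /* victim page becomes invalid */
        }
    }

    int main(void)
    {
        struct block victim = { { 1, 2, 3, 4, 0, 0, 0, 0 },
                                { true, false, true, false, false, false, false, false },
                                { false, false, false, false, true, true, true, true } };
        struct block target = { { 0 }, { false }, { false } };
        for (int i = 0; i < PAGES_PER_BLOCK; i++) target.empty[i] = true;

        if (valid_page_count(&victim) <= 2)    /* hypothetical victim threshold */
            collect_garbage(&victim, &target);

        printf("victim valid pages after GC: %u\n", valid_page_count(&victim)); /* 0 */
        printf("target valid pages after GC: %u\n", valid_page_count(&target)); /* 2 */
        return 0;
    }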

FIG. 6 is a flow chart illustrating the operation of the memory system 110 according to an embodiment of the invention.

Referring to fig. 6, at step S602, the emergency sensing unit 502 of fig. 5 may count the number of free blocks (i.e., # Free BLK) and compare it with the first threshold TH1. When the number of free blocks is greater than or equal to the first threshold TH1 (NO at step S602), the emergency sensing unit 502 may continue to compare the number of free blocks with the first threshold TH1.

At step S604, when the number of free blocks is less than the first threshold TH1 (YES at step S602), the emergency sensing unit 502 may compare the number of free blocks with the second threshold TH2. When the number of free blocks is less than the second threshold TH2 (YES at step S604), the emergency sensing unit 502 may provide the trigger signal Signal_trig to the garbage collection unit 512. When the number of free blocks is greater than or equal to the second threshold TH2 (NO at step S604), the emergency sensing unit 502 may provide the trigger signal Signal_trig to the page counting unit 504. At step S624, the garbage collection unit 512 may perform a garbage collection operation on the victim block based on the trigger signal Signal_trig provided at step S604. This will be described later.

Fig. 7 illustrates a garbage collection operation based on a first threshold and a second threshold.

As described above, according to an embodiment of the present invention, a free block may be a block in which the number of empty pages is greater than or equal to a predetermined threshold. For example, when the number of empty pages included in a specific memory block is 100 or more, the memory block may be a free block.

Referring to case 1 of fig. 7, suppose the number of free blocks (# Free BLK) 701 to 750 included in the memory device 150 is 50, the first threshold (i.e., 1st TH) TH1 is 100, and the second threshold (i.e., 2nd TH) TH2 is 20. Because the number of free blocks 701 to 750 is less than the first threshold TH1 and greater than or equal to the second threshold TH2, the emergency sensing unit 502 may provide the trigger signal Signal_trig to the page counting unit 504 to determine whether to perform a garbage collection operation based on the ratio of the number of programmed pages ΔPGM to the sum of the valid page reduction amounts ΣΔVPC, which will be described later.

In case 2 of fig. 7, the number of free blocks (# Free BLK) 751 to 760 included in the memory device 150 is 10, the first threshold TH1 is 100, and the second threshold TH2 is 20. Since the number of free blocks 751 to 760 is less than the second threshold TH2, the emergency sensing unit 502 may provide the trigger signal Signal_trig to the garbage collection unit 512 so that a garbage collection operation is performed.

According to an embodiment of the present invention, even when the number of free blocks is less than the first threshold TH1, an unconditional garbage collection operation is not necessarily performed. Instead, when the current workload is determined, based on the ratio of the number of programmed pages ΔPGM to the sum of the valid page reduction amounts ΣΔVPC, to be a workload in which user data is programmed predominantly into empty pages, the garbage collection operation may be skipped so that the performance of foreground operations is maintained, which will be described later. However, when the number of free blocks is less than the second threshold TH2, reclaiming memory space has the highest priority, and memory space may be obtained by performing an unconditional garbage collection operation.

Referring back to fig. 6, at step S606, in response to the trigger signal Signal_trig provided at step S604, the page counting unit 504 may count the number of valid pages VPC_before of each of the closed memory blocks included in the memory device 150. The page counting unit 504 may store the number of valid pages VPC_before counted for each of the closed memory blocks as the first valid page count.

At step S608, in response to the trigger signal Signal_trig provided at step S604, the page counting unit 504 may count the number of programmed pages ΔPGM, which is the number of pages in the open block in which user data has been programmed. From the time the trigger signal Signal_trig is provided at step S604, the page counting unit 504 may increase the number of programmed pages ΔPGM whenever user data is programmed into a page of the open block.

At step S610, the map update unit 506 may update address information of the user data programmed in the memory blocks. The map update unit 506 may periodically update addresses that change as the original data programmed in a page is corrected and replaced. When the map update operation has been performed on all memory blocks included in the memory device 150, the map update unit 506 may provide the completion signal Signal_complete to the page counting unit 504.

At step S612, in response to the completion signal Signal_complete provided at step S610, the page counting unit 504 may compare the number of programmed pages ΔPGM with the third threshold TH3. The number of programmed pages ΔPGM is counted from the time the trigger signal Signal_trig is provided at step S604 until the completion signal Signal_complete is provided at step S610. When the number of programmed pages counted at step S612 is less than or equal to the third threshold TH3 (NO at step S612), the flow may return to step S608, and the program operation and the map update operation are repeated until the number of programmed pages ΔPGM in the open block exceeds the third threshold TH3.

At step S614, when the number of programmed pages ΔPGM counted at step S612 is greater than the third threshold TH3 (YES at step S612), the page counting unit 504 may count the number of valid pages VPC_after of each of the closed memory blocks included in the memory device 150. The page counting unit 504 may store the number of valid pages VPC_after counted for each of the closed memory blocks as the second valid page count. The page counting unit 504 may transfer the programmed page count information info_ΔPGM to the workload detection unit 510 and the valid page information info_VPC to the valid page reduction amount counting unit 508.

At step S616, based on the valid page information info_VPC provided at step S614, the valid page reduction amount counting unit 508 may calculate the valid page reduction amount ΔVPC for each closed memory block. The valid page reduction amount counting unit 508 may take the difference between the second valid page count VPC_after and the first valid page count VPC_before as the valid page reduction amount ΔVPC.

At step S618, the valid page reduction amount counting unit 508 may obtain the sum ΣΔVPC of the valid page reduction amounts based on the valid page reduction amounts ΔVPC calculated at step S616; that is, it may obtain the sum of the valid page reduction amounts ΔVPC calculated for each of the closed memory blocks as the sum ΣΔVPC. The valid page reduction amount counting unit 508 may transfer the information info_ΣΔVPC on the sum of the valid page reduction amounts to the workload detection unit 510.

At step S620, based on the programmed page count information info_ΔPGM provided at step S614 and the information info_ΣΔVPC on the sum of the valid page reduction amounts provided at step S618, the workload detection unit 510 may calculate the ratio of the number of programmed pages ΔPGM to the sum of the valid page reduction amounts ΣΔVPC. The workload detection unit 510 may obtain this ratio by dividing the sum of the valid page reduction amounts ΣΔVPC by the number of programmed pages ΔPGM.

At step S622, when the ratio of the number of programmed pages ΔPGM to the sum of the valid page reduction amounts ΣΔVPC obtained at step S620 is greater than or equal to the fourth threshold TH4 (NO at step S622), the workload detection unit 510 may provide the trigger signal Signal_trig to the garbage collection unit 512. When the ratio of the number of programmed pages ΔPGM to the sum of the valid page reduction amounts ΣΔVPC is less than the fourth threshold TH4 (YES at step S622), the flow may return to step S604, and the trigger signal Signal_trig is provided to the garbage collection unit 512 only when the number of free blocks is less than the second threshold TH2.

Fig. 8 illustrates a garbage collection operation based on the third threshold and the fourth threshold.

In case 1 of fig. 8, suppose the third threshold TH3 and the fourth threshold TH4 (i.e., 3rd TH and 4th TH) are 500 and 0.1, respectively, and the number of programmed pages ΔPGM and the sum of the valid page reduction amounts ΣΔVPC are 1000 and 50, respectively. The ratio of the number of programmed pages ΔPGM to the sum of the valid page reduction amounts ΣΔVPC is then 0.05, which is less than the fourth threshold TH4. The workload detection unit 510 may therefore detect the current workload as one in which user data is programmed predominantly into empty pages and maintain the performance of foreground operations by skipping the garbage collection operation.

In case 2 of fig. 8, suppose the third threshold TH3 and the fourth threshold TH4 are 500 and 0.1, respectively, and the number of programmed pages ΔPGM and the sum of the valid page reduction amounts ΣΔVPC are 1000 and 200, respectively. The ratio of the number of programmed pages ΔPGM to the sum of the valid page reduction amounts ΣΔVPC is then 0.2, which is greater than the fourth threshold TH4. The workload detection unit 510 may therefore provide the trigger signal Signal_trig to the garbage collection unit 512 so that a garbage collection operation is performed.

Referring back to fig. 6, at step S624, the garbage collection unit 512 may perform a garbage collection operation on the victim block according to the trigger signal Signal_trig provided at step S604 or S622. The garbage collection unit 512 may perform the garbage collection operation by copying the data programmed in the valid pages of the victim block into the empty pages of the target block and reclaiming the memory space of the victim block.
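
Tying the steps of fig. 6 together, the overall decision of whether to perform garbage collection might be condensed as follows; the policy struct, the function name, and the assumption that the counters have already been gathered by the earlier steps are illustrative and not the patent's actual interfaces.

    #include <stdbool.h>

    /* Hypothetical end-to-end sketch of the flow of FIG. 6 (steps S602 to S624). */
    struct gc_policy {
        unsigned th1, th2, th3;   /* free-block and programmed-page thresholds */
        double   th4;             /* ratio threshold                           */
    };

    /* Returns true when a garbage collection operation should be performed. */
    static bool decide_garbage_collection(const struct gc_policy *p,
                                          unsigned free_blocks,
                                          unsigned delta_pgm,
                                          unsigned sigma_delta_vpc)
    {
        if (free_blocks >= p->th1)
            return false;                      /* S602: enough free blocks          */
        if (free_blocks < p->th2)
            return true;                       /* S604: unconditional GC            */
        if (delta_pgm <= p->th3)
            return false;                      /* S612: keep counting, decide later */
        /* S620/S622: ratio of reduced valid pages to programmed pages vs. TH4 */
        return (double)sigma_delta_vpc / (double)delta_pgm >= p->th4;
    }

    int main(void)
    {
        const struct gc_policy policy = { 100, 20, 500, 0.1 };

        /* 50 free blocks, 1000 programmed pages, 50 reduced valid pages: skip GC. */
        return decide_garbage_collection(&policy, 50, 1000, 50) ? 1 : 0;
    }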

When the number of free blocks is less than the first threshold TH1, a memory system according to embodiments of the present invention may refrain from performing an unconditional garbage collection operation and instead dynamically change the frequency of performing garbage collection operations according to the ratio of the number of programmed pages ΔPGM to the sum of the valid page reduction amounts ΣΔVPC.

According to an embodiment of the present invention, when the ratio of the number of programmed pages ΔPGM to the sum of the valid page reduction amounts ΣΔVPC is less than the fourth threshold TH4, the memory system may determine that the current workload programs data into empty pages without increasing the number of invalid pages, and may maintain the performance of foreground operations even when the number of free blocks is insufficient by skipping garbage collection operations.

Hereinafter, a data processing system and an electronic device to which the memory system 110 including the memory device 150 and the controller 130 described above by referring to fig. 1 to 8 according to an embodiment of the present invention may be applied will be described in detail with reference to fig. 9 to 17.

Fig. 9 is a diagram showing another example of a data processing system including a memory system according to the embodiment. For example, fig. 9 shows a memory card system 6100 to which the memory system is applicable.

Referring to fig. 9, a memory card system 6100 may include a memory controller 6120, a memory device 6130, and a connector 6110.

More specifically, the memory controller 6120 may be electrically connected to a memory device 6130 implemented by a non-volatile memory (NVM), and the memory controller 6120 may be configured to access the memory device 6130 implemented by the non-volatile memory (NVM). For example, the memory controller 6120 may be configured to control read operations, write operations, erase operations, and background operations of the memory device 6130. The memory controller 6120 may be configured to provide an interface between the memory device 6130 and a host, and use firmware to control the memory device 6130. That is, the memory controller 6120 may correspond to the controller 130 of the memory system 110 described with reference to fig. 1, and the memory device 6130 may correspond to the memory device 150 of the memory system 110 described with reference to fig. 1.

Thus, the memory controller 6120 may include Random Access Memory (RAM), a processor, a host interface, a memory interface, and error correction components.

The memory controller 6120 may communicate with an external device, such as the host 102 of fig. 1, through the connector 6110. For example, as described with reference to fig. 1, the memory controller 6120 may be configured to communicate with external devices through one or more of a variety of communication protocols, such as: Universal Serial Bus (USB), multimedia card (MMC), embedded MMC (eMMC), Peripheral Component Interconnect (PCI), PCI express (PCIe), Advanced Technology Attachment (ATA), serial ATA, parallel ATA, Small Computer System Interface (SCSI), Enhanced Small Disk Interface (ESDI), Integrated Drive Electronics (IDE), Firewire, Universal Flash Storage (UFS), wireless fidelity (Wi-Fi or WiFi), and Bluetooth. Therefore, the memory system and the data processing system according to the present embodiment may be applied to wired/wireless electronic devices or dedicated mobile electronic devices.

The memory device 6130 may be implemented by a non-volatile memory (NVM). For example, the memory device 6130 may be implemented with any of a variety of non-volatile memory devices, such as: erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), NAND flash memory, NOR flash memory, phase change RAM (PRAM), resistive RAM (ReRAM), Ferroelectric RAM (FRAM), and spin transfer torque magnetic RAM (STT-RAM).

The memory controller 6120 and the memory device 6130 may be integrated into a single semiconductor device to form a Solid State Drive (SSD). Also, the memory controller 6120 and the memory device 6130 may be integrated in the same way to form a memory card such as: a PC card (Personal Computer Memory Card International Association (PCMCIA)), a Compact Flash (CF) card, a smart media card (e.g., SM and SMC), a memory stick, a multimedia card (e.g., MMC, RS-MMC, micro MMC, and eMMC), a Secure Digital (SD) card (e.g., SD, mini SD, micro SD, and SDHC), and/or a Universal Flash Storage (UFS) device.

Fig. 10 is a diagram illustrating another example of a data processing system 6200 including a memory system according to an embodiment.

Referring to fig. 10, data processing system 6200 may include a memory device 6230 having one or more non-volatile memories (NVMs) and a memory controller 6220 for controlling memory device 6230. As described with reference to fig. 1, the data processing system 6200 shown in fig. 10 may be used as a storage medium such as a memory card (e.g., CF, SD, micro-SD, etc.) or a USB device. The memory device 6230 may correspond to the memory device 150 in the memory system 110 shown in fig. 1, and the memory controller 6220 may correspond to the controller 130 in the memory system 110 shown in fig. 1.

The memory controller 6220 may control a read operation, a write operation, or an erase operation on the memory device 6230 in response to a request of the host 6210. The memory controller 6220 may include one or more Central Processing Units (CPUs) 6221, a buffer memory such as a Random Access Memory (RAM) 6222, Error Correction Code (ECC) circuitry 6223, a host interface 6224, and a memory interface such as a non-volatile memory (NVM) interface 6225.

The CPU 6221 may control overall operations on the memory device 6230, such as a read operation, a write operation, a file system management operation, and a bad page management operation. The RAM 6222 may operate under the control of the CPU 6221 and may function as a work memory, a buffer memory, or a cache memory. When the RAM 6222 is used as a working memory, data processed by the CPU 6221 may be temporarily stored in the RAM 6222. When the RAM 6222 is used as a buffer memory, it may buffer data transferred from the host 6210 to the memory device 6230, or from the memory device 6230 to the host 6210. When the RAM 6222 is used as a cache memory, it may help the low-speed memory device 6230 operate at high speed.

The ECC circuit 6223 may correspond to the ECC unit 138 of the controller 130 shown in fig. 1. As described with reference to fig. 1, the ECC circuit 6223 may generate an Error Correction Code (ECC) for correcting a failed bit or an error bit of data provided from the memory device 6230. The ECC circuit 6223 may perform error correction encoding on data provided to the memory device 6230, thereby forming data with parity bits. The parity bits may be stored in the memory device 6230. The ECC circuit 6223 may perform error correction decoding on data output from the memory device 6230, and may correct errors using the parity bits. For example, as described with reference to fig. 1, the ECC circuit 6223 may correct errors using a Low Density Parity Check (LDPC) code, a Bose-Chaudhuri-Hocquenghem (BCH) code, a turbo code, a Reed-Solomon (RS) code, a convolutional code, a Recursive Systematic Code (RSC), or a coded modulation such as Trellis Coded Modulation (TCM) or Block Coded Modulation (BCM).
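As a rough picture of the encode-on-write / check-on-read flow only, the sketch below appends a single XOR parity byte and re-checks it. This is a deliberately simplified stand-in, not one of the codes listed above (LDPC, BCH, RS, and so on), and it only detects rather than corrects errors; the function names are assumptions.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Compute one XOR parity byte over a data buffer. */
static uint8_t xor_parity(const uint8_t *data, size_t len)
{
    uint8_t p = 0;
    for (size_t i = 0; i < len; i++)
        p ^= data[i];
    return p;
}

/* On programming: derive the parity to be stored alongside the data. */
static uint8_t ecc_encode(const uint8_t *data, size_t len)
{
    return xor_parity(data, len);
}

/* On reading: recompute the parity and compare it with the stored value.
 * Returns true when no error is detected. */
static bool ecc_check(const uint8_t *data, size_t len, uint8_t stored_parity)
{
    return xor_parity(data, len) == stored_parity;
}
```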

The memory controller 6220 may transmit/receive data to/from the host 6210 through the host interface 6224. Memory controller 6220 may transmit/receive data to/from memory device 6230 through NVM interface 6225. The host interface 6224 may be connected to the host 6210 by a Parallel Advanced Technology Attachment (PATA) bus, a Serial Advanced Technology Attachment (SATA) bus, a Small Computer System Interface (SCSI), a Universal Serial Bus (USB), a peripheral component interconnect express (PCIe), or a NAND interface. The memory controller 6220 may have a wireless communication function using a mobile communication protocol such as wireless fidelity (WiFi) or Long Term Evolution (LTE). The memory controller 6220 may connect to an external device such as the host 6210 or other external device, and then transmit/receive data to/from the external device. In particular, since the memory controller 6220 is configured to communicate with an external device according to one or more of various communication protocols, the memory system and the data processing system according to the embodiment may be applied to wired/wireless electronic devices, particularly mobile electronic devices.

Fig. 11 is a diagram showing another example of a data processing system including a memory system according to the embodiment. For example, fig. 11 shows a Solid State Drive (SSD)6300 to which the memory system is applicable.

Referring to fig. 11, the SSD 6300 may include a controller 6320 and a memory device 6340 including a plurality of nonvolatile memories (NVMs). The controller 6320 may correspond to the controller 130 in the memory system 110 of fig. 1, and the memory device 6340 may correspond to the memory device 150 in the memory system of fig. 1.

More specifically, the controller 6320 may be connected to the memory device 6340 through a plurality of channels CH1 through CHi. The controller 6320 may include one or more processors 6321, Error Correction Code (ECC) circuitry 6322, a host interface 6324, a buffer memory 6325, and a memory interface such as a non-volatile memory interface 6326.

The buffer memory 6325 may temporarily store data supplied from the host 6310 and data supplied from the plurality of flash memories NVM included in the memory device 6340. Further, the buffer memory 6325 may temporarily store metadata of the plurality of flash memories NVM, for example, mapping data including a mapping table. The buffer memory 6325 may be implemented by any of various volatile memories such as Dynamic Random Access Memory (DRAM), Synchronous DRAM (SDRAM), Double Data Rate (DDR) SDRAM, low power DDR (LPDDR) SDRAM, and graphics RAM (GRAM), or various non-volatile memories such as ferroelectric RAM (FRAM), resistive RAM (RRAM or ReRAM), spin transfer torque magnetic RAM (STT-MRAM), and phase change RAM (PRAM). Fig. 11 illustrates the buffer memory 6325 implemented inside the controller 6320. However, the buffer memory 6325 may also be located outside the controller 6320.
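The mapping table held in such a buffer memory is essentially a logical-to-physical (L2P) translation table. The sketch below shows the basic update/lookup pattern; the table size, the names l2p_table, map_init, map_update, and map_lookup, and the UNMAPPED marker are assumptions introduced only for illustration.

```c
#include <stdint.h>

#define NUM_LOGICAL_PAGES 1024
#define UNMAPPED          0xFFFFFFFFu

/* Simplified L2P mapping table of the kind a buffer memory may cache. */
static uint32_t l2p_table[NUM_LOGICAL_PAGES];

/* Mark every logical page as unmapped before the table is used. */
static void map_init(void)
{
    for (uint32_t lpn = 0; lpn < NUM_LOGICAL_PAGES; lpn++)
        l2p_table[lpn] = UNMAPPED;
}

/* Record the physical page into which a logical page was programmed. */
static void map_update(uint32_t lpn, uint32_t ppn)
{
    if (lpn < NUM_LOGICAL_PAGES)
        l2p_table[lpn] = ppn;
}

/* Translate a logical page number into its current physical page number. */
static uint32_t map_lookup(uint32_t lpn)
{
    return (lpn < NUM_LOGICAL_PAGES) ? l2p_table[lpn] : UNMAPPED;
}
```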

The ECC circuit 6322 may calculate an Error Correction Code (ECC) value of data to be programmed to the memory device 6340 during a program operation, perform an error correction operation on data read from the memory device 6340 based on the ECC value during a read operation, and perform an error correction operation on data recovered from the memory device 6340 during a fail data recovery operation.

The host interface 6324 may provide an interface function with an external device such as the host 6310, and the nonvolatile memory interface 6326 may provide an interface function with the memory device 6340 connected through a plurality of channels.

Further, a plurality of SSDs 6300 to which the memory system 110 of fig. 1 can be applied may be provided to implement a data processing system such as a Redundant Array of Independent Disks (RAID) system. The RAID system may include a plurality of SSDs 6300 and a RAID controller for controlling the plurality of SSDs 6300. When the RAID controller performs a programming operation in response to a write instruction provided from the host 6310, the RAID controller may select one or more memory systems or SSDs 6300 from the SSDs 6300 according to a plurality of RAID levels, that is, RAID level information of the write instruction provided from the host 6310, and output data corresponding to the write instruction to the selected SSDs 6300. Further, when the RAID controller performs a read operation in response to a read instruction provided from the host 6310, the RAID controller may select one or more memory systems or SSDs 6300 from the SSDs 6300 according to a plurality of RAID levels, that is, RAID level information of the read instruction provided from the host 6310, and provide the host 6310 with data read from the selected SSDs 6300.
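For the simplest striping case, selecting which SSD of the array receives a given write can be pictured as below. This RAID-0-style routine is a hypothetical illustration only; parity or mirroring levels such as RAID 1/5/6 are not shown, and the function name is an assumption.

```c
#include <stddef.h>

/* Hypothetical RAID-0-style selection: stripe logical blocks round-robin
 * across the n_ssd drives of the array, so consecutive blocks land on
 * different SSDs. */
static size_t select_ssd_raid0(size_t logical_block, size_t n_ssd)
{
    return logical_block % n_ssd;
}
```

With four SSDs, for example, logical blocks 0, 1, 2, and 3 would go to drives 0, 1, 2, and 3, and block 4 would wrap back to drive 0.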

Fig. 12 is a diagram showing another example of a data processing system including a memory system according to the embodiment. For example, fig. 12 shows an embedded multimedia card (eMMC)6400 to which the memory system is applicable.

Referring to fig. 12, the eMMC 6400 may include a controller 6430 and a memory device 6440 implemented by one or more NAND flash memories. The controller 6430 may correspond to the controller 130 in the memory system 110 of fig. 1. The memory device 6440 may correspond to the memory device 150 in the memory system 110 of fig. 1.

More specifically, the controller 6430 may be connected to the memory device 6440 through a plurality of channels. The controller 6430 may include one or more cores 6432, a host interface 6431, and a memory interface such as a NAND interface 6433.

The core 6432 may control the overall operation of the eMMC 6400, the host interface 6431 may provide an interface function between the controller 6430 and the host 6410, and the NAND interface 6433 may provide an interface function between the memory device 6440 and the controller 6430. For example, the host interface 6431 may serve as a parallel interface, such as an MMC interface, as described with reference to fig. 1. In addition, the host interface 6431 may serve as a serial interface, such as an ultra high speed (UHS)-I/UHS-II interface.

Fig. 13 to 16 are diagrams showing other examples of a data processing system including a memory system according to an embodiment. For example, fig. 13 to 16 show a Universal Flash Storage (UFS) system to which the memory system is applicable.

Referring to fig. 13-16, UFS systems 6500, 6600, 6700, 6800 may include hosts 6510, 6610, 6710, 6810, UFS devices 6520, 6620, 6720, 6820, and UFS cards 6530, 6630, 6730, 6830, respectively. Hosts 6510, 6610, 6710, 6810 can function as application processors for wired/wireless electronic devices or, in particular, mobile electronic devices, UFS devices 6520, 6620, 6720, 6820 can function as embedded UFS devices, and UFS cards 6530, 6630, 6730, 6830 can function as external embedded UFS devices or removable UFS cards.

Hosts 6510, 6610, 6710, 6810 in each UFS system 6500, 6600, 6700, 6800, UFS devices 6520, 6620, 6720, 6820, and UFS cards 6530, 6630, 6730, 6830 may communicate with external devices such as wired/wireless electronic devices or, in particular, mobile electronic devices through the UFS protocol, and UFS devices 6520, 6620, 6720, 6820 and UFS cards 6530, 6630, 6730, 6830 may be implemented by memory system 110 shown in fig. 1. For example, in UFS systems 6500, 6600, 6700, 6800, UFS devices 6520, 6620, 6720, 6820 may be implemented in the form of a data processing system 6200, SSD6300, or eMMC 6400 described with reference to fig. 10 to 12, and UFS cards 6530, 6630, 6730, 6830 may be implemented in the form of a memory card system 6100 described with reference to fig. 9.

Furthermore, in the UFS systems 6500, 6600, 6700, 6800, the hosts 6510, 6610, 6710, 6810, the UFS devices 6520, 6620, 6720, 6820, and the UFS cards 6530, 6630, 6730, 6830 may communicate with each other through UFS interfaces such as MIPI M-PHY and MIPI unified protocol (UniPro) in the Mobile Industry Processor Interface (MIPI). Further, the UFS devices 6520, 6620, 6720, 6820 and the UFS cards 6530, 6630, 6730, 6830 may communicate with each other through any of a variety of protocols other than the UFS protocol, such as: Universal Serial Bus (USB), USB flash drive (UFD), multimedia card (MMC), Secure Digital (SD), mini-SD, and micro-SD.

In UFS system 6500 shown in fig. 13, each of host 6510, UFS device 6520, and UFS card 6530 may comprise UniPro. Host 6510 may perform a swap operation to communicate with UFS device 6520 and UFS card 6530. In particular, host 6510 may communicate with UFS device 6520 or UFS card 6530 through a link layer exchange, such as an L3 exchange at UniPro. UFS device 6520 and UFS card 6530 may communicate with each other through link layer exchanges at UniPro of host 6510. In the illustrated embodiment, one UFS device 6520 and one UFS card 6530 are connected to a host 6510. However, multiple UFS devices and multiple UFS cards may be connected to host 6510 in parallel or in a star. The star format is an arrangement that couples a single device with multiple devices for centralized operation. Multiple UFS cards may be connected in parallel or in a star to UFS device 6520, or in series or in a chain to UFS device 6520.

In UFS system 6600 shown in fig. 14, each of host 6610, UFS device 6620, and UFS card 6630 may include UniPro. Host 6610 may communicate with UFS device 6620 or UFS card 6630 through a switching module 6640 that performs switching operations, e.g., through switching module 6640 that performs link layer switching at UniPro, e.g., an L3 switch. UFS device 6620 and UFS card 6630 may communicate with each other through a link layer exchange of exchange module 6640 at UniPro. In the illustrated embodiment, one UFS device 6620 and one UFS card 6630 are connected to a switching module 6640. However, multiple UFS devices and multiple UFS cards may be connected to switch module 6640 in parallel or in a star. Multiple UFS cards may be connected in series or in a chain to UFS device 6620.

In UFS system 6700 shown in fig. 15, each of host 6710, UFS device 6720, and UFS card 6730 may comprise UniPro. Host 6710 may communicate with UFS device 6720 or UFS card 6730 through switching module 6740 that performs switching operations, e.g., through switching module 6740 that performs link layer switching at UniPro, e.g., L3 switching. UFS device 6720 and UFS card 6730 may communicate with each other through a link layer exchange of exchange module 6740 at UniPro. The switching module 6740 may be integrated with the UFS device 6720 as one module, either inside or outside the UFS device 6720. In the illustrated embodiment, one UFS device 6720 and one UFS card 6730 are connected to a switching module 6740. However, a plurality of modules each including a switching module 6740 and a UFS device 6720 may be connected in parallel or in a star to the host 6710. In another example, multiple modules may be connected to each other in series or in a chain. Furthermore, multiple UFS cards may be connected in parallel or in a star to UFS device 6720.

In UFS system 6800 shown in fig. 16, each of host 6810, UFS device 6820, and UFS card 6830 may include an M-PHY and UniPro. UFS device 6820 may perform a switching operation to communicate with host 6810 and UFS card 6830. In particular, UFS device 6820 may communicate with host 6810 or UFS card 6830 through a switching operation between the M-PHY and UniPro modules used to communicate with host 6810 and the M-PHY and UniPro modules used to communicate with UFS card 6830, e.g., through a target Identifier (ID) switching operation. Host 6810 and UFS card 6830 can communicate with each other through target ID switching between the M-PHY and UniPro modules of UFS device 6820. In the illustrated embodiment, one UFS device 6820 is connected to host 6810, and one UFS card 6830 is connected to UFS device 6820. However, multiple UFS devices may be connected to host 6810 in parallel or in a star, or in series or in a chain to host 6810. Multiple UFS cards may be connected to UFS device 6820 in parallel or in a star, or in series or in a chain to UFS device 6820.

FIG. 17 is a diagram illustrating another example of a data processing system including a memory system according to an embodiment of the present invention. For example, fig. 17 is a diagram showing a user system 6900 to which the memory system can be applied.

Referring to fig. 17, the user system 6900 may include a user interface 6910, a memory module 6920, an application processor 6930, a network module 6940, and a storage module 6950.

More specifically, the application processor 6930 may drive the components included in the user system 6900 and an operating system (OS), and may include controllers, interfaces, and a graphics engine that control those components. The application processor 6930 may be provided as a system-on-chip (SoC).

The memory module 6920 may serve as a main memory, working memory, buffer memory, or cache memory for the user system 6900. The memory module 6920 may include volatile RAM such as dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate (DDR) SDRAM, DDR2 SDRAM, DDR3 SDRAM, low power DDR (LPDDR) SDRAM, LPDDR2 SDRAM, or LPDDR3 SDRAM, or non-volatile RAM such as phase change RAM (PRAM), resistive RAM (ReRAM), magnetoresistive RAM (MRAM), or ferroelectric RAM (FRAM). For example, the application processor 6930 and the memory module 6920 may be packaged and mounted on a package-on-package (PoP) basis.

The network module 6940 may communicate with external devices. For example, the network module 6940 may support not only wired communication but also various wireless communication protocols such as: Code Division Multiple Access (CDMA), Global System for Mobile communications (GSM), wideband CDMA (WCDMA), CDMA-2000, Time Division Multiple Access (TDMA), Long Term Evolution (LTE), Worldwide Interoperability for Microwave Access (WiMAX), Wireless Local Area Network (WLAN), Ultra Wideband (UWB), Bluetooth, and wireless display (WiDi), so as to communicate with wired/wireless electronic devices, particularly mobile electronic devices. Accordingly, the memory system and the data processing system according to the embodiment of the present invention may be applied to wired/wireless electronic devices. The network module 6940 may be included in the application processor 6930.

The storage module 6950 may store data, such as data received from the application processor 6930, and may then transmit the stored data to the application processor 6930. The storage module 6950 may be implemented by a nonvolatile semiconductor memory device such as phase change RAM (PRAM), magnetic RAM (MRAM), resistive RAM (ReRAM), NAND flash memory, NOR flash memory, or 3D NAND flash memory, and may be provided as a removable storage medium such as a memory card or an external drive of the user system 6900. The storage module 6950 may correspond to the memory system 110 described with reference to fig. 1. Further, the storage module 6950 may be implemented as the SSD, eMMC, or UFS described above with reference to fig. 11 to 16.

The user interface 6910 may comprise an interface for inputting data or instructions to the application processor 6930 or outputting data to an external device. For example, the user interface 6910 may include a user input interface such as: keyboards, keypads, keys, touch panels, touch screens, touch pads, touch balls, cameras, microphones, gyroscope sensors, vibration sensors, and piezoelectric elements, as well as user output interfaces such as: liquid Crystal Displays (LCDs), Organic Light Emitting Diode (OLED) display devices, active matrix OLED (amoled) display devices, LEDs, speakers, and monitors.

Further, when the memory system 110 of fig. 1 is applied to a mobile electronic device of the user system 6900, the application processor 6930 may control the overall operation of the mobile electronic device, and the network module 6940 may serve as a communication module for controlling wired/wireless communication with an external device. The user interface 6910 may display data processed by the application processor 6930 on a display/touch module of the mobile electronic device or may support a function of receiving data from the touch panel.

According to embodiments of the present invention, a memory system may improve the execution of foreground operations by dynamically changing the periodicity of garbage collection operations based on host workload.

Although the present invention has been described with respect to specific embodiments, it will be apparent to those skilled in the art that various changes and modifications may be made without departing from the technical spirit and scope of the present invention as defined in the claims.
