Memory system, memory controller, and method of operating memory system

Document No. 135407 · Publication date: 2021-10-22

Note: This technology, "Memory system, memory controller, and method of operating memory system," was created by Ding Ren (丁仁) on 2020-09-25. Abstract: Embodiments of the present disclosure relate to a memory system, a memory controller, and a method of operating the memory system. According to an embodiment of the present disclosure, before updating a mapping table including mapping information between logical addresses and physical addresses, the memory system may allocate a portion of a map cache region that caches a plurality of map segments of the mapping table as a map update region for updating the mapping table, and may load a subset of the plurality of map segments into the map update region. Therefore, the mapping table can be updated quickly, and its update performance optimized, while the cache performance is guaranteed to remain at or above a predetermined level.

1. A memory system, comprising:

a memory device; and

a memory controller coupled to the memory device and configured to:

allocate, before updating a mapping table including mapping information between logical addresses and physical addresses, a portion of a map cache region that caches a plurality of map segments of the mapping table as a map update region for updating the mapping table; and

load a subset of the plurality of map segments into the map update region.

2. The memory system according to claim 1, wherein the memory controller allocates a portion of the map cache region as the map update region when a number of writable pages in an open memory block of the memory device, the open memory block processing write commands received from a host, is less than or equal to a threshold number of pages.

3. The memory system according to claim 2, wherein a sub-region in the map update region is a sub-region in which no map segment is cached, or a sub-region caching any one of N least recently used map segments whose hit count during a first time period is less than a threshold hit count, where N is a natural number.

4. The memory system according to claim 2, wherein the memory controller loads a target map segment into the map update region, the target map segment being a map segment, among the plurality of map segments, that includes a logical address of data written to the open memory block.

5. The memory system according to claim 4, wherein the memory controller manages a list of the target map segments in the form of an array.

6. The memory system according to claim 1, wherein, when a ratio of write commands received from a host to all commands within a second time period is greater than or equal to a threshold ratio, the memory controller changes a size of the map update region based on a result of comparing a hit rate of the map cache region with a reference hit rate.

7. The memory system according to claim 6, wherein the reference hit rate changes in proportion to the size of the map update region.

8. The memory system according to claim 6, wherein the memory controller increases the size of the map update region when the hit rate of the map cache region is greater than or equal to the reference hit rate during a third time period.

9. The memory system according to claim 6, wherein the memory controller decreases the size of the map update region when the hit rate of the map cache region is less than the reference hit rate during a third time period.

10. A memory controller, comprising:

a memory interface in communication with a memory device; and

a control circuit coupled to the memory device and configured to:

allocate, before updating a mapping table including mapping information between logical addresses and physical addresses, a portion of a map cache region that caches a plurality of map segments of the mapping table as a map update region for updating the mapping table; and

load a subset of the plurality of map segments into the map update region.

11. The memory controller of claim 10, wherein the control circuit allocates a portion of the map cache region as the map update region when a number of writable pages in an open memory block of the memory device, the open memory block processing write commands received from a host, is less than or equal to a threshold number of pages.

12. The memory controller of claim 11, wherein a sub-region in the map update region is a sub-region in which no map segment is cached, or a sub-region caching any one of N least recently used map segments whose hit count during a first time period is less than a threshold hit count, where N is a natural number.

13. The memory controller of claim 11, wherein the control circuit loads a target map segment into the map update region, the target map segment being a map segment, among the plurality of map segments, that includes a logical address of data written to the open memory block.

14. The memory controller of claim 13, wherein the control circuit manages a list of the target map segments in the form of an array.

15. The memory controller of claim 10, wherein, when a ratio of write commands received from a host to all commands within a second time period is greater than or equal to a threshold ratio, the control circuit changes a size of the map update region based on a result of comparing a hit rate of the map cache region with a reference hit rate.

16. The memory controller of claim 15, wherein the reference hit rate changes in proportion to the size of the map update region.

17. A method of operating a memory system, the memory system including a memory device and a memory controller that controls the memory device, the method comprising:

allocating, before updating a mapping table including mapping information between logical addresses and physical addresses, a portion of a map cache region that caches a plurality of map segments of the mapping table as a map update region for updating the mapping table; and

loading a subset of the plurality of map segments into the map update region.

18. The method of claim 17, further comprising: allocating a portion of the map cache region as the map update region when a number of writable pages in an open memory block of the memory device, the open memory block processing write commands received from a host, is less than or equal to a threshold number of pages.

19. The method of claim 18, wherein a sub-region in the map update region is a sub-region in which no map segment is cached, or a sub-region caching any one of N least recently used map segments whose hit count during a first time period is less than a threshold hit count, where N is a natural number.

20. The method of claim 17, further comprising: when a ratio of write commands received from a host to all commands within a second time period is greater than or equal to a threshold ratio, changing a size of the map update region based on a result of comparing a hit rate of the map cache region with a reference hit rate.

Technical Field

Embodiments of the present disclosure relate to a memory system, a memory controller, and a method of operating the memory system.

Background

Memory systems (e.g., storage devices) store data based on requests from hosts such as a computer, a mobile terminal (e.g., a smartphone or tablet), or any of a variety of other electronic devices. The memory system may be a device type that stores data in a magnetic disk, such as a Hard Disk Drive (HDD), or a device type that stores data in a nonvolatile memory, such as a Solid State Drive (SSD), a Universal Flash Storage (UFS) device, or an embedded MMC (eMMC) device.

The memory system may further include a memory controller for controlling the memory device. The memory controller may receive a command input from a host and, based on the received command, may execute or control an operation for reading, writing, or erasing data in a volatile memory or a nonvolatile memory included in the memory system. The memory controller may drive firmware to perform logical operations for running or controlling such operations.

When performing a read operation or a write operation based on a command received from a host, the memory system manages mapping information between logical addresses of memory requested from the host and physical addresses of the memory device using a mapping table. The memory system may cache the mapping segments in the mapping cache region in order to quickly retrieve mapping information from the mapping table. In addition, the memory system may update the mapping table at certain intervals or upon the occurrence of certain events so as to reflect changes to the mapping table.
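The lookup path described above can be sketched in a few lines. This is an illustrative model only, not the disclosed implementation; the segment size, the `MapCache` name, and the plain-LRU eviction policy are assumptions made for the example.

```python
# Illustrative sketch: a mapping table split into fixed-size segments, with a
# small cache holding recently used segments so L2P lookups avoid reading the
# full table. SEGMENT_SIZE and MapCache are hypothetical names.
from collections import OrderedDict

SEGMENT_SIZE = 4          # logical addresses per map segment (assumed)

class MapCache:
    def __init__(self, table, capacity):
        self.table = table              # full mapping table: {lba: pba}
        self.capacity = capacity        # number of segments the cache holds
        self.segments = OrderedDict()   # seg_id -> {lba: pba}, LRU order

    def lookup(self, lba):
        seg_id = lba // SEGMENT_SIZE
        if seg_id not in self.segments:             # cache miss: load segment
            if len(self.segments) >= self.capacity:
                self.segments.popitem(last=False)   # evict least recently used
            base = seg_id * SEGMENT_SIZE
            self.segments[seg_id] = {l: self.table[l]
                                     for l in range(base, base + SEGMENT_SIZE)
                                     if l in self.table}
        self.segments.move_to_end(seg_id)           # mark as most recently used
        return self.segments[seg_id].get(lba)

table = {0: 100, 1: 101, 5: 205}
cache = MapCache(table, capacity=2)
print(cache.lookup(1))   # 101 (loads segment 0)
print(cache.lookup(5))   # 205 (loads segment 1)
```

Caching whole segments rather than single entries exploits the spatial locality of host accesses, which is why the disclosure manages the cache at segment granularity.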

Disclosure of Invention

Embodiments of the present disclosure may provide a memory system, a memory controller, and an operating method of the memory system, which can quickly update a mapping table when the mapping table is updated.

In addition, embodiments of the present disclosure may provide a memory system, a memory controller, and an operating method of the memory system, which are capable of optimizing update performance of a mapping table within a limit of guaranteeing cache performance of the mapping table to a predetermined level or more.

In one aspect, embodiments of the present disclosure may provide a memory system including a memory device and a memory controller coupled to the memory device.

Before updating a mapping table including mapping information between logical addresses and physical addresses, the memory controller may allocate a portion of a map cache region that caches a plurality of map segments of the mapping table as a map update region for updating the mapping table.

The memory controller may load a subset of the plurality of map segments into the map update area.

The memory controller may allocate a portion of the map cache area as a map update area when a number of writable pages in an open memory block of the memory device is less than or equal to a threshold number of pages. The open memory block may be a memory block configured to process a write command received from a host.

In this case, a sub-region allocated to the map update region may be 1) a sub-region in which no map segment is cached, or 2) a sub-region caching any one of the N least recently used map segments whose hit count during a first time period is less than a threshold hit count. Here, N may be a natural number.
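The selection rule above (prefer uncached sub-regions, otherwise cold segments among the N least recently used) might look like the following sketch; the field names, the value of N, and the hit-count threshold are all hypothetical.

```python
# Hypothetical sketch of choosing which cache sub-regions may be reassigned to
# the map update region: empty sub-regions first, otherwise sub-regions caching
# one of the N least recently used segments whose recent hit count fell below
# a threshold. All names and parameter values are illustrative.
def pick_update_subregions(subregions, needed, n=4, hit_threshold=2):
    """subregions: list of dicts with 'segment', 'last_used', 'hits' keys."""
    empty = [s for s in subregions if s["segment"] is None]
    chosen = empty[:needed]
    if len(chosen) < needed:
        cached = sorted((s for s in subregions if s["segment"] is not None),
                        key=lambda s: s["last_used"])       # LRU first
        cold = [s for s in cached[:n] if s["hits"] < hit_threshold]
        chosen += cold[:needed - len(chosen)]
    return chosen

regions = [
    {"segment": None, "last_used": 0, "hits": 0},   # uncached: first pick
    {"segment": 7, "last_used": 1, "hits": 0},      # LRU and cold: second pick
    {"segment": 3, "last_used": 9, "hits": 5},      # hot: stays cached
]
picked = pick_update_subregions(regions, needed=2)
print([r["segment"] for r in picked])   # [None, 7]
```

The point of the two-tier rule is that reassigning an empty or cold sub-region costs the cache little, so hit-rate loss from carving out the update region is bounded.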

The memory controller may load a target map segment into the map update region, the target map segment being a map segment, among the plurality of map segments, that includes a logical address of data written to the open memory block. The memory controller may manage the list of target map segments in the form of an array.
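Collecting the target map segments into an array-style list could look like this minimal sketch; the segment size and the function name are assumptions for illustration.

```python
# Illustrative sketch: collect the "target" map segments (those containing
# logical addresses of data written to the open block) into a plain array,
# so they can be loaded into the map update region in one pass.
SEGMENT_SIZE = 4   # logical addresses per map segment (assumed)

def target_segments(written_lbas, segment_size=SEGMENT_SIZE):
    targets = []                        # list managed as a simple array
    for lba in written_lbas:
        seg_id = lba // segment_size
        if seg_id not in targets:       # record each target segment once
            targets.append(seg_id)
    return targets

print(target_segments([0, 1, 5, 9, 1]))   # [0, 1, 2]
```

An array keeps the bookkeeping cheap: appending during writes is O(1), and the update pass simply walks the array.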

When a ratio of write commands received from the host to all commands within the second period of time is greater than or equal to a threshold ratio, the memory controller may change the size of the mapping update region by comparing a hit rate of the mapping cache region with a reference hit rate.

The reference hit rate may change in proportion to the size of the map update area.

For example, when a hit rate of the map cache region is greater than or equal to the reference hit rate during the third period of time, the memory controller may increase the size of the map update region.

For another example, when the hit rate of the map cache region is less than the reference hit rate during the third period of time, the memory controller may decrease the size of the map update region.
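The resizing policy of the preceding paragraphs can be condensed into a small sketch; the base reference hit rate and its proportionality constant are invented for the example, since the disclosure does not give concrete values.

```python
# Hypothetical sketch of the resize policy described above: the reference hit
# rate grows in proportion to the update region's size, and the region grows
# only while the cache still meets that reference. Constants are illustrative.
BASE_REFERENCE = 0.50    # assumed reference hit rate for a size-0 update region
SLOPE = 0.02             # assumed increase in the reference per sub-region used

def resize_update_region(size, cache_hit_rate):
    reference = BASE_REFERENCE + SLOPE * size    # proportional to region size
    if cache_hit_rate >= reference:
        return size + 1        # cache is healthy: enlarge the update region
    return max(size - 1, 0)    # cache degraded: shrink the update region

print(resize_update_region(4, 0.70))   # 5  (0.70 >= 0.58, grow)
print(resize_update_region(4, 0.55))   # 3  (0.55 <  0.58, shrink)
```

Raising the reference as the region grows acts as a brake: the larger the slice taken from the cache, the stronger the evidence of cache health required to take more.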

In another aspect, embodiments of the present disclosure may provide a memory controller including a memory interface configured to communicate with a memory device and control circuitry coupled to the memory device.

Before updating the mapping table including mapping information between logical addresses and physical addresses, the control circuit may allocate a portion of a map cache region that caches a plurality of map segments of the mapping table as a map update region for updating the mapping table.

The control circuitry may load a subset of the plurality of map segments into the map update area.

The control circuitry may allocate a portion of the map cache area as a map update area when a number of writable pages in an open memory block of the memory device is less than or equal to a threshold number of pages. The open memory block may be a memory block configured to process a write command received from a host.

In this case, a sub-region allocated to the map update region may be 1) a sub-region in which no map segment is cached, or 2) a sub-region caching any one of the N least recently used map segments whose hit count during a first time period is less than a threshold hit count. Here, N may be a natural number.

The control circuit may load a target map segment into the map update region, the target map segment being a map segment, among the plurality of map segments, that includes a logical address of data written to the open memory block. The control circuit may manage the list of target map segments in the form of an array.

When a ratio of write commands received from the host to all commands within the second period of time is greater than or equal to a threshold ratio, the control circuit may change the size of the mapping update region by comparing a hit rate of the mapping cache region with a reference hit rate.

In this case, the reference hit rate may be changed in proportion to the size of the map update area.

For example, when the hit rate of the map cache area is greater than or equal to the reference hit rate during the third period of time, the control circuit may increase the size of the map update area.

For another example, when the hit rate of the map cache area is less than the reference hit rate in the third period of time, the control circuit may decrease the size of the map update area.

In another aspect, embodiments of the present disclosure may provide an operating method of a memory system including a memory device and a memory controller configured to control the memory device. The operating method may include: allocating, before updating a mapping table including mapping information between logical addresses and physical addresses, a portion of a map cache region that caches a plurality of map segments of the mapping table as a map update region for updating the mapping table.

Additionally, a method of operation of a memory system may include loading a subset of a plurality of map segments into a map update area.

When the number of writable pages in an open memory block of the memory device is less than or equal to a threshold number of pages, a portion of the mapping cache area may be allocated as a mapping update area.

A sub-region allocated to the map update region may be: 1) a sub-region in which no map segment is cached, or 2) a sub-region caching any one of the N least recently used map segments whose hit count during a first time period is less than a threshold hit count. Here, N is a natural number.

When a ratio of write commands received from the host to all commands within the second period of time is greater than or equal to a threshold ratio, the size of the mapping update region may be changed based on a result of comparing a hit rate of the mapping cache region with a reference hit rate.

In another aspect, embodiments of the present disclosure may provide a memory system including: a memory device including an open memory block; and a controller including a mapping table storing a plurality of mapping segments indicating mappings between logical addresses and physical addresses of the memory device, and a mapping cache region for caching the plurality of mapping segments.

The controller may allocate a portion of the map cache area as a map update area based on the number of writable pages in the open memory block.

The controller may load a subset of the plurality of map segments into the map update area.

The controller may update the mapping table using the loaded subset.
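Putting the pieces together, a highly simplified flow might read as follows. All thresholds and data structures here are hypothetical, and real firmware would operate on flash pages rather than Python dictionaries.

```python
# A compressed, hypothetical end-to-end flow of the scheme above: when the
# open block is nearly full, carve an update region out of the cache, load
# the affected (dirty) segments into it, and flush them to the mapping table.
def maybe_update_map(table, cache_segments, dirty, writable_pages,
                     page_threshold=8, update_slots=2):
    if writable_pages > page_threshold:
        return False                     # open block not full enough yet
    update_region = {}                   # portion of the cache, reallocated
    for seg_id in list(dirty)[:update_slots]:
        update_region[seg_id] = cache_segments.get(seg_id, {})
    for seg_id, entries in update_region.items():
        table.setdefault(seg_id, {}).update(entries)   # flush to map table
        dirty.discard(seg_id)
    return True

table = {}
cache = {0: {0: 100, 1: 101}}            # cached segment 0 with new mappings
dirty = {0}
updated = maybe_update_map(table, cache, dirty, writable_pages=3)
print(updated, table)   # True {0: {0: 100, 1: 101}}
```

Triggering the update while the open block still has a few writable pages lets the flush overlap the tail of the write burst instead of stalling the next one.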

According to the embodiments of the present disclosure, the mapping table may be quickly updated when the mapping table is updated.

In addition, according to the embodiments of the present disclosure, the update performance of the mapping table can be optimized within the limit of guaranteeing the cache performance of the mapping table to a predetermined level or higher.

Drawings

The above and other aspects, features and advantages of the present disclosure will become more apparent from the following detailed description when taken in conjunction with the accompanying drawings in which:

FIG. 1 is a schematic diagram showing a configuration of a memory system according to an embodiment of the present disclosure;

FIG. 2 is a block diagram schematically illustrating a memory device, according to an embodiment of the present disclosure;

FIG. 3 is a diagram schematically illustrating a memory block of a memory device, according to an embodiment of the present disclosure;

FIG. 4 is a diagram illustrating structures of word lines and bit lines of a memory device according to an embodiment of the present disclosure;

FIG. 5 is a diagram that schematically illustrates the operation of a memory system, in accordance with an embodiment of the present disclosure;

FIG. 6 is a diagram illustrating a configuration of an open memory block according to an embodiment of the present disclosure;

FIG. 7 is a flowchart illustrating an example of the operation of a memory system to allocate a map update region according to an embodiment of the present disclosure;

FIG. 8 is a diagram illustrating an example of mapping cache regions, according to an embodiment of the present disclosure;

FIG. 9 is a diagram illustrating an example of sub-regions of the map cache region of FIG. 8 that may be allocated to a map update region;

FIG. 10 is a diagram showing an example of a map segment loaded to a map update area;

FIG. 11 is a flowchart illustrating an example of the operation of the memory system to change the size of the map update area in accordance with an embodiment of the present disclosure;

FIG. 12 is a diagram illustrating an example of a memory system changing a reference hit rate according to an embodiment of the present disclosure;

FIG. 13 is a diagram showing an example in which a memory system changes the size of a map update region by comparing the hit rate of a map cache region with a reference hit rate, according to an embodiment of the present disclosure;

FIG. 14 is a flow chart illustrating a method of operation of a memory system according to an embodiment of the present disclosure; and

FIG. 15 is a diagram illustrating a configuration of a computing system according to an embodiment of the present disclosure.

Detailed Description

Hereinafter, embodiments of the present disclosure are described in detail with reference to the accompanying drawings. Throughout the specification, references to "an embodiment," "another embodiment," and so forth, are not necessarily to one embodiment, and different references to any such phrase are not necessarily to the same embodiment.

FIG. 1 is a schematic diagram showing a configuration of a memory system 100 according to an embodiment of the present disclosure.

Referring to FIG. 1, a memory system 100 may include a memory device 110 configured to store data and a memory controller 120 configured to control the memory device 110.

Memory device 110 may include a plurality of memory blocks. Memory device 110 may be configured to operate in response to control signals received from memory controller 120. The operation of memory device 110 may include, for example, a read operation, a program operation (also referred to as a "write operation"), and an erase operation.

Memory device 110 may include a memory cell array that includes a plurality of memory cells (also referred to simply as "cells") configured to store data. The memory cell array may exist inside a memory block.

For example, memory device 110 may be implemented by any of various types of memory such as: Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Low Power Double Data Rate 4 (LPDDR4) SDRAM, Graphics Double Data Rate (GDDR) SDRAM, Low Power DDR (LPDDR), Rambus Dynamic Random Access Memory (RDRAM), NAND flash memory, vertical NAND flash memory, NOR flash memory, Resistive Random Access Memory (RRAM), Phase-change Random Access Memory (PRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), or Spin Transfer Torque Random Access Memory (STT-RAM).

Memory device 110 may be implemented in a three-dimensional array structure. Embodiments of the present disclosure may be applied not only to a flash memory device in which the charge storage layer is configured as a conductive floating gate, but also to a charge trap flash (CTF) memory device in which the charge storage layer is configured as an insulating film.

Memory device 110 may be configured to receive a command and an address from memory controller 120 and access a region of the memory cell array selected by the address. That is, the memory device 110 may perform an operation corresponding to the received command in a memory area of the memory device having a physical address corresponding to an address received from the controller.

For example, memory device 110 may perform a program operation, a read operation, and an erase operation. During a programming operation, the memory device 110 may program data in a region selected by an address. During a read operation, the memory device 110 may read data from an area selected by an address. During an erase operation, the memory device 110 may erase data stored in the area selected by the address.

The memory controller 120 may control write operations (or program operations), read operations, erase operations, and background operations with respect to the memory device 110. Background operations may include, for example, Garbage Collection (GC) operations, Wear Leveling (WL) operations, and/or Bad Block Management (BBM) operations.

Memory controller 120 may control the operation of memory device 110 at the request of a host. Alternatively, the memory controller 120 may control the operation of the memory device 110 without a corresponding request by the host, such as when the memory controller 120 performs one or more background operations of the memory device 110.

The memory controller 120 and the host may be separate devices. In another embodiment, memory controller 120 and the host may be integrated and implemented as a single device. In the following description, the memory controller 120 and the host are separate devices.

In FIG. 1, the memory controller 120 may include a host interface (I/F) 121, a memory interface 122, and a control circuit 123.

The host interface 121 may be configured to provide an interface for communicating with a host.

When receiving a command from the HOST (HOST), the control circuit 123 may receive the command through the HOST interface 121, and may perform an operation of processing the received command.

Memory interface 122 may be connected to memory device 110 to provide an interface for communicating with memory device 110. That is, the memory interface 122 may be configured to provide an interface between the memory device 110 and the memory controller 120 in response to control by the control circuit 123.

The control circuit 123 may perform overall control of the memory controller 120 in order to control the operation of the memory device 110. For example, the control circuit 123 may include a processor 124 and a working memory 125, and may further include an error detection and correction circuit (ECC circuit) 126.

Processor 124 may control the overall operation of memory controller 120. The processor 124 may perform logical operations. The processor 124 may communicate with a host through the host interface 121. Processor 124 may communicate with memory device 110 through memory interface 122.

Processor 124 may perform the functions of a Flash Translation Layer (FTL). Processor 124 may translate host-provided Logical Block Addresses (LBAs) to Physical Block Addresses (PBAs) via the FTL. The FTL can receive the LBA and convert the LBA to a PBA using a mapping table.

The FTL may employ various address mapping methods according to the mapping unit. Typical address mapping methods include a page mapping method, a block mapping method, and a hybrid mapping method.
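The difference between the first two mapping granularities can be illustrated briefly. The table contents and pages-per-block count below are assumed values chosen only for the example.

```python
# Illustrative contrast of two basic FTL mapping granularities named above:
# page mapping keeps one table entry per logical page, while block mapping
# keeps one entry per logical block and derives the page offset arithmetically.
PAGES_PER_BLOCK = 4        # assumed geometry for the example

def page_map_translate(page_table, lba):
    return page_table[lba]                       # one entry per page

def block_map_translate(block_table, lba):
    lbn, offset = divmod(lba, PAGES_PER_BLOCK)   # one entry per block
    return block_table[lbn] * PAGES_PER_BLOCK + offset

page_table = {5: 42}       # logical page 5 -> physical page 42
block_table = {1: 10}      # logical block 1 -> physical block 10
print(page_map_translate(page_table, 5))    # 42
print(block_map_translate(block_table, 5))  # 41 (physical block 10, offset 1)
```

Page mapping is flexible but needs a large table, which is exactly why the table is segmented and cached as described in this disclosure; block mapping keeps the table small at the cost of in-place update restrictions, and hybrid schemes mix the two.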

The processor 124 may be configured to randomize data received from the host. For example, the processor 124 may randomize data received from the host using a randomization seed. The randomized data is provided to the memory device 110 as data to be stored and programmed in the memory cell array.

During a read operation, processor 124 may be configured to derandomize data received from memory device 110. For example, the processor 124 may use the derandomization seed to derandomize data received from the memory device 110. The derandomized data can be output to the host.
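One common way to implement such seed-based randomization and derandomization is to XOR the data with a seeded pseudo-random stream; the disclosure does not specify the scheme, so the sketch below is purely illustrative.

```python
# Minimal sketch of seed-based (de)randomization: XOR with a seeded
# pseudo-random byte stream. XOR with the same stream is its own inverse,
# so the same routine serves both directions. Purely illustrative.
import random

def randomize(data, seed):
    rng = random.Random(seed)
    return bytes(b ^ rng.randrange(256) for b in data)

derandomize = randomize    # applying the same stream again restores the data

payload = b"\x00\x00\xff\xff"
scrambled = randomize(payload, seed=7)
print(derandomize(scrambled, seed=7) == payload)   # True
```

Randomization breaks up long runs of identical bits before programming, which reduces cell-to-cell interference in the flash array.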

The processor 124 may execute Firmware (FW) to control the operation of the memory controller 120. In other words, the processor 124 may control the overall operation of the memory controller 120, and in order to perform a logical operation, firmware loaded into the working memory 125 may be executed (or driven) during startup.

Firmware refers to a program that runs inside the memory system 100 and may include various functional layers.

For example, the firmware may include a Flash Translation Layer (FTL), a Host Interface Layer (HIL), and/or a Flash Interface Layer (FIL). As described above, the FTL is configured to translate between logical addresses received from a host and physical addresses of the memory devices 110. The HIL is configured to interpret commands issued by a host to the memory system 100 (or storage devices) and pass the commands to the FTL. The FIL is configured to pass commands issued by the FTL to the memory device 110.

For example, the firmware may be stored in the memory device 110 and then loaded into the working memory 125.

Working memory 125 may store firmware, program code, commands, or data necessary to drive memory controller 120. The working memory 125 may include, for example, static RAM (SRAM), dynamic RAM (DRAM), and/or synchronous DRAM (SDRAM) as volatile memory.

The error detection/correction circuitry 126 may be configured to detect one or more erroneous bits of the target data using the error correction code and correct the detected erroneous bits. For example, the target data may be data stored in the working memory 125, data retrieved from the memory device 110, or the like.

The error detection/correction circuitry 126 may be implemented to decode data using error correction codes. The error detection/correction circuitry 126 may be implemented using various code decoders. For example, the error detection/correction circuitry 126 may be implemented with a decoder that performs non-systematic code decoding or a decoder that performs systematic code decoding.

For example, the error detection/correction circuit 126 may detect error bits sector by sector in each piece of read data. That is, each piece of read data may include a plurality of sectors. As used herein, a sector may refer to a unit of data smaller than the read unit (i.e., a page) of flash memory. The sectors constituting each piece of read data may be associated with each other via addresses.

The error detection/correction circuit 126 may calculate a Bit Error Rate (BER) and determine whether correction is possible on a sector-by-sector basis. For example, if the BER is higher than the reference value, the error detection/correction circuit 126 may determine that the corresponding sector is uncorrectable or "failed". If the BER is below the reference value, the error detection/correction circuit 126 may determine that the corresponding sector is correctable or "pass".
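The per-sector pass/fail decision can be sketched as follows; the sector size and BER limit are example values, not figures from the disclosure.

```python
# Illustrative per-sector pass/fail classification based on a bit error rate
# (BER) threshold, as described above. SECTOR_BITS and ber_limit are assumed.
SECTOR_BITS = 32           # bits per sector (example value)

def classify_sectors(error_bits_per_sector, ber_limit=0.05):
    results = []
    for errors in error_bits_per_sector:
        ber = errors / SECTOR_BITS            # fraction of flipped bits
        results.append("pass" if ber <= ber_limit else "fail")
    return results

print(classify_sectors([0, 1, 4]))   # ['pass', 'pass', 'fail']
```

Classifying at sector granularity lets the controller retry or report only the failing sectors instead of discarding the whole page.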

The error detection/correction circuit 126 may sequentially perform error detection and correction operations for all the pieces of read data. When a sector in the read data is correctable, the error detection/correction circuit 126 may omit the error detection and correction operation related to the corresponding sector for the next read data. After completing the error detection and correction operations for all the pieces of read data in this manner, the error detection/correction circuit 126 can detect sectors that are deemed to be finally uncorrectable. There may be one or more sectors that are considered uncorrectable. Error detection/correction circuitry 126 may pass information (e.g., address information) about sectors that are deemed uncorrectable to processor 124.

Bus 127 may be configured to provide a channel between constituent elements of memory controller 120 (e.g., host interface 121, memory interface 122, processor 124, working memory 125, and error detection/correction circuitry 126). The bus 127 may include, for example, a control bus for transferring various control signals and a data bus for transferring various data.

The above-described constituent elements of the memory controller 120 are provided only as examples. One or more of these elements may be omitted and/or one or more of these elements may be integrated into a single element. Of course, as will be understood by those skilled in the art, memory controller 120 may contain one or more other elements in addition to the elements identified above.

Hereinafter, the memory device 110 is described in more detail with reference to fig. 2.

FIG. 2 is a block diagram schematically illustrating a memory device 110 according to an embodiment of the present disclosure.

Referring to FIG. 2, the memory device 110 may include a memory cell array 210, an address decoder 220, a read/write circuit 230, control logic 240, and a voltage generation circuit 250.

Memory cell array 210 may include a plurality of memory blocks BLK1-BLKz (where z is a natural number greater than or equal to 2).

In the plurality of memory blocks BLK1-BLKz, a plurality of word lines WL and a plurality of bit lines BL may be arranged, and a plurality of memory cells MC may be provided.

A plurality of memory blocks BLK1-BLKz may be connected to address decoder 220 by a plurality of word lines WL. The plurality of memory blocks BLK1-BLKz may be connected to the read/write circuit 230 through a plurality of bit lines BL.

Each of the plurality of memory blocks BLK1-BLKz may include a plurality of memory cells. For example, the plurality of memory cells are nonvolatile memory cells, and may include nonvolatile memory cells having a vertical channel structure.

The memory cell array 210 may be configured as a memory cell array having a two-dimensional structure, and in some cases, may be configured as a memory cell array having a three-dimensional structure.

Each of the plurality of memory cells in the memory cell array 210 may store at least one bit of data. For example, each of the plurality of memory cells may be a Single-Level Cell (SLC) configured to store one bit of data, a Multi-Level Cell (MLC) configured to store two bits of data, a Triple-Level Cell (TLC) configured to store three bits of data, or a Quad-Level Cell (QLC) configured to store four bits of data. As another example, memory cell array 210 may include a plurality of memory cells each configured to store five or more bits of data.

In FIG. 2, the address decoder 220, the read/write circuit 230, the control logic 240, and the voltage generation circuit 250 may function as peripheral circuits configured to drive the memory cell array 210.

The address decoder 220 may be connected to the memory cell array 210 through a plurality of word lines WL.

Address decoder 220 may be configured to operate in response to control by control logic 240.

Address decoder 220 may receive addresses through input/output buffers (not shown) internal to memory device 110. The address decoder 220 may be configured to decode a block address among the received addresses. The address decoder 220 may select at least one memory block according to the decoded block address.

The address decoder 220 may receive the read voltage Vread and the pass voltage Vpass from the voltage generation circuit 250.

During a read operation, the address decoder 220 may apply a read voltage Vread to a selected word line WL inside a selected memory block, and may apply a pass voltage Vpass to the remaining unselected word lines WL.

During a program verify operation, the address decoder 220 may apply a verify voltage generated by the voltage generation circuit 250 to the selected word line WL inside the selected memory block, and may apply the pass voltage Vpass to the remaining unselected word lines WL.

The address decoder 220 may be configured to decode a column address among the received addresses. The address decoder 220 may transmit the decoded column address to the read/write circuit 230.

Memory device 110 may perform read operations and program operations on a page-by-page basis. The address received when a read operation or a program operation is requested may include at least one of a block address, a row address, and a column address.

The address decoder 220 may select one memory block and one word line according to a block address and a row address. The column address may be decoded by address decoder 220 and provided to read/write circuit 230.

The address decoder 220 may include a block decoder, a row decoder, a column decoder, and/or an address buffer.

The read/write circuit 230 may include a plurality of page buffers PB. When the memory cell array 210 performs a read operation, the read/write circuit 230 may operate as a "read circuit", and when the memory cell array 210 performs a write operation, the read/write circuit 230 may operate as a "write circuit".

The read/write circuit 230 is also referred to as a page buffer circuit or a data register circuit including a plurality of page buffers PB. The read/write circuits 230 may include data buffers that participate in data processing functions and, in some cases, may further include cache buffers that operate with cache functions.

The plurality of page buffers PB may be connected to the memory cell array 210 through a plurality of bit lines BL. In order to sense the threshold voltage Vth of a memory cell during a read operation and a program verify operation, the plurality of page buffers PB may continuously supply a sensing current to the bit line BL connected to the memory cell, may sense a change in the amount of current flowing according to the programmed state of the corresponding memory cell through the sensing node, and may latch the change in the amount of current as sensing data.

The read/write circuit 230 may operate in response to a page buffer control signal output from the control logic 240.

During a read operation, the read/write circuit 230 senses data in the memory cells, temporarily stores the retrieved data, and outputs the data DATA to the input/output buffer of the memory device 110. In an embodiment, the read/write circuit 230 may include a column selection circuit in addition to the page buffers PB or the page register.

Control logic 240 may be connected to address decoder 220, read/write circuit 230, and voltage generation circuit 250. Control logic 240 may receive command CMD and control signal CTRL through an input/output buffer of memory device 110.

Control logic 240 may be configured to control overall operation of memory device 110 in response to control signal CTRL. The control logic 240 may output a control signal for adjusting the precharge potential level of the sensing nodes of the plurality of page buffers PB.

The control logic 240 may control the read/write circuits 230 to perform read operations in the memory cell array 210. The voltage generation circuit 250 may generate a read voltage Vread and a pass voltage Vpass used during a read operation in response to a voltage generation circuit control signal output from the control logic 240.

Fig. 3 is a diagram schematically illustrating a memory block BLK of the memory device 110 according to an embodiment of the present disclosure.

Referring to fig. 3, the memory block BLK may be configured such that a plurality of pages PG and a plurality of strings STR intersect.

The plurality of pages PG correspond to a plurality of word lines WL, and the plurality of strings STR correspond to a plurality of bit lines BL.

In the memory block BLK, a plurality of word lines WL and a plurality of bit lines BL may be arranged to intersect. For example, each of the plurality of word lines WL may be arranged in a row direction, and each of the plurality of bit lines BL may be arranged in a column direction. For another example, each of the plurality of word lines WL may be arranged in a column direction, and each of the plurality of bit lines BL may be arranged in a row direction.

The plurality of word lines WL and the plurality of bit lines BL may intersect each other, thereby defining a plurality of memory cells MC. Each memory cell MC may have a transistor TR disposed therein.

For example, the transistor TR may include a drain, a source, and a gate. The drain (or source) of the transistor TR may be connected to the corresponding bit line BL directly or via another transistor TR. The source (or drain) of the transistor TR may be connected to a source line (which may be ground) directly or via another transistor TR. The gate of the transistor TR may include a Floating Gate (FG) surrounded by an insulator and a Control Gate (CG) to which a gate voltage is applied from the word line WL.

In each of the plurality of memory blocks BLK1-BLKz, a first select line (also referred to as a source select line or a drain select line) may be additionally disposed outside a first outermost word line closer to the read/write circuit 230 among two outermost word lines, and a second select line (also referred to as a drain select line or a source select line) may be additionally disposed outside another second outermost word line.

In some cases, at least one dummy word line may be additionally disposed between the first outermost word line and the first select line. In addition, dummy word line(s) may be additionally disposed between the second outermost word line and the second select line.

In the case of the memory block structure as shown in fig. 3, a read operation and a program operation (i.e., a write operation) may be performed page by page, and an erase operation may be performed block by block.

Fig. 4 is a diagram illustrating the structure of word lines WL and bit lines BL of the memory device 110 according to an embodiment of the present disclosure.

Referring to fig. 4, the memory device 110 has a core region in which memory cells MC are concentrated and an auxiliary region corresponding to the remaining non-core region. The auxiliary area supports the operation of the memory cell array 210.

The core region may include pages PG and strings STR. In the core region, a plurality of word lines WL1-WL9 and a plurality of bit lines BL are arranged to intersect.

Word lines WL1-WL9 may be connected to row decoder 410. The bit line BL may be connected to the column decoder 420. A data register 430 corresponding to the read/write circuit 230 of fig. 2 may exist between the plurality of bit lines BL and the column decoder 420.

The plurality of word lines WL1-WL9 may correspond to the plurality of pages PG.

For example, each of the plurality of word lines WL1-WL9 may correspond to one page PG as shown in FIG. 4. Alternatively, when the size of each of the plurality of word lines WL1-WL9 is large, each of the plurality of word lines WL1-WL9 may correspond to at least two (e.g., two or four) pages PG. Each page PG is the minimum unit for performing a program operation and a read operation, and all memory cells MC within the same page PG operate simultaneously when a program operation or a read operation is performed.

A plurality of bit lines BL, which may include alternating odd-numbered bit lines and even-numbered bit lines, may be connected to the column decoder 420.

To access the memory cell MC, an address may be input to the core area first through the input/output terminal and then through the row decoder 410 and the column decoder 420, so that a corresponding target memory cell may be designated. As used herein, designating a target memory cell refers to accessing one of the memory cells MC at an intersection between a word line WL1-WL9 connected to the row decoder 410 and a bit line BL connected to the column decoder 420 to program data into or read programmed data from the one memory cell.

The page PG in the first direction (e.g., X-axis direction) is bound by a common line called a word line WL, and the strings STR in the second direction (e.g., Y-axis direction) are bound (connected) by a common line called a bit line BL. As used herein, co-binding refers to structural connection through the same material and receiving the same voltage simultaneously during the application of the voltage. Due to the voltage drop across the previous memory cell MC among the series-connected memory cells MC, the voltage applied to a memory cell MC among the memory cells MC may be slightly different from the voltage applied to another memory cell down the line.

Since all data processing of the memory device 110, including programming operations and read operations, occurs via the data register 430, the data register 430 plays an important role. If the data processing of the data register 430 is delayed, all other regions need to wait until the data register 430 completes the data processing. In addition, the performance degradation of the data register 430 may degrade the overall performance of the memory device 110.

In the example shown in fig. 4, in one string STR, there may be a plurality of transistors TR1-TR9 connected to a plurality of word lines WL1-WL9. The region where the plurality of transistors TR1-TR9 exist corresponds to the memory cell MC. As used herein, the plurality of transistors TR1-TR9 refer to transistors that include a control gate CG and a floating gate FG.

The plurality of word lines WL1-WL9 includes two outermost word lines WL1 and WL9. The first select line DSL may be additionally disposed outside the first outermost word line WL1, which is closer to the data register 430 in terms of signal path among the two outermost word lines WL1 and WL9, and the second select line SSL may be additionally disposed outside the other second outermost word line WL9.

The first selection transistor D-TR, which is controlled to be turned on/off by the first selection line DSL, has a gate electrode connected to the first selection line DSL, but does not include the floating gate FG. The second selection transistor S-TR, which is controlled to be turned on/off by the second selection line SSL, has a gate electrode connected to the second selection line SSL, but does not include the floating gate FG.

The first selection transistors D-TR function as switches that turn on or off the connection between the corresponding strings STR and the data register 430. The second selection transistor S-TR functions as a switch that turns on or off the connection between the corresponding string STR and the source line SL. That is, the first and second selection transistors D-TR and S-TR function as gatekeepers at opposite ends of the corresponding string STR and pass/block signals.

During a program operation, the memory system 100 fills the bit line BL connected to a target memory cell MC to be programmed with electrons. Accordingly, the memory system 100 applies the turn-on voltage Vcc to the gate electrode of the first selection transistor D-TR to turn on the first selection transistor D-TR, and applies the turn-off voltage (e.g., 0V) to the gate electrode of the second selection transistor S-TR to turn off the second selection transistor S-TR.

The memory system 100 turns on both the first and second select transistors D-TR and S-TR during a read operation or a verify operation. Accordingly, a current may flow through the corresponding string STR and to the source line SL corresponding to the ground, so that the voltage level of the bit line BL may be measured. However, during a read operation, there may be a time difference in the on/off between the first and second select transistors D-TR and S-TR.

The memory system 100 may supply a voltage (e.g., +20V) to the substrate through the source line SL during an erase operation. The memory system 100 floats both the first selection transistor D-TR and the second selection transistor S-TR during the erase operation, thereby creating an effectively infinite resistance. Thus, the influence of the first selection transistor D-TR and the second selection transistor S-TR is eliminated, and electrons move only between the floating gate FG and the substrate due to the potential difference.

Fig. 5 is a diagram schematically illustrating an operation of the memory system 100 of fig. 1 according to an embodiment of the present disclosure.

Referring to fig. 5, the memory controller 120 of the memory system 100 may include a mapping table MAP_TBL including mapping information between logical addresses and physical addresses. The memory controller 120 may allocate a portion of the mapping cache region MAP_CACHE_AREA, which caches mapping segments MAP_SEG of the mapping table MAP_TBL, as the mapping update region MAP_UPDATE_AREA in order to update the mapping table MAP_TBL.

The mapping table MAP _ TBL may be on the working memory 125 of the memory controller 120. The mapping table MAP _ TBL may be loaded from the memory device 110 when the memory system 100 is booted.

In an embodiment, the MAP CACHE AREA MAP _ CACHE _ AREA may also be located on the working memory 125 of the memory controller 120, like the MAP table MAP _ TBL. Alternatively, in another embodiment, the MAP CACHE AREA MAP _ CACHE _ AREA may be on a different volatile memory (e.g., TCM, SRAM, DRAM, or SDRAM) than the working memory 125.

The mapping table MAP _ TBL may include a plurality of mapping segments MAP _ SEG, and each of the plurality of mapping segments MAP _ SEG may include a plurality of pieces of mapping information. Each piece of mapping information may indicate a specific physical address PA mapped to a specific logical address LA. In the illustrated example, the mapping information may indicate that the logical address LA 0 and the physical address PA 100 are mapped to each other. As another example, the mapping information may indicate that the logical address LA 2 and the physical address PA 103 are mapped to each other.
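As a non-limiting illustration, the segmented mapping-table layout described above may be sketched as follows. The segment size of 512 entries follows the example given later for Fig. 10, and the class and method names are hypothetical, not from the patent.

```python
ENTRIES_PER_SEGMENT = 512  # mapping entries per segment (Fig. 10 example)

class MapTable:
    def __init__(self, num_segments):
        # Each segment holds ENTRIES_PER_SEGMENT logical-to-physical
        # entries; None marks a logical address with no mapping yet.
        self.segments = [[None] * ENTRIES_PER_SEGMENT
                         for _ in range(num_segments)]

    def segment_index(self, la):
        # Logical address la belongs to segment la // 512.
        return la // ENTRIES_PER_SEGMENT

    def set_mapping(self, la, pa):
        self.segments[self.segment_index(la)][la % ENTRIES_PER_SEGMENT] = pa

    def lookup(self, la):
        return self.segments[self.segment_index(la)][la % ENTRIES_PER_SEGMENT]

table = MapTable(num_segments=4)
table.set_mapping(0, 100)   # LA 0 -> PA 100, as in the Fig. 5 example
table.set_mapping(2, 103)   # LA 2 -> PA 103
```

Dividing the table into fixed-size segments lets the controller cache and flush mapping information in segment units rather than one entry at a time.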

The mapping information cached in the mapping segment MAP _ SEG in the mapping CACHE region MAP _ CACHE _ AREA may be changed by a program/erase operation, a background operation (e.g., garbage collection), or the like. Memory controller 120 may update mapping table MAP _ TBL to reflect changes in mapping segment MAP _ SEG cached in mapping CACHE region MAP _ CACHE _ AREA.

In this case, the memory controller 120 may previously allocate a part of the MAP CACHE region MAP _ CACHE _ AREA as the MAP UPDATE region MAP _ UPDATE _ AREA before updating the MAP _ TBL. The MAP UPDATE region MAP _ UPDATE _ AREA is a region for updating the MAP _ TBL.

Pre-allocating a portion of the mapping cache area before updating the mapping table may increase the speed at which the update is performed. This is because loading the mapping segments MAP_SEG to be updated before the memory controller 120 updates the mapping table MAP_TBL reduces the time taken to retrieve those segments during the update.

In addition, the memory controller 120 may load a subset of the above-described mapping segments MAP _ SEG to the mapping UPDATE region MAP _ UPDATE _ AREA in order to UPDATE the mapping table MAP _ TBL. The subset may include some or all of the mapping segments MAP _ SEG in the mapping table MAP _ TBL.

For example, if it is determined that the mapping table MAP _ TBL is to be updated, the memory controller 120 may load all or some of the mapping segments MAP _ SEG in the mapping table MAP _ TBL to the mapping UPDATE AREA MAP _ UPDATE _ AREA before updating the mapping table MAP _ TBL.

In this case, the time when the memory controller 120 allocates a portion of the MAP CACHE AREA MAP _ CACHE _ AREA as the MAP UPDATE AREA MAP _ UPDATE _ AREA may be, for example, a time when the number of writable pages in the open memory block for processing the write command received from the host is less than or equal to a threshold number of pages. Hereinafter, this will be described in detail with reference to fig. 6 and 7.

Fig. 6 is a diagram illustrating a configuration of an OPEN memory block OPEN _ MEM _ BLK according to an embodiment of the present disclosure.

Referring to fig. 6, in response to a write command received from the host, the memory controller 120 may control the memory device 110 to store write data received from the host in units of pages in the OPEN memory block OPEN _ MEM _ BLK. The OPEN memory block OPEN _ MEM _ BLK may include a plurality of pages, and each of the plurality of pages may be an occupied page in which data is stored or may be a writable page in which data is not stored.

The data stored in each page in the OPEN memory block OPEN _ MEM _ BLK may correspond to a specific logical address value. For example, in the OPEN memory block OPEN _ MEM _ BLK shown in fig. 6, the data stored in page #0 corresponds to logical address LA 0, the data stored in page #1 corresponds to logical address LA 100, the data stored in page #2 corresponds to logical address LA 512, and the data stored in page #3 corresponds to logical address LA 1025. In addition, pages #4 to #29 are writable pages where no data is stored.

Fig. 7 is a flowchart illustrating an example of an operation of the memory system 100 of fig. 1 to allocate the MAP UPDATE AREA MAP _ UPDATE _ AREA according to an embodiment of the present disclosure.

Referring to fig. 7, the memory controller 120 of the memory system 100 may calculate the number of writable pages, denoted as A, in an OPEN memory block OPEN_MEM_BLK (S710). In the example of fig. 6, A is 26 because the writable pages are pages #4 to #29.

The memory controller 120 determines whether a is less than or equal to a threshold number of pages (e.g., 10 or 20) (S720).

If A is less than or equal to the threshold number of pages (YES at S720), this means that the mapping table MAP _ TBL is about to be updated. Accordingly, the memory controller 120 may allocate a portion of the MAP CACHE region MAP _ CACHE _ AREA as the MAP UPDATE region MAP _ UPDATE _ AREA (S730).

On the other hand, if a exceeds the threshold number of pages (no in S720), this means that the mapping table MAP _ TBL is not updated immediately. Therefore, the memory controller 120 does not allocate the MAP UPDATE AREA MAP _ UPDATE _ AREA in the MAP CACHE AREA MAP _ CACHE _ AREA to continue using the MAP CACHE AREA MAP _ CACHE _ AREA only for the CACHE (S740).
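The decision of steps S710-S740 may be sketched as follows. The threshold of 10 pages and the function name are illustrative assumptions; only the comparison of A against the threshold is taken from the text.

```python
THRESHOLD_PAGES = 10  # illustrative; the text suggests e.g. 10 or 20

def should_allocate_update_area(total_pages, occupied_pages):
    """Return True when the number of writable pages A in the open
    memory block is less than or equal to the threshold (S720)."""
    writable = total_pages - occupied_pages  # A: pages not yet written
    return writable <= THRESHOLD_PAGES

# Fig. 6 example: 30 pages, 4 occupied, so A = 26 and no allocation yet.
print(should_allocate_update_area(30, 4))
```

Once writes fill the open block past the threshold, the same check returns True and the controller can carve the map update region out of the cache ahead of the update.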

Hereinafter, an operation of selecting, from among the sub-regions of the mapping cache region MAP_CACHE_AREA, the sub-regions to be allocated as the mapping update region MAP_UPDATE_AREA is described.

Fig. 8 is a diagram illustrating an example of mapping a CACHE region MAP _ CACHE _ AREA according to an embodiment of the present disclosure.

In some embodiments, the MAP CACHE AREA MAP _ CACHE _ AREA may be divided into a plurality of sub-AREAs. Specifically, the MAP CACHE AREA MAP _ CACHE _ AREA may be divided into 1) an empty sub-AREA where the MAP segment MAP _ SEG is not cached and 2) a sub-AREA where the MAP segment MAP _ SEG is cached.

In the example shown in fig. 8, the mapping CACHE region MAP _ CACHE _ AREA may be divided into ten sub-regions, where sub-regions #0, #1, #2, and #3 are empty sub-regions, and sub-regions #4, #5, #6, #7, #8, and #9 are sub-regions in which the mapping segment MAP _ SEG is cached.

In some embodiments, memory controller 120 may calculate a hit count for each of the mapping segments MAP _ SEG cached in the mapping CACHE region MAP _ CACHE _ AREA during a first period of time (e.g., 500 ms). Each time any one of the mapping segments MAP _ SEG cached in the mapping CACHE region MAP _ CACHE _ AREA is referenced (hit) by a read command or a write command received from the host during the first period, the hit count of the referenced mapping segment MAP _ SEG may be increased by one.

In the example shown in fig. 8, the hit counts of six mapping segments MAP _ SEG cached in the mapping CACHE region MAP _ CACHE _ AREA are 1, 100, 5, 10, 8, and 15, respectively. Specifically, the hit count of the mapped segment cached in the sub-region #4 is 1, the hit count of the mapped segment cached in the sub-region #5 is 100, the hit count of the mapped segment cached in the sub-region #6 is 5, the hit count of the mapped segment cached in the sub-region #7 is 10, the hit count of the mapped segment cached in the sub-region #8 is 8, and the hit count of the mapped segment cached in the sub-region #9 is 15.

Fig. 9 is a diagram illustrating an example of a sub-AREA of the MAP CACHE AREA MAP _ CACHE _ AREA that may be allocated as the MAP UPDATE AREA MAP _ UPDATE _ AREA.

In some embodiments, the memory controller 120 may allocate all or some of the following as the mapping update region MAP_UPDATE_AREA: 1) sub-regions in which no mapping segment MAP_SEG is cached, or 2) sub-regions caching the N least recently used (LRU) mapping segments MAP_SEG whose hit counts during the first period are less than a threshold hit count THR_CNT, where N is a natural number. Here, the "least recently used" mapping segment is the cached mapping segment that has gone longest without being referenced (hit).

That is, a sub-region in the mapping update region MAP_UPDATE_AREA may be 1) a sub-region in which no mapping segment MAP_SEG is cached, or 2) a sub-region caching any one of the N least recently used mapping segments MAP_SEG whose hit counts during the first period are less than the threshold hit count THR_CNT.

In the example shown in fig. 9, it is assumed that the six sub-regions that cache mapping segments MAP_SEG were most recently hit in the following order, from least to most recent: 1) sub-region #4 caching the mapped segment having the hit count of 1, 2) sub-region #5 caching the mapped segment having the hit count of 100, 3) sub-region #6 caching the mapped segment having the hit count of 5, 4) sub-region #7 caching the mapped segment having the hit count of 10, 5) sub-region #8 caching the mapped segment having the hit count of 8, and 6) sub-region #9 caching the mapped segment having the hit count of 15. In addition, assume that the threshold hit count THR_CNT is 15 and N is 3.

The mapping update region MAP_UPDATE_AREA may include the four empty sub-regions in which no mapping segment MAP_SEG is cached.

In addition, a sub-AREA satisfying the above condition among six sub-AREAs in which the mapping segment MAP _ SEG is cached may be included in the mapping UPDATE AREA MAP _ UPDATE _ AREA.

A sub-AREA in which the mapped segment having the hit count of 1 is cached may be included in the MAP UPDATE AREA MAP _ UPDATE _ AREA.

Since the hit count 100 is greater than the threshold hit count 15, the sub-region in which the mapped segment with the hit count of 100 is cached cannot be included in the mapping update region MAP_UPDATE_AREA. Since a mapped segment whose hit count is greater than or equal to the threshold hit count is likely to be referenced again, it is preferable to keep that mapped segment cached.

A sub-AREA in which the mapped segment having the hit count of 5 is cached may be included in the MAP UPDATE AREA MAP _ UPDATE _ AREA.

A sub-AREA in which the mapped segment having the hit count of 10 is cached may be included in the MAP UPDATE AREA MAP _ UPDATE _ AREA.

The sub-region in which the mapped segment having the hit count of 8 is cached cannot be included in the mapping update region MAP_UPDATE_AREA. This is because, although the hit count 8 is less than the threshold hit count 15, there are already three sub-regions (sub-regions #4, #6, and #7) caching mapped segments whose hit counts are less than the threshold hit count and which were hit less recently.

Since the hit count 15 is not less than the threshold hit count 15, the sub-AREA in which the MAP segment having the hit count of 15 is cached cannot be included in the MAP UPDATE AREA MAP _ UPDATE _ AREA.

Accordingly, the mapping UPDATE region MAP _ UPDATE _ AREA may be allocated based on a total of seven sub-regions including the empty sub-regions (i.e., the sub-regions #0, #1, #2, and #3) and the sub-regions #4, #6, and #7 among the six sub-regions in which the mapping segment MAP _ SEG is cached. For example, the allocated mapping UPDATE region MAP _ UPDATE _ AREA may include all the seven sub-regions described above, or may include only some of the seven sub-regions (e.g., four sub-regions or five sub-regions).
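The selection rule illustrated in Figs. 8 and 9 may be sketched as follows, assuming the hit counts and recency order described above. The data structures and names are illustrative, not taken from the patent.

```python
THR_CNT = 15  # threshold hit count assumed in the Fig. 9 example
N = 3         # number of cold LRU segments that may be reclaimed

def eligible_subregions(subregions, lru_order):
    """subregions maps a sub-region id to its cached segment's hit count,
    or to None when the sub-region is empty; lru_order lists the cached
    sub-region ids from least to most recently hit."""
    empty = [sid for sid, hits in subregions.items() if hits is None]
    # Among cached sub-regions, keep only those below the threshold,
    # then take the N least recently used of them.
    cold = [sid for sid in lru_order if subregions[sid] < THR_CNT]
    return empty + cold[:N]

# Hit counts of Fig. 8; recency order assumed for Fig. 9.
subregions = {0: None, 1: None, 2: None, 3: None,
              4: 1, 5: 100, 6: 5, 7: 10, 8: 8, 9: 15}
lru = [4, 5, 6, 7, 8, 9]
print(sorted(eligible_subregions(subregions, lru)))  # [0, 1, 2, 3, 4, 6, 7]
```

The result matches the seven candidate sub-regions of the text: the four empty ones plus sub-regions #4, #6, and #7; sub-region #8 is squeezed out by the limit N, and #5 and #9 are protected by the threshold.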

Hereinafter, a process of loading the mapping segment MAP _ SEG to the mapping UPDATE region MAP _ UPDATE _ AREA allocated for updating the mapping table MAP _ TBL is described.

Fig. 10 is a diagram illustrating an example of a mapping section loaded to the mapping UPDATE region MAP _ UPDATE _ AREA.

In the example shown in fig. 10, it is assumed that data corresponding to the logical addresses LA 0, LA 100, LA 512, and LA 1025 have been written to the OPEN memory block OPEN _ MEM _ BLK of fig. 6. In addition, it is assumed that each MAP segment MAP _ SEG includes 512 pieces of mapping information between logical addresses and physical addresses.

In this case, a target mapping segment, which is a mapping segment including a logical address of data written to the OPEN memory block OPEN _ MEM _ BLK, may be selected as follows.

First, mapping segment MAP_SEG #0, which includes mapping information on logical addresses 0 to 511, may be selected as a target mapping segment because it covers logical addresses 0 and 100.

In addition, mapping segment MAP_SEG #1, which includes mapping information on logical addresses 512 to 1023, may also be selected as a target mapping segment because it covers logical address 512.

In addition, mapping segment MAP_SEG #2, which includes mapping information on logical addresses 1024 to 1535, may also be selected as a target mapping segment because it covers logical address 1025.

The memory controller 120 may manage the list MAP_SEG_LIST of target mapping segments so as to quickly load the above three target mapping segments into the mapping update region MAP_UPDATE_AREA.

Memory controller 120 may use any of a variety of data structures to manage the LIST of target mapped segments, MAP _ SEG _ LIST. For example, the memory controller 120 may manage the LIST MAP _ SEG _ LIST of target mapped segments in the form of an array. The memory controller 120 sequentially accesses a target mapping segment in the LIST of target mapping segments MAP _ SEG _ LIST and loads the target mapping segment into the mapping UPDATE region MAP _ UPDATE _ AREA. In this case, if the LIST MAP _ SEG _ LIST of target MAP segments is managed in the form of an array, the respective target MAP segments can be quickly accessed.

When data is written to the OPEN memory block OPEN _ MEM _ BLK, the memory controller 120 may recognize a logical address corresponding to the written data, and may add information on the MAP segment MAP _ SEG (including mapping information corresponding to the above logical address) to the LIST MAP _ SEG _ LIST of target MAP segments.

However, if information on a mapped segment (including mapping information corresponding to a logical address) has been added to the LIST MAP _ SEG _ LIST of target mapped segments, the memory controller 120 may omit the above-described addition operation.

For example, if logical address LA 0 corresponds to data written to the OPEN memory block OPEN_MEM_BLK, the memory controller 120 may add information on mapping segment MAP_SEG #0 (which includes the mapping information corresponding to logical address 0) to the list MAP_SEG_LIST of target mapping segments. Thereafter, if logical address LA 100 corresponds to data written to the OPEN memory block OPEN_MEM_BLK, the memory controller 120 may omit the addition because information on mapping segment MAP_SEG #0 (which includes the mapping information corresponding to logical address 100) has already been added to the list MAP_SEG_LIST of target mapping segments.

In addition, if logical address LA 512 corresponds to data written to the OPEN memory block OPEN_MEM_BLK, the memory controller 120 may add information on mapping segment MAP_SEG #1 (which includes the mapping information corresponding to logical address LA 512) to the list MAP_SEG_LIST of target mapping segments.

Likewise, if logical address LA 1025 corresponds to data written to the OPEN memory block OPEN_MEM_BLK, the memory controller 120 may add information on mapping segment MAP_SEG #2 (which includes the mapping information corresponding to logical address LA 1025) to the list MAP_SEG_LIST of target mapping segments.

Thereafter, when the mapping segments MAP_SEG are loaded into the mapping update region MAP_UPDATE_AREA in order to update the mapping table MAP_TBL, the memory controller 120 may retrieve mapping segment MAP_SEG #0, mapping segment MAP_SEG #1, and mapping segment MAP_SEG #2 from the list MAP_SEG_LIST of target mapping segments and may load them into the mapping update region MAP_UPDATE_AREA.
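The maintenance of the target-segment list described above may be sketched as follows, assuming 512 mapping entries per segment as in the Fig. 10 example; the helper name is hypothetical.

```python
ENTRIES_PER_SEGMENT = 512  # mapping entries per segment (Fig. 10 example)

def add_target_segment(map_seg_list, la):
    """Append the index of the segment covering logical address la,
    skipping the addition when that segment is already listed."""
    seg = la // ENTRIES_PER_SEGMENT
    if seg not in map_seg_list:
        map_seg_list.append(seg)

map_seg_list = []
for la in (0, 100, 512, 1025):   # write order of the Fig. 6 example
    add_target_segment(map_seg_list, la)
print(map_seg_list)              # segments #0, #1, and #2 are targets
```

Maintaining the list incrementally at write time, with duplicates skipped, means no scan of the open block is needed when the update is finally triggered.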

Even after the mapping UPDATE region MAP _ UPDATE _ AREA is allocated, the memory controller 120 may change the size of the mapping UPDATE region MAP _ UPDATE _ AREA. After analyzing the workload, the memory controller 120 may change the size of the MAP UPDATE AREA MAP _ UPDATE _ AREA according to the workload.

For example, the workload may be defined as a ratio of write commands received from the host to all commands during the second period (e.g., 2 seconds), and if the ratio is greater than or equal to the threshold ratio, the memory controller 120 may change the size of the MAP UPDATE region MAP _ UPDATE _ AREA by comparing a hit rate of the MAP cache region with a reference hit rate. Hereinafter, the operation will be described in detail with reference to fig. 11.

Fig. 11 is a flowchart illustrating an example of an operation of the memory system 100 of fig. 1 to change the size of the MAP UPDATE AREA MAP _ UPDATE _ AREA according to an embodiment of the present disclosure.

Referring to fig. 11, the memory controller 120 of the memory system 100 may calculate the ratio, denoted as B, of the number of write commands received from the host during the second period of time to all commands received from the host during the second period of time (S1110). For example, if 50 write commands and 30 read commands are received from the host during the second period of time, B is 50/(30+50) = 62.5%.

The memory controller 120 determines whether the ratio B calculated in step S1110 is greater than or equal to a threshold ratio (S1120). If the ratio B is less than the threshold ratio (no in S1120), the memory controller 120 does not perform an operation of changing the size of the mapping update region MAP_UPDATE_AREA. This is because a ratio B smaller than the threshold ratio means that few write commands are received; in this case, the mapping table MAP_TBL changes little, so the probability that the mapping table MAP_TBL will be updated is low.

On the other hand, if the ratio B is greater than or equal to the threshold ratio (yes in S1120), the memory controller 120 may first calculate the hit rate C of the map cache region MAP_CACHE_AREA in order to change the size of the map update region MAP_UPDATE_AREA (S1130).

The hit rate C of the map cache region MAP_CACHE_AREA may be calculated as the ratio of the number of pieces of mapping information that hit (i.e., are successfully retrieved from) the map cache region MAP_CACHE_AREA during a third period (e.g., 1 second) to the total number of times mapping information is retrieved from the map cache region MAP_CACHE_AREA. For example, if 100 pieces of mapping information are retrieved during the third period, of which 40 hit and 60 miss, the hit rate C is calculated as 40/100 = 40%.
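Accumulating the hit rate C over the third period can be sketched with a simple counter; the class and method names are illustrative assumptions, not from the specification:

```python
class MapCacheStats:
    """Counts lookups into the map cache region over a measurement window."""

    def __init__(self):
        self.lookups = 0
        self.hits = 0

    def record(self, hit):
        """Record one retrieval of mapping information and whether it hit."""
        self.lookups += 1
        if hit:
            self.hits += 1

    def hit_rate(self):
        """Hit rate C, or 0.0 if no lookups were made in the window."""
        return self.hits / self.lookups if self.lookups else 0.0

stats = MapCacheStats()
for _ in range(40):
    stats.record(True)   # 40 pieces of mapping information hit
for _ in range(60):
    stats.record(False)  # 60 pieces miss
assert stats.hit_rate() == 0.4
```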

The memory controller 120 may compare the value C calculated in step S1130 with the reference hit rate, thereby changing the size of the map update region MAP_UPDATE_AREA (S1140). A specific example in which the memory controller 120 changes the size of the map update region MAP_UPDATE_AREA is described in detail below with reference to fig. 13.

The above-described reference hit rate may be fixed or may be dynamically changed according to the size of the map update region MAP_UPDATE_AREA.

Fig. 12 is a diagram illustrating an example of the memory system 100 of fig. 1 changing a reference hit rate according to an embodiment of the present disclosure.

Referring to fig. 12, it is assumed that the total size of the map cache region MAP_CACHE_AREA is 100 and that the reference hit rate is set to 20% before any portion of the map cache region is allocated as the map update region MAP_UPDATE_AREA.

If a portion of size 20 in the map cache region MAP_CACHE_AREA of size 100 is allocated as the map update region, the reference hit rate may be changed from 20% to 25%. This is because the cache performance of the map cache region MAP_CACHE_AREA when the size used for caching is 100 and the hit rate is 20% is the same as its cache performance when the size used for caching is 80 (= 100 - 20) and the hit rate is 25%.

That is, when the size of the map update region MAP_UPDATE_AREA increases, the size available for caching in the map cache region MAP_CACHE_AREA decreases, and thus the hit rate needs to increase for the map cache region MAP_CACHE_AREA to ensure the same cache performance. Therefore, the reference hit rate increases when the size of the map update region MAP_UPDATE_AREA increases, and decreases when that size decreases. In other words, the reference hit rate may be changed in proportion to the size of the map update region MAP_UPDATE_AREA.
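Under this proportional rule, keeping the product of cache size and hit rate constant yields the adjusted reference hit rate directly. A minimal sketch, with illustrative names (rates expressed in percent):

```python
def reference_hit_rate(base_rate_pct, total_size, update_area_size):
    """Scale the base reference hit rate so that (cache size) x (hit rate)
    stays constant as part of the map cache region is reassigned."""
    cache_size = total_size - update_area_size
    return base_rate_pct * total_size / cache_size

# fig. 12: a 20% base rate over a cache of 100 becomes 25% once a
# portion of size 20 is allocated as the map update region
assert reference_hit_rate(20, 100, 20) == 25.0
```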

Fig. 13 is a diagram illustrating an example in which the memory system 100 of fig. 1 changes the size of the map update region MAP_UPDATE_AREA by comparing the hit rate of the map cache region MAP_CACHE_AREA with a reference hit rate according to an embodiment of the present disclosure.

In fig. 13, it is assumed that the total size of the map cache region MAP_CACHE_AREA is 100, of which 20 is allocated as the map update region MAP_UPDATE_AREA, and that the reference hit rate is 25%.

In the example shown at the top of fig. 13, it is assumed that the hit rate of the map cache region MAP_CACHE_AREA is 30%, which is greater than the reference hit rate of 25%. In this case, even if more of the map cache region MAP_CACHE_AREA is allocated as the map update region MAP_UPDATE_AREA, the cache performance can still be ensured to reach the reference hit rate. Accordingly, the memory controller 120 of the memory system 100 may increase the size of the map update region MAP_UPDATE_AREA from 20 to 25.

In the example shown at the bottom of fig. 13, it is assumed that the hit rate of the map cache region MAP_CACHE_AREA is 20%, which is less than the reference hit rate of 25%. In this case, more of the region must be used for caching in order to raise the hit rate of the map cache region MAP_CACHE_AREA toward the reference hit rate. Accordingly, the memory controller 120 of the memory system 100 may reduce the size of the map update region MAP_UPDATE_AREA from 20 to 15.
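The two cases of fig. 13 can be sketched as a single adjustment step. The step size of 5 matches the fig. 13 example but is only illustrative; the specification does not fix a step size:

```python
def adjust_update_area_size(update_size, hit_rate_pct, ref_rate_pct, step=5):
    """Grow the map update region while the cache beats the reference hit
    rate; shrink it when the cache falls below the reference."""
    if hit_rate_pct > ref_rate_pct:
        return update_size + step
    if hit_rate_pct < ref_rate_pct:
        return max(0, update_size - step)  # never shrink below zero
    return update_size

# fig. 13 top: hit rate 30% > 25%, so 20 grows to 25
assert adjust_update_area_size(20, 30, 25) == 25
# fig. 13 bottom: hit rate 20% < 25%, so 20 shrinks to 15
assert adjust_update_area_size(20, 20, 25) == 15
```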

As described above, by changing the size of the map update region MAP_UPDATE_AREA so as to satisfy the reference hit rate of the map cache region MAP_CACHE_AREA, the update performance of the mapping table MAP_TBL may be optimized while ensuring that the cache performance of the mapping table MAP_TBL is maintained at a set level or higher.

Fig. 14 is a flowchart illustrating a method of operation of the memory system 100 of fig. 1 in accordance with an embodiment of the present disclosure.

Referring to fig. 14, the operating method of the memory system 100 may include allocating a portion of the map cache region MAP_CACHE_AREA, which caches the plurality of mapping segments MAP_SEG in the mapping table MAP_TBL, as the map update region MAP_UPDATE_AREA for updating the mapping table MAP_TBL (S1410). This step is performed before updating the mapping table MAP_TBL, which includes mapping information between logical addresses and physical addresses.

In this case, if the number of writable pages in the open memory block OPEN_MEM_BLK that processes write commands received from the host is less than or equal to a threshold number of pages, a portion of the map cache region MAP_CACHE_AREA may be allocated as the map update region MAP_UPDATE_AREA.

In addition, the operating method of the memory system 100 may include loading a subset of the mapping segments MAP_SEG to the map update region MAP_UPDATE_AREA (S1420).

In this case, a sub-region allocated to the map update region MAP_UPDATE_AREA may be, for example, 1) a sub-region in which no mapping segment MAP_SEG is cached, or 2) a sub-region caching any one of the N least recently used mapping segments MAP_SEG whose hit count per first period is less than a threshold hit count, where N is a natural number.
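The two selection criteria above can be sketched as follows. The data layout (dicts with `segment`, `last_used`, and `hits` fields) and the preference for empty sub-regions first are illustrative assumptions; the specification only names the two candidate kinds:

```python
def pick_update_subregions(subregions, needed, n_lru, hit_threshold):
    """Choose sub-regions of the map cache region to hand to the map
    update region.

    Each sub-region is a dict with keys 'segment' (None when nothing is
    cached), 'last_used' (monotonic tick), and 'hits' (per first period).
    Empty sub-regions are taken first; after that, any of the n_lru least
    recently used cached sub-regions whose hit count is below the threshold.
    """
    empty = [s for s in subregions if s['segment'] is None]
    chosen = empty[:needed]
    if len(chosen) < needed:
        # Sort cached sub-regions from least to most recently used.
        cached = sorted((s for s in subregions if s['segment'] is not None),
                        key=lambda s: s['last_used'])
        cold = [s for s in cached[:n_lru] if s['hits'] < hit_threshold]
        chosen += cold[:needed - len(chosen)]
    return chosen

regions = [
    {'segment': 'SEG_A', 'last_used': 5, 'hits': 10},
    {'segment': None,    'last_used': 0, 'hits': 0},
    {'segment': 'SEG_B', 'last_used': 1, 'hits': 1},
    {'segment': 'SEG_C', 'last_used': 2, 'hits': 9},
]
picked = pick_update_subregions(regions, needed=2, n_lru=2, hit_threshold=3)
assert [s['segment'] for s in picked] == [None, 'SEG_B']
```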

In addition, when the ratio of write commands received from the host to all commands during the second period is greater than or equal to the threshold ratio, the size of the map update region MAP_UPDATE_AREA may be changed based on the result of comparing the hit rate of the map cache region MAP_CACHE_AREA with the reference hit rate.

The above-described operations of the memory controller 120 may be controlled by the control circuit 123 and may be performed by the processor 124 running (or driving) firmware in which the various operations of the memory controller 120 are programmed.

Fig. 15 is a diagram showing a configuration of a computing system 1500 according to an embodiment of the present disclosure.

Referring to fig. 15, the computing system 1500 may include: the memory system 100, electrically connected to a system bus 1560; a Central Processing Unit (CPU) 1510 configured to control the overall operation of the computing system 1500; a Random Access Memory (RAM) 1520 configured to store data and information related to the operation of the computing system 1500; a user interface/user experience (UI/UX) module 1530 configured to provide a user environment to a user; a communication module 1540 configured to communicate with an external device in a wired and/or wireless manner; and a power management module 1550 configured to manage power used by the computing system 1500.

The computing system 1500 may be a Personal Computer (PC) or may include a mobile terminal such as a smartphone, tablet, or any of a variety of other electronic devices.

The computing system 1500 may further include a battery for supplying operating voltage, an application chipset, a graphics-related module, a camera image processor, and Dynamic Random Access Memory (DRAM). Of course, as will be appreciated by those skilled in the art, computing system 1500 may include other elements.

The memory system 100 may include a device configured to store data in a magnetic disk, such as a Hard Disk Drive (HDD), and/or a device configured to store data in non-volatile memory, such as a Solid State Drive (SSD), a Universal Flash Storage (UFS) device, or an embedded MMC (eMMC) device. The non-volatile memory may include Read Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), flash memory, Phase-change RAM (PRAM), Magnetic RAM (MRAM), Resistive RAM (RRAM), Ferroelectric RAM (FRAM), and the like. In addition, the memory system 100 may be implemented as any of various types of memory devices installed inside any of various electronic devices.

According to the embodiments of the present disclosure described above, the operation delay time of the memory system can be minimized. In addition, according to an embodiment of the present disclosure, overhead generated in the process of calling a specific function may be minimized. Although various embodiments of the present disclosure have been illustrated and described, it will be appreciated by those skilled in the art that various modifications, additions and substitutions are possible, without departing from the scope and spirit of the disclosure as set forth in the accompanying claims. That is, this disclosure covers all modifications and variations of any disclosed embodiment falling within the scope of the appended claims.
