Memory system, method of operating memory system, and data processing system

Document No.: 1003493  Publication date: 2020-10-23

Reading note: This technology, "Memory system, method of operating memory system, and data processing system," was designed and created by 姜寭美 on 2019-12-26. Abstract: The present disclosure relates to memory systems, methods of operating memory systems, and data processing systems. The memory system includes: a memory device adapted to store mapping information; and a controller adapted to store a portion of the mapping information in a mapping cache and to access the memory device based on the mapping information stored in the mapping cache or based on a physical address selectively provided with an access request from the host, wherein the mapping cache includes a write mapping cache adapted to store mapping information corresponding to a write command and a read mapping cache adapted to store mapping information corresponding to a read command, and wherein the controller provides the mapping information output from the read mapping cache to the host.

1. A memory system, comprising:

a memory device adapted to store mapping information; and

a controller adapted to store a portion of the mapping information in a mapping cache and to access the memory device based on the mapping information stored in the mapping cache or based on a physical address selectively provided with an access request from a host,

wherein the mapping cache comprises a write mapping cache adapted to store mapping information corresponding to write commands and a read mapping cache adapted to store mapping information corresponding to read commands, and

wherein the controller provides the mapping information output from the read mapping cache to the host.

2. The memory system of claim 1, wherein the write mapping cache and the read mapping cache store mapping information based on a Least Recently Used (LRU) scheme, and

wherein the controller moves first mapping information stored in the write mapping cache to the read mapping cache when the first mapping information is accessed in response to the read command, and moves second mapping information stored in the read mapping cache to the write mapping cache when the second mapping information is accessed in response to the write command.

3. The memory system of claim 1, wherein, when there is no space in the read mapping cache to store target mapping information, the controller outputs the least recently accessed mapping information among the mapping information stored in the read mapping cache, stores the target mapping information in the read mapping cache as the most recently used mapping information, and provides the outputted mapping information to the host.

4. The memory system of claim 1, wherein, when there is no space in the write mapping cache to store target mapping information, the controller outputs the least recently accessed mapping information among the mapping information stored in the write mapping cache, stores the target mapping information in the write mapping cache as the most recently used mapping information, and deletes the outputted mapping information.

5. The memory system of claim 1, wherein, when target mapping information corresponding to the read command is stored in the read mapping cache, the controller stores the target mapping information in the read mapping cache as the most recently used mapping information.

6. The memory system of claim 1, wherein, when target mapping information corresponding to the write command is stored in the write mapping cache, the controller stores the target mapping information in the write mapping cache as the most recently used mapping information.

7. The memory system of claim 1, wherein, when there is no space in the read mapping cache to store target mapping information and the target mapping information corresponding to the read command is stored in the write mapping cache, the controller outputs the least recently used mapping information among the mapping information stored in the read mapping cache, stores the target mapping information moved from the write mapping cache in the read mapping cache as the most recently used mapping information, and provides the outputted mapping information to the host.

8. The memory system of claim 1, wherein, when there is no space in the write mapping cache to store target mapping information and the target mapping information corresponding to the write command is stored in the read mapping cache, the controller outputs the least recently used mapping information among the mapping information stored in the write mapping cache, stores the target mapping information moved from the read mapping cache in the write mapping cache as the most recently used mapping information, and deletes the outputted mapping information.

9. The memory system of claim 1, wherein the write mapping cache and the read mapping cache store mapping information based on a Least Frequently Used (LFU) scheme, and

wherein the controller moves first mapping information stored in the write mapping cache to the read mapping cache when the first mapping information is accessed in response to the read command, and moves second mapping information stored in the read mapping cache to the write mapping cache when the second mapping information is accessed in response to the write command.
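The dual-cache policy of claims 1 through 9 can be sketched in Python. This is a minimal illustration under assumed data shapes, not the patented implementation:

```python
from collections import OrderedDict

class DualMapCache:
    """Illustrative sketch of the write/read mapping-cache pair in
    claims 1-9 (not the patented implementation): each cache is
    LRU-ordered, entries migrate between the caches on access, and
    only read-cache evictions are offered to the host."""

    def __init__(self, capacity):
        self.read_cache = OrderedDict()    # logical -> physical address
        self.write_cache = OrderedDict()
        self.capacity = capacity
        self.sent_to_host = []             # evicted read-cache entries

    def _put(self, cache, lba, ppa):
        cache[lba] = ppa
        cache.move_to_end(lba)             # mark as most recently used
        if len(cache) > self.capacity:
            victim = cache.popitem(last=False)    # evict least recently used
            if cache is self.read_cache:
                self.sent_to_host.append(victim)  # claim 3: provide to host
            # claim 4: a write-cache victim is simply deleted

    def access(self, lba, ppa, is_read):
        target = self.read_cache if is_read else self.write_cache
        other = self.write_cache if is_read else self.read_cache
        if lba in other:                   # claims 2, 7, 8: migrate
            migrated = other.pop(lba)
            if is_read:
                ppa = migrated             # a read does not change the mapping
        elif lba in target and is_read:
            ppa = target[lba]              # claim 5: refresh as most recent
        self._put(target, lba, ppa)

# writes fill the write mapping cache; a later read migrates the entry
cache = DualMapCache(capacity=2)
cache.access(1, 100, is_read=False)
cache.access(2, 200, is_read=False)
cache.access(3, 300, is_read=False)        # evicts (1, 100): deleted silently
cache.access(2, 0, is_read=True)           # hit in write cache -> migrates
cache.access(4, 400, is_read=True)
cache.access(5, 500, is_read=True)         # evicts (2, 200) toward the host
```

Only read-cache victims are forwarded because, per claims 3 and 4, evictions from the write mapping cache are deleted rather than provided to the host.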

10. A method for operating a memory system, the method comprising:

storing mapping information in a memory device;

storing a portion of the mapping information in a mapping cache;

accessing the memory device based on the mapping information stored in the mapping cache or based on a physical address selectively provided with an access request from a host;

storing mapping information corresponding to a write operation in a write mapping cache when the access request is for the write operation and storing mapping information corresponding to a read operation in a read mapping cache when the access request is for the read operation; and

providing mapping information output from the read mapping cache to the host.

11. The method of claim 10, wherein the write mapping cache and the read mapping cache operate based on a Least Recently Used (LRU) scheme, the method further comprising:

moving first mapping information stored in the write mapping cache to the read mapping cache when the first mapping information is accessed in response to a read command; and

moving second mapping information stored in the read mapping cache to the write mapping cache when the second mapping information is accessed in response to a write command.

12. The method of claim 10, wherein, when there is no space in the read mapping cache to store target mapping information, the method further comprises:

outputting mapping information that is least recently used among the mapping information stored in the read mapping cache;

storing the target mapping information in the read mapping cache as most recently used mapping information; and

providing the outputted mapping information to the host.

13. The method of claim 10, wherein, when there is no space in the write mapping cache to store target mapping information, the method further comprises:

outputting mapping information that is least recently used among the mapping information stored in the write mapping cache;

storing the target mapping information in the write mapping cache as most recently used mapping information; and

deleting the outputted mapping information.

14. The method of claim 10, further comprising:

when target mapping information corresponding to a read command is stored in the read mapping cache,

storing the target mapping information in the read mapping cache as the most recently used mapping information.

15. The method of claim 10, further comprising:

when target mapping information corresponding to a write command is stored in the write mapping cache,

storing the target mapping information in the write mapping cache as the most recently used mapping information.

16. The method of claim 10, further comprising:

when there is no space in the read mapping cache to store target mapping information and the target mapping information corresponding to a read command is stored in the write mapping cache,

outputting mapping information that is least recently used among the mapping information stored in the read mapping cache;

storing the target mapping information moved from the write mapping cache in the read mapping cache as most recently used mapping information; and

providing the outputted mapping information to the host.

17. The method of claim 10, further comprising:

when there is no space in the write mapping cache to store target mapping information and the target mapping information corresponding to a write command is stored in the read mapping cache,

outputting mapping information that is least recently used among the mapping information stored in the write mapping cache;

storing the target mapping information moved from the read mapping cache in the write mapping cache as most recently used mapping information; and

deleting the outputted mapping information.

18. The method of claim 10, wherein the write mapping cache and the read mapping cache operate based on a Least Frequently Used (LFU) scheme, the method further comprising:

moving first mapping information stored in the write mapping cache to the read mapping cache when the first mapping information is accessed in response to a read command; and

moving second mapping information stored in the read mapping cache to the write mapping cache when the second mapping information is accessed in response to a write command.

19. A data processing system comprising:

a memory system adapted to store mapping information and to access user data based on the mapping information; and

a host adapted to receive the mapping information from the memory system, store the received mapping information in a host memory, and provide an access request to the memory system based on the mapping information,

wherein the memory system comprises:

a mapping cache; and

a controller adapted to store a portion of the mapping information in the mapping cache,

wherein the mapping cache comprises a write mapping cache adapted to store mapping information corresponding to write commands and a read mapping cache adapted to store mapping information corresponding to read commands, and

wherein the controller provides the mapping information output from the read mapping cache to the host.

20. The data processing system of claim 19, wherein the write mapping cache and the read mapping cache store mapping information based on a Least Recently Used (LRU) scheme, and

wherein the controller moves first mapping information stored in the write mapping cache to the read mapping cache when the first mapping information is accessed in response to the read command, and moves second mapping information stored in the read mapping cache to the write mapping cache when the second mapping information is accessed in response to the write command.

Technical Field

Embodiments of the present disclosure relate to memory systems and methods of operating memory systems, and more particularly, to an apparatus and method for providing mapping information from a memory system included in a data processing system to a host or computing device.

Background

Computer environment paradigms have shifted to ubiquitous computing, which enables computing systems to be used anytime and anywhere. As a result, the use of portable electronic devices such as mobile phones, digital cameras, and laptop computers has increased rapidly. These portable electronic devices each use a memory system having one or more memory devices to store data. The memory system may be used as a primary memory system or a secondary memory system for the portable electronic device.

Since memory systems have no mechanical driving parts, they provide advantages such as excellent stability and durability, high information access speed, and low power consumption. Examples of memory systems having such advantages include Universal Serial Bus (USB) memory devices, memory cards with various interfaces, Solid State Drives (SSDs), and the like.

Disclosure of Invention

Embodiments of the present disclosure are directed to a memory system capable of efficiently processing data by using resources of a host, and a method of operating the memory system.

According to one embodiment of the present invention, a memory system includes: a memory device adapted to store mapping information; and a controller adapted to store a portion of the mapping information in a mapping cache and to access the memory device based on the mapping information stored in the mapping cache or based on a physical address selectively provided with an access request from the host, wherein the mapping cache includes a write mapping cache adapted to store mapping information corresponding to a write command and a read mapping cache adapted to store mapping information corresponding to a read command, and wherein the controller provides the mapping information output from the read mapping cache to the host.

According to another embodiment of the present invention, a method for operating a memory system includes: storing mapping information in a memory device; storing a portion of the mapping information in a mapping cache; accessing the memory device based on the mapping information stored in the mapping cache or based on a physical address selectively provided with an access request from a host; storing mapping information corresponding to a write operation in a write mapping cache when the access request is for the write operation, and storing mapping information corresponding to a read operation in a read mapping cache when the access request is for the read operation; and providing the mapping information output from the read mapping cache to the host.

According to still another embodiment of the present invention, a data processing system includes: a memory system adapted to store mapping information and to access user data based on the mapping information; and a host adapted to receive the mapping information from the memory system, store the received mapping information in a host memory, and provide an access request to the memory system based on the mapping information, wherein the memory system includes: a mapping cache; and a controller adapted to store a portion of the mapping information in the mapping cache, wherein the mapping cache includes a write mapping cache adapted to store mapping information corresponding to a write command and a read mapping cache adapted to store mapping information corresponding to a read command, and wherein the controller provides the mapping information output from the read mapping cache to the host.
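The host-side half of this data processing system can be sketched as follows; the class name and request shape are illustrative assumptions, not taken from the disclosure:

```python
class Host:
    """Hypothetical host-side sketch: mapping entries pushed by the
    memory system are kept in host memory, and a physical address is
    attached to a request only when the host happens to hold one."""

    def __init__(self):
        self.host_map = {}                 # logical -> physical address

    def on_map_info(self, lba, ppa):
        self.host_map[lba] = ppa           # store pushed mapping information

    def build_read_request(self, lba):
        # the physical address is provided only selectively, as in claim 1:
        # None means the memory system must resolve the mapping itself
        return {"lba": lba, "ppa": self.host_map.get(lba)}

host = Host()
host.on_map_info(7, 0x1A0)                 # mapping info output by the device
request_hit = host.build_read_request(7)   # carries the physical address
request_miss = host.build_read_request(8)  # device must resolve this one
```

When the request carries a physical address, the memory system can skip its own map lookup for that access, which is the efficiency gain this disclosure targets.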

Drawings

FIG. 1 illustrates a data processing system including a memory system according to one embodiment of the present disclosure.

FIG. 2 illustrates a data processing system according to one embodiment of the present disclosure.

Fig. 3 and 4 illustrate one example in which a host stores metadata in host memory according to one embodiment of the present disclosure.

FIG. 5 illustrates a first example of a transaction between a host and a memory system in a data processing system according to one embodiment of this disclosure.

FIG. 6 is a flowchart describing a first operation of the host and memory system according to one embodiment of the present disclosure.

FIG. 7 is a flow chart describing the operation of a memory system according to one embodiment of the present disclosure.

FIG. 8A illustrates the structure of a map cache according to an embodiment of the present disclosure.

FIG. 8B is a flowchart describing operations for processing mapping information using a mapping cache according to one embodiment of the present disclosure.

Fig. 9 to 14B illustrate mapping information processing operations according to an embodiment of the present disclosure.

FIG. 15 illustrates a second example of a transaction between a host and a memory system in a data processing system according to one embodiment of the present disclosure.

FIG. 16 illustrates a second operation of the host and memory system according to one embodiment of the present disclosure.

FIG. 17 illustrates a third operation of the host and memory system according to one embodiment of the present disclosure.

FIG. 18 illustrates a fourth operation of the host and memory system according to one embodiment of the present disclosure.

Detailed Description

Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. This invention may, however, be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. Throughout this disclosure, like reference numerals refer to like parts throughout the various figures and embodiments of the present invention.

FIG. 1 illustrates a data processing system 100 according to one embodiment of the present disclosure.

Referring to FIG. 1, data processing system 100 may include a host 102 and a memory system 110.

The host 102 may include any of a variety of portable electronic devices, or any of a variety of non-portable electronic devices. Portable electronic devices may include mobile phones, MP3 players, laptop computers, and the like; non-portable electronic devices may include desktop computers, game consoles, televisions (TVs), projectors, and the like.

Host 102 may include at least one Operating System (OS) that may manage and control the overall functions and operations of the host 102, and provide interaction between the host 102 and a user of the data processing system 100 or the memory system 110. The OS may support functions and operations corresponding to the user's purpose and manner of use. For example, the OS may be divided into a general-purpose OS and a mobile OS according to the mobility of the host 102. The general-purpose OS may be divided into a personal OS and an enterprise OS according to the user's environment.

The memory system 110 may operate to store data for the host 102 in response to requests by the host 102. The memory system 110 may include any of a Solid State Drive (SSD), a multimedia card (MMC), a Secure Digital (SD) card, a Universal Serial Bus (USB) device, a Universal Flash Storage (UFS) device, a Compact Flash (CF) card, a Smart Media Card (SMC), a Personal Computer Memory Card International Association (PCMCIA) card, a memory stick, and the like. The MMC may include an embedded MMC (eMMC), a reduced-size MMC (RS-MMC), a micro MMC, and the like. The SD card may include a mini SD card, a micro SD card, and the like.

The memory system 110 may be implemented with various types of storage devices. Such storage devices may include, but are not limited to: volatile memory devices such as Dynamic Random Access Memory (DRAM) and Static RAM (SRAM); and non-volatile memory devices such as Read Only Memory (ROM), Mask ROM (MROM), Programmable ROM (PROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), Ferroelectric RAM (FRAM), Phase-change RAM (PRAM), Magnetoresistive RAM (MRAM), Resistive RAM (RRAM or ReRAM), flash memory, and the like. The flash memory may have a 3-dimensional (3D) stack structure.

Memory system 110 may include a controller 130 and a memory device 150.

The controller 130 and the memory device 150 may be integrated into a single semiconductor device. For example, the controller 130 and the memory device 150 may be integrated into one semiconductor device to constitute a Solid State Drive (SSD). When the memory system 110 is used as an SSD, the operation speed of the host 102 connected to the memory system 110 can be improved. In addition, the controller 130 and the memory device 150 may be integrated into one semiconductor device to constitute a memory card. For example, the controller 130 and the memory device 150 may constitute a memory card such as: personal Computer Memory Card International Association (PCMCIA) card, Compact Flash (CF) card, Smart Media (SM) card, memory stick, multimedia card (MMC), such as reduced-size MMC (RS-MMC) or micro-MMC, Secure Digital (SD) card including mini SD card, micro SD card or SDHC card, or Universal Flash Storage (UFS) device.

The memory device 150 may be a non-volatile memory device that can retain data stored therein even when power is not supplied. The memory device 150 may store data provided by the host 102 in a write operation and provide data stored therein to the host 102 in a read operation. The memory device 150 may include a plurality of memory blocks, each of the plurality of memory blocks may include a plurality of pages, and each of the pages may include a plurality of memory cells coupled to a word line. In one embodiment, memory device 150 may be a flash memory. The flash memory may have a 3-dimensional (3D) stack structure.

The controller 130 may control the memory device 150 in response to a request from the host 102. For example, the controller 130 may provide data read from the memory device 150 to the host 102 and store the data provided by the host 102 in the memory device 150. For this operation, the controller 130 may control read, program (or write), and erase operations of the memory device 150.

Controller 130 may include a host interface (I/F) 132, a processor 134, a memory I/F 142, and a memory 144.

The host I/F 132 may be configured to process commands and data for the host 102 and may communicate with the host 102 using one or more of a variety of interface protocols, such as: Universal Serial Bus (USB), MultiMedia Card (MMC), Peripheral Component Interconnect Express (PCI-e or PCIe), Small Computer System Interface (SCSI), Serial Attached SCSI (SAS), Serial Advanced Technology Attachment (SATA), Parallel Advanced Technology Attachment (PATA), Enhanced Small Disk Interface (ESDI), Integrated Drive Electronics (IDE), and the like.

The host I/F 132 may be driven by firmware called a Host Interface Layer (HIL) to exchange data with the host 102.

The memory I/F 142 may serve as a memory/storage interface for interfacing the controller 130 and the memory device 150 such that the controller 130 controls the memory device 150 in response to requests from the host 102. When the memory device 150 is a flash memory (e.g., a NAND flash memory), the memory I/F 142 may, under the control of the processor 134, generate control signals for the memory device 150 and process data to be provided to the memory device 150. The memory I/F 142 may operate as an interface (e.g., a NAND flash interface) for processing commands and data between the controller 130 and the memory device 150. In particular, the memory I/F 142 may support data transfers between the controller 130 and the memory device 150.

The memory I/F 142 may be driven by firmware called a Flash Interface Layer (FIL) to exchange data with the memory device 150.

Processor 134 may control the overall operation of memory system 110. Processor 134 may drive firmware to control the overall operation of memory system 110. The firmware may be referred to as a Flash Translation Layer (FTL). Also, the processor 134 may be implemented using a microprocessor or a Central Processing Unit (CPU).

Also, the controller 130 may perform background operations on the memory device 150 by using the processor 134. For example, background operations performed on the memory device 150 may include Garbage Collection (GC) operations, Wear Leveling (WL) operations, map refresh operations, or bad block management operations.

The memory 144 may serve as a working memory for the memory system 110 and the controller 130, and store data for driving the memory system 110 and the controller 130.

The memory 144 may be embodied by volatile memory. For example, the memory 144 may be embodied by Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The memory 144 may be provided within or external to the controller 130. Fig. 1 illustrates the memory 144 disposed within the controller 130. In another embodiment, the memory 144 may be embodied by an external volatile memory having a memory interface that transfers data between the memory 144 and the controller 130.

As described above, the memory 144 may store data required to perform a data write/read operation between the host 102 and the memory device 150, and data when the data write/read operation is performed. To store such data, memory 144 may include a program memory, a data memory, a write buffer/cache, a read buffer/cache, a data buffer/cache, a map buffer/cache, and so forth.

FIG. 1 shows the memory 144 including a mapping cache 146. The mapping cache 146 may store mapping information, which may be used to map logical addresses to physical addresses; the mapping information will be described in detail with reference to FIG. 3. The mapping cache 146 may store mapping information under the control of the processor 134. Since the mapping cache 146 has limited storage space, it may store only some of the mapping information stored in the memory device 150. For example, the mapping cache 146 may store mapping information for recently processed data. In another example, the mapping cache 146 may store mapping information for frequently processed data. The mapping cache 146 may store mapping information according to a first-in-first-out (FIFO) scheme.
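A FIFO scheme like the one just described might look like this minimal sketch of a bounded cache holding part of the logical-to-physical map; the details are assumed for illustration:

```python
from collections import OrderedDict

def fifo_insert(cache, capacity, lba, ppa):
    """Insert a mapping entry, dropping the oldest entry when full.
    Unlike LRU, a hit does not reorder entries: eviction order is
    purely first-in-first-out."""
    if lba not in cache and len(cache) >= capacity:
        cache.popitem(last=False)          # drop the first-in entry
    cache[lba] = ppa

cache = OrderedDict()
for lba, ppa in [(0, 10), (1, 11), (2, 12), (0, 13), (3, 14)]:
    fifo_insert(cache, 3, lba, ppa)        # capacity of 3 entries
```

Note that entry 0 is evicted even though it was updated most recently; that is the FIFO property, and it is what distinguishes this scheme from the LRU and LFU schemes claimed for the write and read mapping caches.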

Although not shown in the drawings, the controller 130 may further include an Error Correction Code (ECC) unit and a Power Management Unit (PMU).

The ECC unit may process data read from the memory device 150 or data to be programmed in the memory device 150 to detect and correct a faulty bit of the data read from the memory device 150, and may include an ECC encoder and an ECC decoder.

The ECC encoder may perform an ECC encoding operation on data to be programmed in the memory device 150 to generate parity bits to be added to the data. The data and parity bits may be stored in the memory device 150. When reading the data stored in the memory device 150, the ECC decoder may detect and correct any failed bits included in the data read from the memory device 150.

The ECC unit may perform error correction by means of coded modulation using one or more of the following: Low Density Parity Check (LDPC) codes, Bose-Chaudhuri-Hocquenghem (BCH) codes, turbo codes, Reed-Solomon codes, convolutional codes, Recursive Systematic Codes (RSC), Trellis Coded Modulation (TCM), Block Coded Modulation (BCM), and the like. However, the ECC unit is not limited to any particular structure; it may include any circuit, module, system, or device used for error correction.
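As a much simpler stand-in for the codes listed above, a Hamming(7,4) code illustrates the encode/decode cycle the ECC encoder and decoder perform: parity bits are added at programming time, and a single flipped bit can be located and corrected at read time.

```python
def hamming74_encode(d1, d2, d3, d4):
    """Add three parity bits to four data bits (Hamming(7,4))."""
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]    # codeword positions 1..7

def hamming74_decode(code):
    """Correct up to one flipped bit and return the four data bits."""
    code = list(code)
    s1 = code[0] ^ code[2] ^ code[4] ^ code[6]   # checks positions 1,3,5,7
    s2 = code[1] ^ code[2] ^ code[5] ^ code[6]   # checks positions 2,3,6,7
    s3 = code[3] ^ code[4] ^ code[5] ^ code[6]   # checks positions 4,5,6,7
    pos = s1 + 2 * s2 + 4 * s3                   # 1-based error position, 0 if none
    if pos:
        code[pos - 1] ^= 1                       # flip the faulty bit back
    return [code[2], code[4], code[5], code[6]]

codeword = hamming74_encode(1, 0, 1, 1)
corrupted = list(codeword)
corrupted[2] ^= 1                                # simulate a single bit error
recovered = hamming74_decode(corrupted)
```

Real flash controllers use the far stronger codes named above (LDPC, BCH, and so on); Hamming(7,4) corrects only one bit per 7-bit codeword but shows the same parity-generation and syndrome-check structure.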

The PMU may provide and manage power for the controller 130.

FIG. 2 illustrates data processing system 100 of FIG. 1 according to one embodiment of the present disclosure.

In particular, the controller 130 in the memory system 110 of FIG. 1 is described in more detail in FIG. 2.

Referring to fig. 2, the controller 130 may include a host interface unit 132, a Flash Translation Layer (FTL) unit 40, a memory interface unit 142, and a memory 144. The FTL unit 40 may be implemented in the processor 134 of fig. 1.

Although not illustrated in fig. 2, the ECC unit described in fig. 1 may be included in a Flash Translation Layer (FTL) unit 40 according to one embodiment of the present disclosure. According to another embodiment of the present disclosure, the ECC unit may be implemented as a separate module, circuit, or firmware in the controller 130.

The host interface unit 132 may exchange commands and data with the host 102. For example, the host interface unit 132 may include: a command queue 56 for sequentially storing commands, data, and the like transmitted from the host 102 and then outputting them in the order in which they were stored; a buffer manager 52 capable of classifying commands, data, and the like transferred from the command queue 56, or adjusting the processing order of commands, data, and the like; and an event queue 54 for sequentially transmitting events to the FTL unit 40 to process commands, data, etc. transmitted from the buffer manager 52.

The command and data transmitted from the host 102 may be a plurality of commands and data having the same characteristics and being continuously transmitted, or may be a plurality of commands and data having different characteristics and being transmitted in a mixed order. For example, a plurality of commands for reading data may be transmitted, or a read command and a program command may be alternately transmitted.

The host interface unit 132 may first sequentially store commands, data, and the like transmitted from the host 102 in the command queue 56, and then the host interface unit 132 may predict what operation the controller 130 will perform based on characteristics of the commands, data, and the like transmitted from the host 102, and may determine a processing order or priority of the commands, data, and the like based on the prediction.

Also, depending on the characteristics of the commands, data, etc., transmitted from the host 102, the buffer manager 52 in the host interface unit 132 may determine whether the commands, data, etc., are to be stored in the memory 144 or whether the commands, data, etc., are to be transmitted to the Flash Translation Layer (FTL) unit 40. The event queue 54 may receive events from the buffer manager 52 that need to be internally executed and processed by the memory system or controller 130 according to commands, data, etc., and then transmit the events to the FTL unit 40 in the order received.
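The queue flow above might be sketched as follows; the classification rule used here (read and write commands become FTL events, everything else stays with the buffer manager) is an invented example, not the patent's actual rule:

```python
from collections import deque

command_queue = deque(["READ 0x10", "WRITE 0x20", "STATUS"])
event_queue = deque()                      # events handed to the FTL unit 40
buffered = []                              # items kept by the buffer manager

while command_queue:
    cmd = command_queue.popleft()          # FIFO: stored order is preserved
    if cmd.startswith(("READ", "WRITE")):
        event_queue.append(cmd)            # needs internal FTL processing
    else:
        buffered.append(cmd)               # e.g. held in memory 144
```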

According to one embodiment of the present disclosure, the FTL unit 40 may include: a Host Request Manager (HRM) 46 for managing events received from the event queue 54; a Mapping Manager (MM) 44 for managing mapping information; a state manager 42 for performing garbage collection operations or wear leveling operations; and a block manager 48 for executing commands on blocks in the memory device 150.

For example, the host request manager 46 may process read commands and program commands received from the host interface unit 132 and requests according to events by using the mapping manager 44 and the block manager 48. Host request manager 46 may send a query request to mapping manager 44 to detect a physical address corresponding to the requested logical address and process the read command by transmitting a flash read command to memory interface unit 142 for the physical address. Meanwhile, the host request manager 46 may program data in a particular page of an empty block of the memory device 150 by transmitting a program request to the block manager 48, and then update mapping information for logical-to-physical address mapping by transmitting a mapping update request for the program request to the mapping manager 44.

Herein, block manager 48 may convert programming requests requested by host request manager 46, mapping manager 44, and status manager 42 into programming requests for memory device 150 to manage blocks in memory device 150. To maximize programming or write performance of memory system 110 of FIG. 1, block manager 48 may collect programming requests and transmit flash programming requests for multi-plane and one-time programming operations to memory interface unit 142. Also, various outstanding flash programming requests may be transmitted to the memory interface unit 142 to maximize parallel processing by multi-channel and multi-directional flash controllers.

Meanwhile, the block manager 48 may manage flash blocks according to the number of valid pages: it may select and erase blocks that contain no valid pages when free blocks are needed, and select the blocks containing the fewest valid pages as victims when a garbage collection operation is needed. So that the block manager 48 can obtain a sufficient number of free blocks, the state manager 42 may perform a garbage collection operation on a victim block by collecting valid data from the victim block, moving the collected valid data into an empty block, and erasing the victim block.
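The garbage-collection steps above can be sketched with invented block and page structures:

```python
def garbage_collect(blocks, free_block):
    """Illustrative sketch: pick the block with the fewest valid pages
    as victim, copy its valid data into an empty block, then erase the
    victim. Each block is a list of (data, is_valid) pages."""
    victim_idx = min(range(len(blocks)),
                     key=lambda i: sum(valid for _, valid in blocks[i]))
    for data, valid in blocks[victim_idx]:
        if valid:
            free_block.append((data, True))    # move only the valid data
    blocks[victim_idx] = []                    # erase the victim block
    return victim_idx

blocks = [[("a", True), ("b", True)], [("c", False), ("d", True)]]
free_block = []
victim = garbage_collect(blocks, free_block)   # block 1 has fewer valid pages
```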

When the block manager 48 provides information about a victim block to the state manager 42, the state manager 42 may first check all pages of the victim block to determine whether each of the pages is valid. For example, the validity of each page may be determined by identifying the logical address recorded in the spare (out-of-band (OOB)) area of the page, and then comparing the physical address of the page with the physical address mapped to that logical address, which is obtained through a lookup request to the mapping manager 44. The state manager 42 may transmit a program request to the block manager 48 for each valid page. When the program operation is complete, the mapping manager 44 may update the mapping table.
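
The page-validity check described above can be sketched as follows; the `l2p_table` dictionary and the page layout are assumptions for illustration, not the state manager's actual data structures:

```python
def is_page_valid(page, l2p_table):
    """A page is valid only if the L2P table still maps the logical address
    recorded in its spare (OOB) area back to this page's physical address."""
    lba = page["oob_lba"]
    return l2p_table.get(lba) == page["physical_address"]

# Current logical-to-physical mapping (stands in for the mapping manager).
l2p_table = {10: 0x100, 11: 0x205}

valid_page = {"oob_lba": 10, "physical_address": 0x100}  # still mapped here
stale_page = {"oob_lba": 11, "physical_address": 0x200}  # remapped elsewhere
```

A stale page arises when the host rewrote the logical address after the page was programmed: the mapping now points to the newer copy, so the old page need not be moved during garbage collection.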

The mapping manager 44 may manage the mapping table and process requests (such as query requests, update requests, etc.) generated by the host request manager 46 and the state manager 42. Mapping manager 44 may store the entire mapping table in flash memory and cache the mapping entries according to the capacity of memory 144. When a map cache miss occurs while processing query and update requests, the mapping manager 44 may transmit a read command to the memory interface unit 142 to load a mapping table stored in the memory device 150. When the number of dirty cache blocks of mapping manager 44 exceeds a predetermined threshold, mapping manager 44 may transmit a programming request to block manager 48 to create a clean cache block and store a dirty mapping table in memory device 150.
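
The caching behavior described above can be sketched as follows. This is a toy model under assumptions (a plain dictionary stands in for the mapping table stored in flash, and the threshold value is arbitrary), not the mapping manager's actual implementation:

```python
class MapCache:
    """Toy model of map-entry caching with a dirty-entry write-back threshold."""

    def __init__(self, device_map, dirty_threshold=2):
        self.device_map = dict(device_map)  # stands in for the map in flash
        self.cache = {}                     # cached entries: lba -> pba
        self.dirty = set()                  # lbas updated but not written back
        self.dirty_threshold = dirty_threshold

    def query(self, lba):
        """On a cache miss, load the entry from the device-side map."""
        if lba not in self.cache:
            self.cache[lba] = self.device_map[lba]
        return self.cache[lba]

    def update(self, lba, pba):
        """Record a new mapping; write back once too many entries are dirty."""
        self.cache[lba] = pba
        self.dirty.add(lba)
        if len(self.dirty) > self.dirty_threshold:
            self.flush()

    def flush(self):
        """Program the dirty entries to the device map, creating clean entries."""
        for lba in self.dirty:
            self.device_map[lba] = self.cache[lba]
        self.dirty.clear()

cache = MapCache({1: 0xA0}, dirty_threshold=2)
lookup = cache.query(1)    # miss: entry loaded from the device-side map
cache.update(2, 0xB0)
cache.update(3, 0xC0)
cache.update(4, 0xD0)      # third dirty entry exceeds the threshold: flush
```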

Meanwhile, when a garbage collection operation is performed, the host request manager 46 may program the latest version of data for the same logical address and issue an update request at the same time that the state manager 42 is copying a valid page. When the state manager 42 requests a mapping update before the copying of the valid page has completed normally, the mapping manager 44 may not update the mapping table. The mapping manager 44 may perform the mapping update, to ensure accuracy, only when the latest mapping table still points to the previous physical address.

The memory device 150 may include a plurality of memory blocks, which may include Single Level Cell (SLC) memory blocks that store one bit of data and/or multi-level cell (MLC) memory blocks that store multiple bits of data. An SLC memory block may include multiple pages implemented by memory cells that store one bit of data in one memory cell. SLC memory blocks may have high data processing speed and high endurance. On the other hand, an MLC memory block may include multiple pages implemented by memory cells that store multiple bits of data (e.g., two or more bits of data) in one memory cell. MLC memory blocks may have a larger data storage space than SLC memory blocks. In other words, MLC memory blocks may be highly integrated.

Memory device 150 may include not only MLC memory blocks, where each MLC memory block includes multiple pages implemented by memory cells capable of storing two bits of data in one memory cell, but also triple-level cell (TLC) memory blocks, where each TLC memory block includes multiple pages implemented by memory cells capable of storing three bits of data in one memory cell; quadruple-level cell (QLC) memory blocks, where each QLC memory block includes multiple pages implemented by memory cells capable of storing four bits of data in one memory cell; and/or higher-level cell memory blocks, where each such memory block includes multiple pages implemented by memory cells capable of storing five or more bits of data in one memory cell.
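
The capacity relationship described above can be illustrated with a short calculation; the cell count per block is a made-up figure, used only to show that block capacity scales with the number of bits stored per cell:

```python
CELLS_PER_BLOCK = 4096  # hypothetical cell count, for illustration only

def block_capacity_bits(bits_per_cell):
    """Capacity of a block, in bits, given the bits stored per memory cell."""
    return CELLS_PER_BLOCK * bits_per_cell

# SLC stores 1 bit per cell, MLC 2, TLC 3, QLC 4.
capacities = {name: block_capacity_bits(bits)
              for name, bits in [("SLC", 1), ("MLC", 2),
                                 ("TLC", 3), ("QLC", 4)]}
```

With the same number of cells, a QLC block therefore holds four times the data of an SLC block, which is the higher-integration trade-off the text describes (at the cost of speed and endurance).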

According to one embodiment of the present disclosure, the controller 130 included in the memory system 110 may check the states of a plurality of channels (or ways), particularly the states of the channels (or ways) between the controller 130 and the plurality of memory dies included in the memory device 150. Alternatively, a controller of one of a plurality of memory systems (e.g., a master memory system) may check the states of a plurality of channels (or ways) for the plurality of memory systems, particularly between the master memory system and the other memory systems (i.e., slave memory systems). In other words, it may be checked whether each of the channels (or ways) for the memory dies of the memory device 150, or for the plurality of memory systems, is in a busy state, a ready state, an active state, an idle state, a normal state, an abnormal state, or the like. Herein, according to an embodiment of the present disclosure, a channel (or way) that is in a ready state or an idle state while in a normal state may be determined as an optimal channel (or way). Specifically, in an embodiment of the present disclosure, among the plurality of channels (or ways), a channel (or way) whose available capacity is in a normal range, or whose operation level is in a normal range, may be determined as the optimal channel. Herein, the operation level of a channel (or way) may be determined based on its operation clock, power level, current/voltage level, operation timing, temperature level, and the like.
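
The channel-selection rule described above can be sketched as follows; the field names and the "normal" ranges are assumptions for illustration, since the disclosure does not fix concrete values:

```python
def select_optimal_channel(channels,
                           capacity_range=(0.2, 1.0),
                           level_range=(0.3, 0.9)):
    """Return the first channel (or way) that is ready or idle and whose
    available capacity and operation level fall within the normal ranges."""
    for ch in channels:
        state_ok = ch["state"] in ("ready", "idle")
        cap_ok = capacity_range[0] <= ch["available_capacity"] <= capacity_range[1]
        lvl_ok = level_range[0] <= ch["operation_level"] <= level_range[1]
        if state_ok and cap_ok and lvl_ok:
            return ch
    return None

channels = [
    {"id": 0, "state": "busy", "available_capacity": 0.8, "operation_level": 0.5},
    {"id": 1, "state": "idle", "available_capacity": 0.6, "operation_level": 0.4},
]
best = select_optimal_channel(channels)   # channel 0 is busy, so channel 1 wins
```

In practice the operation level would be derived from measurements such as the channel's operation clock, power, current/voltage, timing, and temperature, as the text notes; here it is collapsed into a single normalized number.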

Further, according to an embodiment of the present disclosure, as an example, it is assumed that write data corresponding to a plurality of write commands received from the host 102 is stored in a buffer/cache included in the memory 144 of the controller 130, and then in a program operation, the data stored in the buffer/cache is programmed and stored in a plurality of memory blocks included in the memory device 150. At this time, the mapping information is updated according to a program operation for programming data into the memory device 150, and the updated mapping information is stored in a plurality of memory blocks included in the memory device 150. In short, as one example, a case where a program operation is performed in response to a plurality of write commands received from the host 102 is described.

According to one embodiment of the present disclosure, the following is provided: the mapping information for data corresponding to a plurality of read commands received from the host 102 for data stored in the memory device 150 is checked, the data corresponding to the read commands is read from the memory device 150, the read data is stored in a buffer/cache included in the memory 144 of the controller 130, and the data stored in the buffer/cache is provided to the host 102, whereby a read operation corresponding to the read commands received from the host 102 is performed.

Further, according to an embodiment of the present disclosure, the following is provided: receiving a plurality of erase commands for memory blocks included in the memory device 150 from the host 102, checking the memory blocks corresponding to the erase commands, erasing data stored in the checked memory blocks, updating mapping information corresponding to the erased data, and then storing the updated mapping information in the memory blocks included in the memory device 150. In short, as an example, a case where an erase operation corresponding to a plurality of erase commands received from the host 102 is performed is considered and described.

Further, according to an embodiment of the present disclosure, the following is provided: as described above, a plurality of write commands, a plurality of read commands, and a plurality of erase commands are received from the host 102, and a plurality of program operations, a plurality of read operations, and a plurality of erase operations are performed based on the received commands.

Also, for convenience of description, a case where the controller 130 may perform a command operation in the memory system 110 according to an embodiment of the present disclosure is described as an example. However, as described above, the processor 134 included in the controller 130 may perform a command operation using the FTL. For example, according to an embodiment of the present disclosure, the controller 130 may program and store user data and metadata corresponding to a write command received from the host 102 in any of a plurality of memory blocks included in the memory device 150, read user data and metadata corresponding to a read command received from the host 102 from any of a plurality of memory blocks included in the memory device 150, and provide the read user data and metadata to the host 102, or erase user data and metadata corresponding to an erase command received from the host 102 from any of a plurality of memory blocks included in the memory device 150.

Herein, the metadata may include logical-to-physical (L2P) information and physical-to-logical (P2L) information about data stored in the memory block in a programming operation. The metadata may also include information about command data corresponding to a command received from the host 102, information about a command operation corresponding to the command, information about a memory block of the memory device 150 on which the command operation is performed, and mapping information corresponding to the command operation. In other words, the metadata may include all other information and data corresponding to commands received from the host 102 except for user data.

When the controller 130 performs a plurality of command operations corresponding to a plurality of commands received from the host 102 (e.g., when the controller 130 receives a plurality of write commands from the host 102), the controller 130 may perform a program operation corresponding to the write commands. In this case, the user data corresponding to the write command may be written and stored in a memory block of the memory device 150, for example, in an empty memory block, an open memory block, or a free memory block in which an erase operation has been performed among memory blocks of the memory device 150. Mapping information between logical addresses and physical addresses of user data stored in the memory blocks (i.e., an L2P mapping table storing logical information), and mapping information between physical addresses and logical addresses of memory blocks for storing user data (i.e., a P2L mapping table storing physical information) are written and stored in empty memory blocks, open memory blocks, or free memory blocks among the memory blocks of the memory device 150.

Herein, when the controller 130 receives a write command from the host 102, the controller 130 may write and store user data corresponding to the write command in a memory block, and store metadata in the memory block, the metadata including mapping information on the user data stored in the memory block. Specifically, the controller 130 may generate and update meta segments of the metadata (i.e., L2P segments and P2L segments among the map segments of the mapping information), and then store them in a memory block of the memory device 150. Herein, the map segments stored in the memory block of the memory device 150 may be loaded into the memory 144 included in the controller 130 in order to be updated.

When a plurality of write commands are received from the host 102, the states of a plurality of channels (or ways) for the memory device 150 may be checked, particularly the states of the channels (or ways) coupled to the plurality of memory dies included in the memory device 150, and then an optimal transmission channel (or way) and an optimal reception channel (or way) corresponding to the states of the channels (or ways) may be independently determined. According to an embodiment of the present disclosure, the user data and metadata corresponding to the write commands may be transmitted to the corresponding memory dies of the memory device 150 through the optimal transmission channel (or way) and stored by performing a program operation, and the results of the program operation performed on the corresponding memory dies of the memory device 150 may be received from those memory dies through the optimal reception channel (or way) and provided to the host 102.

Further, when the controller 130 receives a plurality of read commands from the host 102, the controller 130 may read data corresponding to the read commands from the memory device 150, store the read data in a buffer/cache included in the memory 144 of the controller 130, and provide the data stored in the buffer/cache to the host 102.

When the controller 130 receives a plurality of read commands from the host 102, the states of the channels (or ways) for the memory device 150 may be checked, particularly the states of the channels (or ways) coupled to the memory dies included in the memory device 150, and then an optimal transmission channel (or way) and an optimal reception channel (or way) corresponding to the states of the channels (or ways) may be independently determined. According to an embodiment of the present disclosure, read requests for the user data and metadata corresponding to the read commands may be transmitted to the corresponding memory dies of the memory device 150 through the optimal transmission channel (or way) to perform a read operation, and the results of the read operation performed on the corresponding memory dies of the memory device 150 (i.e., the user data and metadata corresponding to the read commands) may be received from those memory dies through the optimal reception channel (or way) and provided to the host 102.

In addition, when the controller 130 receives a plurality of erase commands from the host 102, the controller 130 may detect a memory block of the memory device 150 corresponding to the erase command and then perform an erase operation on the detected memory block.

According to an embodiment of the present disclosure, when a plurality of erase commands are received from the host 102, the states of the channels (or ways) for the memory device 150 may be checked, particularly the states of the channels (or ways) coupled to the memory dies included in the memory device 150, and then an optimal transmission channel (or way) and an optimal reception channel (or way) corresponding to the states of the channels (or ways) may be independently determined. According to an embodiment of the present disclosure, erase requests for the memory blocks corresponding to the erase commands among the memory dies of the memory device 150 may be transmitted to the corresponding memory dies through the optimal transmission channel (or way) to perform an erase operation, and the results of the erase operation performed on the corresponding memory dies may be received from those memory dies through the optimal reception channel (or way) and provided to the host 102.

In the memory system 110, when multiple commands (e.g., a plurality of write commands, a plurality of read commands, and a plurality of erase commands) are received from the host 102, particularly when a plurality of commands are received sequentially and simultaneously as described above, the states of the channels (or ways) for the memory device 150 may be checked, and an optimal transmission channel (or way) and an optimal reception channel (or way) corresponding to those states may be independently determined. The memory device 150 may then be requested, through the optimal transmission channel (or way), to perform the command operations corresponding to the plurality of commands, in particular to perform the corresponding command operations in the plurality of memory dies included in the memory device 150, and the results of the command operations may be received from the memory dies of the memory device 150 through the optimal reception channel (or way). According to embodiments of the present disclosure, the memory system 110 may provide the host 102 with responses to the commands received from the host 102 by matching the commands transmitted through the optimal transmission channel (or way) with the operation results received through the optimal reception channel (or way).

The controller 130 may check the states of a plurality of channels (or ways) for the memory device 150, particularly the channels (or ways) between the controller 130 and the plurality of memory dies included in the memory device 150, and then the controller 130 may independently determine an optimal transmission channel (or way) and an optimal reception channel (or way) for the memory device 150. The controller 130 may also check the states of a plurality of channels (or ways) for a plurality of memory systems, particularly the states of the channels (or ways) between a master memory system and the other memory systems, for example, between a master memory system and a slave memory system, and then the controller 130 may independently determine an optimal transmission channel (or way) and an optimal reception channel (or way) for the memory systems. In other words, according to an embodiment of the present disclosure, the controller 130 may check whether each channel (or way) for the memory dies of the memory device 150, or for the memory systems, is in a busy state, a ready state, an active state, an idle state, a normal state, or an abnormal state. For example, the controller 130 may determine a channel (or way) that is in a ready state or an idle state while in a normal state as the optimal channel (or way). Specifically, according to an embodiment of the present disclosure, among the channels (or ways), a channel (or way) whose available capacity is in a normal range and whose operation level is in a normal range may be determined as the optimal channel. Herein, the operation level of a channel (or way) may be determined based on its operation clock, power level, current/voltage level, operation timing, temperature level, and the like.
In addition, according to an embodiment of the present disclosure, the master memory system among a plurality of memory systems may be determined based on information about each memory system (e.g., the command-operation capability of each memory system, that is, the command-operation capability of the controller 130 and the memory device 150 included in each memory system). The capability may include the performance capability, processing power, processing speed, and processing latency of command operations. Herein, the master memory system may be determined through contention among the memory systems. For example, the master memory system may be determined by contention based on the coupling order between the host 102 and the memory systems.

To store or read data requested by the host 102 in the memory device 150, the memory system 110 may map a file system used by the host 102 to storage space of the memory device 150. For example, an address corresponding to data according to a file system used by the host 102 may be referred to as a "logical address" or a "logical block address", and an address corresponding to data in a storage space of the memory device 150 may be referred to as a "physical address" or a "physical block address".

When the host 102 transmits a logical address to the memory system 110 together with a read command, the memory system 110 may search for the physical address corresponding to the logical address and then output the data stored in the memory space corresponding to the found physical address. During this operation, address mapping is performed while the memory system 110 searches for the physical address corresponding to the logical address transmitted from the host 102.

When the host 102 knows the mapped data (hereinafter referred to as "mapping information") in advance, the time required for the memory system 110 to output data corresponding to a read command transmitted by the host 102 can be reduced.

Fig. 3 and 4 illustrate one example in which a host stores metadata in host memory according to one embodiment of the present disclosure. Referring to fig. 3 and 4, an example will be described in which the host 102 stores metadata in the host memory 106.

Referring to fig. 3, the host 102 may include a host processor 104, a host memory 106, and a host controller interface 108. Memory system 110 may include a controller 130 and a memory device 150. Controller 130 may include a host interface 132, a logic block 160, a memory interface 142, and a memory 144. The controller 130 and the memory device 150 described with reference to fig. 3 may correspond to the controller 130 and the memory device 150 described with reference to fig. 1 and 2.

Hereinafter, a description will be provided based on a technical difference between the controller 130 and the memory device 150 shown in fig. 3 and the controller 130 and the memory device 150 described with reference to fig. 1 and 2. In particular, the logic block 160 in the controller 130 of fig. 3 may correspond to the Flash Translation Layer (FTL) unit 40 described above with reference to fig. 2. However, according to some embodiments, the logic block 160 in the controller 130 may further perform roles and functions that are not performed in the Flash Translation Layer (FTL) unit 40.

In FIG. 3, host processor 104 may have higher performance and host memory 106 may have greater capacity than memory system 110. Unlike memory system 110, host processor 104 and host memory 106 may have advantages in that: they have fewer space constraints and the hardware of host processor 104 and host memory 106 can be upgraded. Thus, the memory system 110 may utilize the resources of the host 102 to improve operating efficiency.

As the amount of data that the memory system 110 may store increases, the amount of metadata corresponding to the data stored in the memory system 110 may also increase. Since the memory 144 in the memory system 110 into which the controller 130 can load the metadata is limited in space, an increase in the amount of metadata may be a burden on the operation of the controller 130. For example, due to space constraints in memory 144, controller 130 may load some, but not all, of the metadata. When the metadata accessed by the host 102 is not included in the partially loaded metadata and some of the loaded metadata is updated, the controller 130 may have to read the metadata accessed by the host 102 from the memory device 150 and store the updated loaded metadata in the memory device 150. These operations may be necessary for the controller 130 to perform read or write operations required by the host 102 and may degrade the operational performance of the memory system 110.

The storage space of the host memory 106 included in the host 102 may be tens to thousands of times larger than the storage space of the memory 144 that may be used by the controller 130. Thus, the memory system 110 may transmit the metadata 166 used by the controller 130 to the host memory 106, so that the host memory 106 may be used as a cache memory for the address translation process performed by the memory system 110. In this case, the host 102 does not simply transmit a logical address to the memory system 110 along with a command; instead, the host 102 may translate the logical address into a physical address based on the metadata 166 stored in the host memory 106 and then transmit the physical address to the memory system 110 along with the command. Thus, in this case, the memory system 110 may omit the mapping process for translating the logical address into the physical address, and may access the memory device 150 based on the physical address transmitted from the host 102. As a result, the operational burden that occurs when the controller 130 uses the memory 144 can be reduced, and the operational efficiency of the memory system 110 can be significantly improved.

At the same time, even though the memory system 110 transmits the metadata 166 to the host 102, the memory system 110 may still manage (e.g., update, erase, generate, etc.) the source on which the metadata 166 is based. Since the controller 130 in the memory system 110 may perform background operations, such as a garbage collection operation and a wear leveling operation, according to the operation state of the memory device 150, and may determine the physical location (physical address) of data in the memory device 150, the physical address of data in the memory device 150 may be changed under the control of the controller 130. Thus, the memory system 110 may be responsible for managing the source on which the metadata 166 is based.

In other words, when it is determined that the memory system 110 needs to correct or update the metadata 166 transmitted to the host 102 in the process of managing the metadata 166, the memory system 110 may request the host 102 to update the metadata 166. In response to a request by the memory system 110, the host 102 may update the metadata 166 stored in the host memory 106. In this manner, the metadata 166 stored in the host memory 106 may be kept up-to-date. Therefore, even if the host controller interface 108 performs address mapping using the metadata 166 stored in the host memory 106, no operational problem is caused.
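
The update flow described above can be sketched as follows; the class and method names are hypothetical stand-ins for the copy of metadata 166 held in the host memory 106 and for the memory system 110, which remains the owner of the mapping information:

```python
class HostMapCopy:
    """Stands in for metadata 166 cached in host memory 106 (hypothetical)."""
    def __init__(self, entries):
        self.entries = dict(entries)   # lba -> pba

    def apply_update(self, lba, new_pba):
        """Host refreshes its cached entry at the memory system's request."""
        self.entries[lba] = new_pba

class MemorySystem:
    """Stands in for memory system 110, which owns the mapping table."""
    def __init__(self, host_copy):
        self.map_table = dict(host_copy.entries)  # authoritative copy
        self.host_copy = host_copy

    def relocate(self, lba, new_pba):
        """e.g. garbage collection or wear leveling moved the data; update the
        authoritative table, then request the host to update its copy."""
        self.map_table[lba] = new_pba
        self.host_copy.apply_update(lba, new_pba)

host = HostMapCopy({5: 0x10})
ms = MemorySystem(host)
ms.relocate(5, 0x99)   # host copy is kept up to date with the device copy
```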

Meanwhile, the metadata 166 stored in the host memory 106 may include mapping information for detecting a physical address corresponding to a logical address. Referring to fig. 2, the metadata that matches logical addresses and physical addresses with each other may include: mapping information for detecting a physical address corresponding to a logical address, and mapping information for detecting a logical address corresponding to a physical address. The mapping information for detecting a logical address corresponding to a physical address is mainly used for internal operations of the memory system 110, and thus may not be used when the host 102 stores data in the memory system 110 or reads data corresponding to a specific logical address from the memory system 110.

While managing (creating, erasing, updating, etc.) the mapping information, the controller 130 in the memory system 110 may store the mapping information in the memory device 150. Since the host memory 106 is a volatile memory, the metadata 166 stored in the host memory 106 may be lost when a power interruption occurs in the host 102 and the memory system 110. Thus, the controller 130 in the memory system 110 may not only keep the metadata 166 stored in the host memory 106 up to date, but also store the up to date mapping information in the memory device 150.

Referring to fig. 3 and 4, when the metadata 166 is stored in the host memory 106, the operation of the host 102 for reading data from the memory system 110 will be described.

Power may be supplied to the host 102 and the memory system 110, and the host 102 and the memory system 110 may interlock. When the host 102 and the memory system 110 are interlocked, the metadata L2P MAP stored in the memory device 150 may be transferred to the host memory 106.

When host processor 104 generates a read command, the read command may be transmitted to host controller interface 108. After receiving the read command, the host controller interface 108 may transmit a logical address corresponding to the read command to the host memory 106. Based on the metadata L2P MAP stored in host memory 106, host controller interface 108 may detect a physical address corresponding to a logical address.

The host controller interface 108 may transmit the Read command Read CMD to the controller 130 in the memory system 110 along with the physical address. The controller 130 may access the memory device 150 based on the received Read command Read CMD and the physical address. Data stored in the memory device 150 at a location corresponding to the physical address may be transmitted to the host 102.

The controller 130 according to an embodiment of the present disclosure may omit a process of receiving a logical address from the host 102 and searching for a physical address corresponding to the logical address. In particular, an operation of reading metadata by accessing the memory device 150 may be omitted in the process of the controller 130 searching for a physical address. In this way, the process of the host 102 reading data stored in the memory system 110 can be made faster.

FIG. 5 illustrates a first example of a transaction between a host 102 and a memory system 110 in a data processing system according to one embodiment of this disclosure.

Referring to fig. 5, the host 102 storing the MAP information MAP INFO may transmit a read command to the memory system 110, the read command including a logical address LBA and a physical address PBA. When there is information in the host memory 106 about the physical address PBA corresponding to the logical address LBA, the host 102 may transmit a read command including the logical address LBA and the physical address PBA to the memory system 110. However, when there is no information about the physical address PBA corresponding to the logical address LBA in the host memory 106, the host 102 may transmit a read command including only the logical address LBA to the memory system 110.

Although FIG. 5 depicts a read command as an example, the concepts and spirit of the present invention may be applied to a write command or an erase command that may be transmitted by the host 102 to the memory system 110.

FIG. 6 is a flowchart describing a first operation of the host 102 and the memory system 110 according to an embodiment of the present disclosure. In particular, FIG. 6 describes the operations in which the host 102 transmits a COMMAND including a logical address LBA and a physical address PBA, and the memory system 110 receives the COMMAND.

At S612, at the request of the user, the host 102 may generate a COMMAND including the logical address LBA.

At S614, the host 102 may check whether a physical address PBA corresponding to the logical address LBA exists in the mapping information stored in the host memory 106.

When the physical address PBA corresponding to the logical address LBA does not exist in the mapping information (no at S614), then at S618 the host 102 may issue a COMMAND including only the logical address LBA to the memory system 110.

In contrast, when a physical address PBA corresponding to the logical address LBA exists in the mapping information (yes at S614), the host 102 may add the physical address PBA to the COMMAND including the logical address LBA at S616.

Then, at S618, the host 102 may provide the COMMAND including the logical address LBA and the physical address PBA to the memory system 110.
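
The host-side flow of steps S612 to S618 can be sketched as follows; the command is modeled as a plain dictionary, which is an assumption for illustration:

```python
def build_command(lba, host_map_info):
    """Build a command: attach the PBA only when the host's mapping
    information contains an entry for the logical address."""
    command = {"lba": lba}            # S612: generate command with the LBA
    pba = host_map_info.get(lba)      # S614: check the host-side mapping info
    if pba is not None:
        command["pba"] = pba          # S616: add the physical address
    return command                    # S618: issue to the memory system

host_map_info = {100: 0x2000}
cmd_with_pba = build_command(100, host_map_info)  # mapping entry exists
cmd_lba_only = build_command(200, host_map_info)  # no entry: LBA only
```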

At S622, the memory system 110 may receive the COMMAND provided from the host 102.

At S624, the memory system 110 may check whether the received COMMAND includes a physical address PBA.

When the received COMMAND does not include the physical address PBA ("no" at S624), then at S632 the memory system 110 may search the memory 144 or the memory device 150 for the physical address PBA corresponding to the logical address LBA included in the received COMMAND. The physical address search operation of the memory system 110 may be described in detail with reference to fig. 7.

Conversely, when the received COMMAND includes the physical address PBA ("yes" at S624), then at S626 the memory system 110 may check whether the physical address PBA is valid.

The memory system 110 may transmit the mapping information to the host 102, and based on the mapping information transmitted by the memory system 110, the host 102 may include the physical address PBA in the COMMAND. However, after the memory system 110 transmits the mapping information to the host 102, the mapping information managed by the memory system 110 may be changed and updated. When the mapping information is dirty, the physical address PBA transmitted to the memory system 110 by the host 102 cannot be used as it is. Thus, the memory system 110 may need to determine whether the physical address PBA included in the received COMMAND is valid. For example, the memory system 110 may separately manage the dirty mapping information. As another example, the memory system 110 may compare the physical address PBA provided from the host 102 with the physical address stored in the memory system 110 to determine the validity of the physical address PBA. However, these are merely examples, and the concept and spirit of the present invention are not limited thereto.

When the physical address PBA included in the received COMMAND is valid (yes at S626), the memory system 110 may perform an operation corresponding to the COMMAND by using the physical address PBA at S630.

In contrast, when the physical address PBA included in the received COMMAND is invalid (no at S626), the memory system 110 may discard the physical address PBA included in the received COMMAND at S628.

At S632, the memory system 110 may search for a physical address corresponding to the logical address LBA included in the received COMMAND.

FIG. 7 is a flow chart describing the operation of a memory system according to one embodiment of the present disclosure. In particular, fig. 7 illustrates operation S632 shown in fig. 6.

Referring back to fig. 6, when the COMMAND provided from the host 102 does not include the physical address PBA, or when the COMMAND includes the physical address PBA but the physical address PBA is invalid, the memory system 110 may search for a physical address corresponding to the logical address LBA included in the COMMAND at S632.

First, at S701, the memory system 110 may determine whether a physical address is hit in the mapping cache 146 of the memory 144 shown in fig. 1. In other words, the memory system 110 may check whether information about the physical address is stored in the mapping cache 146.

When the physical address is missed in the mapping cache 146 ("no" at S701), the memory system 110 may search for the physical address in the memory device 150 at S703. Specifically, the memory system 110 may search the memory device 150 for a physical address corresponding to the logical address LBA included in the COMMAND provided from the host 102. Thereafter, the memory system 110 may store the found physical address in the mapping cache 146.

Then, when the COMMAND is a read COMMAND, the memory system 110 may read data based on the physical address found in the memory device 150 at S705.

On the other hand, when the physical address hits in the mapping cache 146 (yes at S701), the memory system 110 may read data based on the physical address stored in the mapping cache 146 at S705.

When the physical address is stored in the mapping cache 146, the memory system 110 may skip searching the memory device 150 for the physical address. As a result, the memory system 110 may be able to efficiently perform read operations according to read commands. When the mapping cache 146 stores a large amount of mapping information, the memory system 110 may efficiently perform read operations. However, the mapping cache 146 may have a limited storage capacity. Thus, the memory system 110 may have to selectively store mapping information in the mapping cache 146 to efficiently perform read operations. For example, the memory system 110 may store frequently used mapping information in the mapping cache 146.
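The cache-first search of operations S701 to S705 may be sketched as below, with plain dictionaries standing in for the mapping cache 146 and the memory device 150; both stand-ins, and the function name, are assumptions for illustration.

```python
def lookup_pba(lba, map_cache, memory_device):
    """S701: check the mapping cache first; S703: on a miss, search the
    (much slower) memory device and cache the result for later commands."""
    if lba in map_cache:                 # S701 hit: skip the device search
        return map_cache[lba]
    pba = memory_device[lba]             # S703: slow search in the device
    map_cache[lba] = pba                 # keep it for subsequent accesses
    return pba                           # S705: read data at this PBA
```

A second lookup of the same logical address then hits the cache and avoids the device search entirely, which is the efficiency gain the text describes.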

The memory system 110 according to an embodiment of the present disclosure may include a map cache 146 having a structure capable of selectively storing map information.

FIG. 8A illustrates the structure of the map cache 146 shown in FIG. 1 according to one embodiment of the present disclosure.

Map cache 146 may include a write map cache 830 and a read map cache 850. Map cache 146 may store mapping information in write map cache 830 and read map cache 850. Fig. 8A illustrates write map cache 830 and read map cache 850 filled with mapping information. Mapping cache 146 may selectively store mapping information in both mapping caches 830 and 850 according to the order in which the mapping information is input thereto.

Write map cache 830 and read map cache 850 may have different sizes. The size of each of write map cache 830 and read map cache 850 may refer to the space in which the mapping information is stored. The larger the mapping cache, the more mapping information can be stored. Fig. 8A exemplarily illustrates a write map cache 830 having a first size and a read map cache 850 having a second size.

Mapping information of different nature may be stored in write map cache 830 and read map cache 850, respectively. For example, write mapping cache 830 may store mapping information corresponding to a write Command (CMD), while read mapping cache 850 may store mapping information corresponding to a read Command (CMD).

Write map cache 830 and read map cache 850 may store mapping information according to an LRU (least recently used) scheme. FIG. 8A illustrates one example of implementing write map cache 830 and read map cache 850 based on the LRU scheme. The MRU (most recently used) end MRU_END of each of write map cache 830 and read map cache 850 may indicate the location of the most recently accessed mapping information. The LRU end LRU_END of each of write map cache 830 and read map cache 850 may indicate the location of the mapping information that was accessed the longest time ago.

When new mapping information that is not cached in the mapping cache 146 is accessed according to a write command or a read command and is thus newly stored in a first cache (which is the write map cache 830 or the read map cache 850), the new mapping information may be stored at the MRU end MRU_END of the first cache. When the space of the first cache is insufficient to store the new mapping information, the mapping information located at the LRU end LRU_END of the first cache may be output to the outside.

When certain mapping information stored in the first cache is accessed again according to a homogeneous command, the certain mapping information may be moved to the MRU end MRU_END of the first cache.

When certain mapping information stored in the first cache is accessed again according to a heterogeneous command, the certain mapping information may be moved to the MRU end MRU_END of the second cache corresponding to the heterogeneous command. When the space of the second cache is insufficient to store the certain mapping information, the mapping information located at the LRU end LRU_END of the second cache may be output to the outside. When certain mapping information stored in the first cache is moved to the MRU end MRU_END of the second cache, the amount of mapping information cached in the first cache may be reduced.
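The behavior described above, namely MRU/LRU ends, eviction on overflow, and movement between the two caches on homogeneous and heterogeneous accesses, may be sketched as a small Python class. The class name, method names, and `OrderedDict`-based layout are illustrative assumptions, not the disclosed implementation.

```python
from collections import OrderedDict

class DualMapCache:
    """Sketch of the two-part mapping cache of FIG. 8A: a write map cache
    and a read map cache, each ordered from the LRU end to the MRU end."""

    def __init__(self, write_size, read_size):
        self.caches = {"write": OrderedDict(), "read": OrderedDict()}
        self.sizes = {"write": write_size, "read": read_size}

    def access(self, lba, pba, kind):
        """Cache mapping (lba -> pba) for a 'read' or 'write' command.
        Returns (evicted_from, lba, pba) when an LRU-end entry is pushed
        out, else None."""
        other = "read" if kind == "write" else "write"
        # heterogeneous re-access: remove the entry from the other cache
        self.caches[other].pop(lba, None)
        cache = self.caches[kind]
        if lba in cache:                     # homogeneous re-access
            cache.move_to_end(lba)           # move to the MRU end
            cache[lba] = pba
            return None
        evicted = None
        if len(cache) >= self.sizes[kind]:   # cache full: evict LRU end
            old_lba, old_pba = cache.popitem(last=False)
            evicted = (kind, old_lba, old_pba)
        cache[lba] = pba                     # store at the MRU end
        return evicted
```

With a write cache of size 3 holding M1 to M3 and a read cache of size 5 holding M4 to M8, this sketch reproduces the scenarios of FIGS. 9 through 14B: a new read mapping evicts M4, and a write access to read-cached M6 moves it across and evicts M1.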

Referring to fig. 8A, the first to third mapping information M1 to M3 may be stored in the write map cache 830 in the order in which they are accessed. When new mapping information corresponding to a write command is input to the mapping cache 146 (MAP IN), the first mapping information M1 stored at the LRU end LRU_END of the write map cache 830 may be output to the outside of the mapping cache 146 (MAP OUT). The new mapping information may then be stored in the write map cache 830.

The read map cache 850 may store the fourth to eighth mapping information M4 to M8 in the order in which they are accessed. Based on the same principle, when new mapping information corresponding to a read command is input to the mapping cache 146 (MAP IN), the fourth mapping information M4 stored at the LRU end LRU_END of the read map cache 850 may be output to the outside of the mapping cache 146 (MAP OUT). The new mapping information may then be stored in the read map cache 850.

Meanwhile, mapping information that is located at the LRU end LRU_END of the write map cache 830 or the read map cache 850 and is then output may be accessed less often than the mapping information remaining in the mapping cache 146. However, the outputted mapping information was recently stored in the mapping cache 146, and thus may still be accessed more frequently than mapping information that has never been stored in the mapping cache 146.

When the mapping information is stored in the mapping cache 146, the host 102 may access the memory device 150 at the fastest speed. When the mapping information is stored in host memory 106 of host 102, host 102 may access memory device 150 at the next fastest speed. When the mapping information is stored in the memory device 150, the host 102 may access the memory device 150 at the slowest speed.

The outputted mapping information may be either mapping information output from the write map cache 830 or mapping information output from the read map cache 850.

Mapping information output from the write map cache 830 is more likely to be accessed again for a write operation than mapping information not recently stored in the mapping cache 146. However, such mapping information may be changed whenever a write operation is performed. Therefore, if the memory system 110 transmitted the mapping information output from the write map cache 830 to the host 102, the host 102 would have to update its copy whenever the mapping information is updated in the memory system 110 by a write operation. This would place a large burden on the host 102. Thus, the memory system 110 may not provide the mapping information output from the write map cache 830 to the host 102.

On the other hand, mapping information output from the read map cache 850 is more likely to be accessed again for a read operation than mapping information not recently stored in the mapping cache 146. The mapping information may not be changed by a read operation. Therefore, even if the memory system 110 transmits the mapping information output from the read map cache 850 to the host 102, it is highly likely that the host 102 will not need to update it. Thus, the memory system 110 may provide the mapping information output from the read map cache 850 to the host 102. The host 102 may then perform read operations faster by providing a read command together with a physical address based on the mapping information.
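The asymmetric treatment of evicted entries, exporting read-cache evictions to the host 102 while deleting write-cache evictions, can be sketched as a small policy function. The function name, the tuple layout, and the callback are assumptions for illustration.

```python
def handle_eviction(evicted, send_to_host):
    """Apply the policy above to an entry pushed out of the mapping cache.

    evicted: None, or a (source_cache, lba, pba) tuple.
    send_to_host: callback that delivers a mapping entry to the host,
    which may store it in host memory (e.g., the host memory 106).
    """
    if evicted is None:
        return "none"
    source, lba, pba = evicted
    if source == "read":
        send_to_host(lba, pba)   # read evictions stay useful for reads
        return "sent"
    return "discarded"           # write evictions change too often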

FIG. 8B is a flowchart describing operations for processing mapping information using a mapping cache according to one embodiment of the present disclosure. Fig. 8B shows only a process of storing the mapping information in the mapping cache.

At S801, the memory system 110 may receive a read command or a write command from the host 102. The host 102 may selectively provide mapping information corresponding to a read command or a write command to the memory system 110 along with the read command or the write command.

At S803, the memory system 110 may input mapping information corresponding to a read command or a write command. In particular, mapping cache 146 of memory 144 may receive mapping information under the control of processor 134. Hereinafter, the inputted mapping information may be referred to as "target mapping information" for convenience of explanation.

At S805, it is determined whether the host 102 provides a read command or a write command to the memory system 110. When the host 102 provides a read command to the memory system 110 (no at S805), the memory system 110 may check whether there is an empty space for storing target mapping information in the read mapping cache 850 at S807.

When there is no empty space in the read map cache 850 ("no" at S807), at S809, the map cache 146 may output old mapping information, which is the oldest among the mapping information stored in the read map cache 850, to the outside. The outputted old mapping information may be provided to the host 102, and the processing of the outputted old mapping information will be described in detail later with reference to fig. 15 to 18.

Then, at S811, the mapping cache 146 may store the target mapping information in the read mapping cache 850. Map cache 146 may store the target mapping information in read map cache 850 according to an LRU scheme.

On the other hand, when there is an empty space in the read map cache 850 (yes at S807), at S811 the map cache 146 may store the target map information in the read map cache 850 without outputting the old map information. Map cache 146 may store the target mapping information in read map cache 850 according to an LRU scheme.

When the host 102 provides a write command to the memory system 110 ("yes" at S805), the memory system 110 may check whether there is an empty space for storing target mapping information in the write mapping cache 830 at S813.

When there is no empty space in the write map cache 830 ("no" at S813), at S815, the map cache 146 may output old mapping information, which is the oldest among the mapping information stored in the write map cache 830, to the outside. The outputted old mapping information may be deleted.

Then, at S817, map cache 146 may store the target map information in write map cache 830. Map cache 146 may store the target mapping information in write map cache 830 according to an LRU scheme.

On the other hand, when there is an empty space in the write map cache 830 (yes at S813), at S817, the map cache 146 may store the target map information in the write map cache 830 without outputting the old map information. Map cache 146 may store the target mapping information in write map cache 830 according to an LRU scheme.

Hereinafter, a process of inputting and outputting mapping information using the structure of the map cache 146 shown in fig. 8A will be described with reference to figs. 9 to 14B. As shown in fig. 8A, it is assumed that the first to third mapping information M1 to M3 corresponding to write commands are stored in the write map cache 830 and the fourth to eighth mapping information M4 to M8 corresponding to read commands are stored in the read map cache 850. It is also assumed that the map cache 146 stores the most recently accessed mapping information. However, this is merely an example, and the concept and spirit of the present invention are not limited thereto.

Fig. 9 illustrates a mapping information processing operation according to one embodiment of the present disclosure. Fig. 9 illustrates operations of the memory system 110 for storing new mapping information (e.g., ninth mapping information M9 corresponding to a read command) in the mapping cache 146.

Referring to fig. 9, the ninth mapping information M9 may be input to the mapping cache 146. The mapping cache 146 may have to store the ninth mapping information M9 in the read map cache 850. However, the read map cache 850 may be full of the fourth to eighth mapping information M4 to M8. Accordingly, the map cache 146 may output the fourth mapping information M4, which is the oldest information stored in the read map cache 850, to the outside of the map cache 146. The mapping cache 146 may then store the ninth mapping information M9 in the read map cache 850. According to the LRU scheme, the ninth mapping information M9 may be stored at the MRU end MRU_END as the most recently accessed mapping information. When the fourth mapping information M4 is output, the fifth to eighth mapping information M5 to M8 may be shifted toward the LRU end LRU_END, and thus the fifth mapping information M5, which becomes the oldest of the fifth to ninth mapping information M5 to M9, may be stored at the LRU end LRU_END.

The memory system 110 may provide the fourth mapping information M4 output from the mapping cache 146 to the host 102. Then, the host 102 may store the fourth mapping information M4 in the host memory 106. Alternatively, when the fourth mapping information M4 is already stored in the host memory 106, the host 102 may update the mapping information stored in the host memory 106 based on the fourth mapping information M4 received from the memory system 110. A detailed description thereof will be provided with reference to fig. 15 to 18.

Fig. 10 illustrates a mapping information processing operation according to one embodiment of the present disclosure. Fig. 10 illustrates an operation of the memory system 110 for storing new mapping information (e.g., ninth mapping information M9 corresponding to a write command) in the mapping cache 146.

Referring to fig. 10, the ninth mapping information M9 may be input to the mapping cache 146. The mapping cache 146 may have to store the ninth mapping information M9 in the write map cache 830. However, the write map cache 830 may be full of the first to third mapping information M1 to M3. Therefore, the mapping cache 146 may output the first mapping information M1, which is the oldest information stored in the write map cache 830, to the outside of the mapping cache 146. Thereafter, the second and third mapping information M2 and M3 may be shifted toward the LRU end LRU_END, and then the mapping cache 146 may store the ninth mapping information M9 in the write map cache 830.

The memory system 110 may delete the first mapping information M1 output from the mapping cache 146.

Fig. 11A and 11B illustrate a mapping information processing procedure according to one embodiment of the present disclosure. Fig. 11A and 11B illustrate movement paths of the seventh mapping information M7 when the memory system 110 accesses the seventh mapping information M7 stored in the read mapping cache 850 in response to a read command.

The seventh mapping information M7 may be stored in the read map cache 850 according to the LRU scheme as the most recently accessed mapping information. Referring to fig. 11B, the mapping cache 146 may move the seventh mapping information M7 stored in the read map cache 850 to the MRU end MRU_END.

Fig. 11B illustrates a state in which the seventh mapping information M7 has been moved to the MRU end MRU_END. The eighth mapping information M8 previously stored at the MRU end MRU_END becomes the second most recently accessed mapping data. Therefore, the storage location of the eighth mapping information M8 may be changed. As a result, the storage location of the seventh mapping information M7 may be changed to the previous storage location of the eighth mapping information M8 in the read map cache 850.

Fig. 12A and 12B illustrate a mapping information processing procedure according to one embodiment of the present disclosure. Fig. 12A and 12B illustrate movement paths of the second mapping information M2 when the memory system 110 accesses the second mapping information M2 stored in the write map cache 830 in response to a write command.

According to the LRU scheme, the second mapping information M2 may be stored in the write map cache 830 as the most recently accessed mapping information. Referring to fig. 12B, the mapping cache 146 may move the second mapping information M2 stored in the write map cache 830 to the MRU end MRU_END. Fig. 12B illustrates a state in which the second mapping information M2 has been moved to the MRU end MRU_END. The third mapping information M3 previously stored at the MRU end MRU_END becomes the second most recently accessed mapping data. Therefore, the storage location of the third mapping information M3 may be changed. As a result, the storage location of the second mapping information M2 may be changed to the previous storage location of the third mapping information M3 in the write map cache 830.

Fig. 13A and 13B illustrate a mapping information processing procedure according to one embodiment of the present disclosure. Fig. 13A and 13B illustrate movement paths of the second mapping information M2 when the memory system 110 accesses the second mapping information M2 stored in the write map cache 830 in response to a read command.

According to the LRU scheme, the second mapping information M2 may be stored in the read map cache 850 as the most recently accessed mapping information. Referring to fig. 13B, the mapping cache 146 may remove the second mapping information M2 from the write map cache 830 and store the second mapping information M2 at the MRU end MRU_END of the read map cache 850.

Fig. 13B illustrates a state in which the second mapping information M2 has been moved to the MRU end MRU_END of the read map cache 850. In fig. 13B, the read map cache 850 having the second size may be full of mapping information. Accordingly, the map cache 146 may output the fourth mapping information M4 stored at the LRU end LRU_END of the read map cache 850 to the outside of the map cache 146. Thereafter, the mapping cache 146 may store the second mapping information M2 at the MRU end MRU_END of the read map cache 850. Further, by removing the second mapping information M2 from the write map cache 830, an empty space may be generated in the write map cache 830.

The memory system 110 may provide the fourth mapping information M4 output from the mapping cache 146 to the host 102. Then, the host 102 may store the fourth mapping information M4 in the host memory 106. Alternatively, when the fourth mapping information M4 is already stored in the host memory 106, the host 102 may update the mapping information stored in the host memory 106 based on the fourth mapping information M4 received from the memory system 110. A detailed description thereof will be provided later with reference to fig. 15 to 18.

Fig. 14A and 14B illustrate a mapping information processing procedure according to one embodiment of the present disclosure. Fig. 14A and 14B illustrate movement paths of the sixth mapping information M6 when the memory system 110 accesses the sixth mapping information M6 stored in the read mapping cache 850 in response to a write command.

According to the LRU scheme, the sixth mapping information M6 may be stored in the write map cache 830 as the most recently accessed mapping information. Referring to fig. 14B, the mapping cache 146 may remove the sixth mapping information M6 from the read map cache 850 and store the sixth mapping information M6 at the MRU end MRU_END of the write map cache 830.

Fig. 14B illustrates a state in which the sixth mapping information M6 has been moved to the MRU end MRU_END of the write map cache 830. In fig. 14B, the write map cache 830 having the first size may be full of mapping information. Accordingly, the mapping cache 146 may output the first mapping information M1 stored at the LRU end LRU_END of the write map cache 830 to the outside of the mapping cache 146. The mapping cache 146 may then store the sixth mapping information M6 at the MRU end MRU_END of the write map cache 830. Further, by removing the sixth mapping information M6 from the read map cache 850, an empty space may be generated in the read map cache 850.

The memory system 110 may delete the first mapping information M1 output from the mapping cache 146.

Fig. 9-14B illustrate a mapping cache 146 that stores newly accessed mapping information. However, the embodiments are not limited thereto. In another embodiment, mapping cache 146 may store the most frequently accessed mapping information according to a Least Frequently Used (LFU) scheme.

FIG. 15 illustrates a second example of a transaction between a host 102 and a memory system 110 in a data processing system according to one embodiment of the present disclosure.

Referring to fig. 15, the memory system 110 may transmit mapping information MAP INFO to the host 102. The memory system 110 may transmit the mapping information MAP INFO based on a RESPONSE to the COMMAND of the host 102. In particular, as described above with reference to fig. 9-13B, memory system 110 may provide mapping information output from read mapping cache 850 to host 102.

The response for transmitting the mapping information may not be particularly limited. For example, the memory system 110 may transmit the mapping information to the host 102 by using a response corresponding to a read command, a response corresponding to a write command, or a response corresponding to an erase command.

The memory system 110 and the host 102 may exchange commands and responses based on a unit format set according to a predetermined protocol. For example, the format of the response may include a basic header, a result indicating the success or failure of the command transmitted by the host 102, and additional information indicating the status of the memory system 110. The memory system 110 may include the mapping information in the response and transmit the response including the mapping information to the host 102.
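A hypothetical response unit carrying piggybacked mapping information might be assembled as below. The field names and dictionary layout are assumptions for illustration; the disclosure does not fix a concrete wire format.

```python
def build_response(command_result, map_info=None, notify=False):
    """Assemble a response unit: a basic header, the command result
    (success or failure), and optional piggybacked fields. The memory
    system may append mapping information (as in FIG. 15) or a
    notification flag announcing a later transfer (as in FIG. 18)."""
    response = {"header": "BASE", "result": command_result}
    if notify:
        response["notice"] = True        # "mapping info will follow"
    if map_info is not None:
        response["map_info"] = map_info  # list of (LBA, PBA) entries
    return response
```

In this sketch a plain read response carries neither optional field, so the host can distinguish the three response variants by the keys present.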

FIG. 16 illustrates a second operation of the host 102 and the memory system 110 according to one embodiment of the disclosure. Specifically, fig. 16 illustrates the following process: where the host 102 requests the mapping information from the memory system 110 and the memory system 110 transmits the mapping information in response to the request from the host 102.

Referring to fig. 16, a need for mapping information may occur in the host 102. The need for mapping information may occur, for example, when the host 102 can allocate space to store the mapping information, or when the host 102 wants data to be quickly input to or output from the memory system 110 in response to a command. Also, the need for mapping information may occur in the host 102 at the request of a user.

The host 102 may request the mapping information from the memory system 110, and the memory system 110 may prepare the mapping information in response to the request from the host 102. According to an embodiment of the present disclosure, the host 102 may specifically request the mapping information it needs from the memory system 110. Meanwhile, according to another embodiment of the present disclosure, the host 102 may request mapping information from the memory system 110, while which mapping information is provided may be determined by the memory system 110.

The memory system 110 may transmit the prepared mapping information to the host 102. The host 102 may store the mapping information transferred from the memory system 110 in an internal storage space (e.g., the host memory 106 described in fig. 3).

Using the stored mapping information, the host 102 may include the physical address PBA in the command and transfer the command including the physical address PBA to the memory system 110. The memory system 110 may perform the corresponding operation based on the physical address PBA included in the command.

FIG. 17 illustrates a third operation of the host 102 and the memory system 110 according to one embodiment of the disclosure. Specifically, fig. 17 illustrates the following process: where the memory system 110 requests the host 102 to transmit mapping information and the host 102 receives the mapping information in response to the memory system 110 request.

Referring to fig. 17, the memory system 110 may notify the host 102 that it has mapping information to transfer. In response to the notification transmitted from the memory system 110, the host 102 may determine whether the mapping information can be stored in the host 102. When the host 102 is able to receive the mapping information transmitted from the memory system 110, the host 102 may allow the memory system 110 to upload the mapping information to the host 102. The memory system 110 may then prepare the mapping information and transmit the mapping information to the host 102.

Subsequently, the host 102 may store the received mapping information in an internal storage space (e.g., the host memory 106 described in fig. 3). The host 102 may perform mapping operations based on the stored mapping information and include the physical address PBA in the command to be transmitted to the memory system 110.

The memory system 110 may check whether the command transferred from the host 102 includes the physical address PBA, and when the command transferred from the host 102 includes the physical address PBA, the memory system 110 may perform an operation corresponding to the command based on the physical address PBA.

With respect to the transmission of the mapping information, the operation of fig. 16 may differ from the operation of fig. 17 in that the second operation of the host 102 and the memory system 110 described above with reference to fig. 16 is initiated by the host 102, while the third operation of the host 102 and the memory system 110 described above with reference to fig. 17 is initiated by the memory system 110. According to an embodiment of the present disclosure, the memory system 110 and the host 102 may selectively use the methods of transferring the mapping information described with reference to figs. 16 and 17 according to the operating environment.

FIG. 18 illustrates a fourth operation of the host 102 and the memory system 110 according to one embodiment of the disclosure. Specifically, fig. 18 illustrates the following case: wherein the memory system 110 transmits the mapping information to the host 102 when the host 102 and the memory system 110 are interlocked.

At S1862, the memory system 110 may complete operations corresponding to the COMMAND transmitted from the host 102.

After the operation corresponding to the COMMAND is completed, the memory system 110 may check whether there is mapping information to be transmitted to the host 102 before transmitting a RESPONSE corresponding to the COMMAND to the host 102 at S1864.

When there is no mapping information to be transmitted to the host 102 ("no" at S1864), the memory system 110 may transmit a RESPONSE including information on whether an operation corresponding to the COMMAND transmitted from the host 102 is completed (success or failure) at S1866.

Meanwhile, when the memory system 110 has mapping information to be transferred to the host 102 ("yes" at S1864), at S1868, the memory system 110 may check whether a notification to transfer the mapping information is made. In this context, the notification may be similar to the notification described above with reference to fig. 17.

When the memory system 110 attempts to transmit the mapping information but the memory system 110 does not notify the host 102 of the transmission of the mapping information in advance (no at S1868), the memory system 110 may add a notification to the RESPONSE at S1870 and transmit the RESPONSE to the host 102.

In contrast, when a notification to transmit the mapping information has been made ("yes" at S1868), the memory system 110 may add the mapping information to the RESPONSE at S1872.

Subsequently, at S1874, the memory system 110 may transmit a RESPONSE including the mapping information to the host 102.

At S1842, the host 102 may receive at least one of a RESPONSE transmitted from the memory system 110, a RESPONSE WITH NOTICE including the notification, and a RESPONSE WITH MAP INFO including the mapping information.

At S1844, the host 102 may check whether the received response includes a notification.

When the received response includes a notification ("yes" at S1844), the host 102 may prepare to receive and store mapping information that may be transmitted later at S1846.

Subsequently, at S1852, the host 102 may check a response corresponding to the previous command. For example, the host 102 may check the response to see if the results of the previous command succeeded or failed.

In contrast, when the received response does not include a notification (no at S1844), the host 102 may check whether the response includes mapping information at S1848.

When the response does not include the mapping information (no at S1848), the host 102 may check the response corresponding to the previous command at S1852.

In contrast, when the received response includes the mapping information ("yes" at S1848), the host 102 may store the mapping information included in the response in the internal storage space of the host 102 or update the mapping information already stored in the host 102 at S1850.
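The host-side branch of FIG. 18 (operations S1842 to S1852) may be sketched as follows. The field names "notice" and "map_info" and the returned action labels are assumptions for illustration, not part of the disclosed protocol.

```python
def handle_response(response, host_map):
    """Host-side handling of a received response: check for a
    notification (S1844/S1846), otherwise check for piggybacked mapping
    information (S1848/S1850) and update the copy in host memory, then
    check the result of the previous command (S1852)."""
    actions = []
    if response.get("notice"):                 # S1844: notification found
        actions.append("prepare_for_map_info") # S1846: get ready to store
    elif "map_info" in response:               # S1848: mapping info found
        for lba, pba in response["map_info"]:
            host_map[lba] = pba                # S1850: store or update
        actions.append("map_updated")
    actions.append("check_result")             # S1852: success or failure
    return actions
```

Per the text above, a notification and the mapping information itself arrive in different responses, which is why the two checks are exclusive branches here.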

The memory system 110 including the map cache 146 according to an embodiment of the present disclosure may store frequently accessed mapping information or recently accessed mapping information in the map cache 146. In other words, the memory system 110 may store, in the map cache 146, mapping information for data that frequently or recently undergoes read and write operations, thereby reducing the burden of loading mapping information from the memory device 150. The memory system 110 may then selectively provide the host 102 with mapping information output from the map cache 146, which includes the write map cache 830 and the read map cache 850. Write map cache 830 and read map cache 850 may be implemented to have different sizes. In particular, the memory system 110 may not provide the host 102 with mapping information for data on which write operations are frequently or recently performed. As a result, the memory system 110 may reduce the burden on the host 102 by reducing the amount of mapping information provided to the host 102.

According to embodiments of the present disclosure, a memory system may efficiently search a mapping cache for mapping information and selectively provide updated mapping information to a host.

Although the present invention has been described with respect to particular embodiments, it will be apparent to those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the following claims.
