Data processing system including multiple memory systems

Document No.: 168353    Publication date: 2021-10-29

Description: This technology, "Data processing system including multiple memory systems", was created by 边谕俊 and filed on 2021-01-18. Its main content is summarized as follows: The present disclosure relates to a data processing apparatus comprising: a first memory system including first and second interfaces and a first storage region, coupled to a host through the first interface, and configured to set a size of a logical-to-physical (L2P) mapping of the first storage region to a first size unit; and a second memory system including a third interface coupled to the second interface to communicate with the first memory system, and configured to transmit capacity information of a second storage region included in the second memory system to the first memory system according to a request of the first memory system during an initial operation period, and to set a size of a logical-to-physical (L2P) mapping of the second storage region to a second size unit in response to a mapping setting command transmitted from the first memory system during the initial operation period.

1. A data processing apparatus comprising:

a first memory system including first and second interfaces and a first storage area, coupled to a host through the first interface, and setting a size of a logical-to-physical (L2P) mapping of the first storage area to a first size unit; and

a second memory system including a third interface coupled to the second interface to communicate with the first memory system, and transmitting capacity information of a second storage region included in the second memory system to the first memory system according to a request of the first memory system during an initial operation period, and setting a size of a logical-to-physical mapping (L2P mapping) of the second storage region to a second size unit in response to a mapping setting command transmitted from the first memory system during the initial operation period.

2. The data processing apparatus of claim 1, wherein the first memory system further: during the initial operation period, checking the capacity information of the second storage area, and setting the first size unit and the second size unit according to a result of the checking, the first size unit and the second size unit being different from each other.

3. The data processing apparatus of claim 2, wherein the first memory system sets the first size unit and the second size unit by:

when the second storage area is greater than or equal to the first storage area, generating a mapping setting command to set the second size unit to be greater than the first size unit and transmitting the generated mapping setting command to the second memory system, and

when the first storage area is larger than the second storage area, generating a mapping setting command that sets the first size unit to be larger than the second size unit, and transmitting the generated mapping setting command to the second memory system.

4. The data processing apparatus of claim 3, wherein the first memory system further:

analyzing an input command received from the host during a normal operation period following the initial operation period,

selecting the first memory system or the second memory system to process the input command according to a result of the analysis,

when the second memory system is selected to process the input command, receiving a result of processing the input command from the second memory system, and

transmitting a result of processing the input command to the host.

5. The data processing device of claim 4,

wherein when the input command is a write command, the first memory system analyzes the input command by checking a pattern of write data corresponding to the write command,

wherein the first memory system selects the first memory system or the second memory system by storing the write data in the first storage area or the second storage area when the input command is the write command,

wherein, when the input command is a read command, the first memory system analyzes the input command by checking a logical address corresponding to the read command, and

wherein, when the input command is the read command, the first memory system selects the first memory system or the second memory system by reading read data from the first storage area or the second storage area.

6. The data processing device of claim 5,

wherein the write data smaller than a reference size is a random pattern and the write data larger than the reference size is a sequential pattern,

wherein when the second size unit is set larger than the first size unit, the first memory system stores the write data of the sequential pattern in the second storage area and the write data of the random pattern in the first storage area, and

wherein the first memory system stores the write data of the sequential pattern in the first storage area and stores the write data of the random pattern in the second storage area when the first size unit is set to be larger than the second size unit.

7. The data processing apparatus of claim 6, wherein the first memory system further:

setting a range of first logical addresses corresponding to the first storage region and a range of second logical addresses corresponding to the second storage region during the initial operation period, the range of first logical addresses and the range of second logical addresses being different from each other,

sharing the second logical address with the second memory system, and

sharing with the host a range of summed logical addresses obtained by summing the ranges of the first and second logical addresses.

8. The data processing apparatus according to claim 7, wherein in a case where a second input logical address corresponding to the read command is not detected in the intermediate mapping information, the first memory system reads the read data from the first storage area in response to the read command and the second input logical address when the second input logical address is included in the range of the first logical address, and reads the read data by transmitting the read command and the second input logical address to the second memory system to read the read data from the second storage area when the second input logical address is included in the range of the second logical address.

9. The data processing apparatus according to claim 8, wherein in a case where a fifth intermediate logical address mapped to the second input logical address is detected by referring to the intermediate mapping information, the first memory system reads the read data by transmitting the read command and the fifth intermediate logical address to the second memory system to read the read data from the second storage area when the fifth intermediate logical address is included in the range of the second logical address, and reads the read data from the first storage area in response to the read command and the fifth intermediate logical address when the fifth intermediate logical address is included in the range of the first logical address.

10. The data processing apparatus according to claim 1, wherein a size of the logical-to-physical (L2P) mapping of the first storage area is a size of information representing a mapping relationship between physical addresses and logical addresses of the first storage area; and

wherein a size of the logical-to-physical (L2P) mapping of the second storage area is a size of information representing a mapping relationship between physical addresses and logical addresses of the second storage area.

11. A data processing apparatus comprising:

a main memory system including first to third interfaces and a main storage area, coupled to a host through the first interface, and setting a size of a logical-to-physical (L2P) mapping of the main storage area to a reference size unit;

a first sub-memory system including a fourth interface coupled to the second interface to communicate with the main memory system, and transferring first capacity information of a first storage area included in the first sub-memory system to the main memory system according to a request of the main memory system during an initial operation period, and setting a size of a logical-to-physical mapping, L2P mapping, of the first storage area to a first size unit in response to a first mapping setting command transferred from the main memory system during the initial operation period; and

a second sub-memory system including a fifth interface coupled to the third interface to communicate with the main memory system, and transferring second capacity information of a second storage area included in the second sub-memory system to the main memory system according to a request of the main memory system during the initial operation period, and setting a size of a logical-to-physical mapping, L2P mapping, of the second storage area to a second size unit in response to a second mapping setting command transferred from the main memory system during the initial operation period.

12. The data processing apparatus of claim 11, wherein the main memory system further: during the initial operation period, comparing the first capacity information with the second capacity information, and differently setting the first size unit and the second size unit within a range greater than the reference size unit according to a result of the comparison.

13. The data processing apparatus according to claim 12, wherein the main memory system sets the first size unit and the second size unit by:

generating a first mapping setting command and a second mapping setting command that set the first size unit to be larger than the second size unit when the first storage area is larger than the second storage area, and transmitting the generated first and second mapping setting commands to the first and second sub-memory systems;

generating a first mapping setting command and a second mapping setting command that set the second size unit to be larger than the first size unit when the second storage area is larger than the first storage area, and transmitting the generated first and second mapping setting commands to the first and second sub-memory systems; and

when the sizes of the first and second storage areas are the same, generating first and second mapping setting commands that set one of the first and second size units to be larger than the other, and transmitting the generated first and second mapping setting commands to the first and second sub-memory systems.

14. The data processing apparatus of claim 13, wherein the main memory system further:

analyzing an input command transmitted from the host during a normal operation period following the initial operation period,

selecting the main memory system, the first sub-memory system or the second sub-memory system to process the input command according to the result of the analysis, and

when the first sub-memory system or the second sub-memory system is selected to process the input command, receiving a result of processing the input command from the selected sub-memory system, and

transmitting a result of processing the input command to the host.

15. The data processing device of claim 14,

wherein, when the input command is a write command, the main memory system analyzes the input command by checking a pattern of write data corresponding to the write command,

wherein, when the input command is a write command, the main memory system selects the main memory system, the first sub-memory system, or the second sub-memory system by storing the write data in any one of the main storage area, the first storage area, and the second storage area, and

wherein, when the input command is a read command, the main memory system analyzes the input command by checking a logical address corresponding to the read command, and

wherein, when the input command is a read command, the main memory system selects the main memory system, the first sub-memory system, or the second sub-memory system by reading read data from any one of the main storage area, the first storage area, and the second storage area.

16. The data processing device of claim 15,

wherein the write data smaller than a first reference size is a random pattern and the write data larger than the first reference size is a sequential pattern,

wherein the sequential pattern of write data smaller than a second reference size is a first sequential pattern and the sequential pattern of write data larger than the second reference size is a second sequential pattern,

wherein the main memory system stores the random pattern of write data in the main storage area,

wherein when the second size unit is set larger than the first size unit, the main memory system stores the write data of the first sequential pattern in the first storage area and stores the write data of the second sequential pattern in the second storage area, and

wherein, when the first size unit is set larger than the second size unit, the main memory system stores the write data of the second sequential pattern in the first storage area and stores the write data of the first sequential pattern in the second storage area.

17. The data processing apparatus of claim 16, wherein the main memory system further:

setting a range of a main logical address corresponding to the main storage area, a range of a first logical address corresponding to the first storage area, and a range of a second logical address corresponding to the second storage area differently during the initial operation period,

sharing the first logical address with the first sub-memory system,

sharing the second logical address with the second sub-memory system, and

sharing with the host a range of summed logical addresses obtained by summing the ranges of the main logical address, the first logical address, and the second logical address.

18. The data processing apparatus according to claim 17, wherein in a case where a second input logical address corresponding to the read command is not detected in the intermediate mapping information, the main memory system reads the read data from the main storage area in response to the read command and the second input logical address when the second input logical address is included in the range of the main logical address, reads the read data by transmitting the read command and the second input logical address to the first sub-memory system to read the read data from the first storage area when the second input logical address is included in the range of the first logical address, and reads the read data by transmitting the read command and the second input logical address to the second sub-memory system to read the read data from the second storage area when the second input logical address is included in the range of the second logical address.

19. The data processing apparatus according to claim 18, wherein, in a case where a fifth intermediate logical address mapped to the second input logical address is detected by referring to the intermediate mapping information, the main memory system reads the read data by transmitting the read command and the fifth intermediate logical address to the first sub-memory system to read the read data from the first storage area when the fifth intermediate logical address is included in the range of the first logical address, and reads the read data by transmitting the read command and the fifth intermediate logical address to the second sub-memory system to read the read data from the second storage area when the fifth intermediate logical address is included in the range of the second logical address.

20. The data processing apparatus according to claim 11, wherein a size of the logical-to-physical (L2P) mapping of the main storage area is a size of information representing a mapping relationship between physical addresses and logical addresses of the main storage area,

wherein a size of the logical-to-physical (L2P) mapping of the first storage area is a size of information representing a mapping relationship between physical addresses and logical addresses of the first storage area, and

wherein a size of the logical-to-physical (L2P) mapping of the second storage area is a size of information representing a mapping relationship between physical addresses and logical addresses of the second storage area.

Technical Field

Various embodiments relate to a data processing system, and more particularly, to a data processing system including a plurality of memory systems and a method of operating a data processing system.

Background

Recently, the computer environment paradigm has shifted to pervasive computing, which enables access to computer systems virtually anytime and anywhere. As a result, the use of portable electronic devices such as mobile phones, digital cameras, and notebook computers has increased. Such portable electronic devices typically use or include a memory system, e.g., a data storage device, that includes or embeds at least one memory device. The data storage device may be used as a primary storage device or a secondary storage device of the portable electronic device.

Because non-volatile semiconductor memory devices have excellent stability and durability owing to the absence of mechanical moving parts (e.g., the actuator arm of a hard disk drive), and can provide high data access speeds and low power consumption, computing devices benefit from data storage implemented in the form of non-volatile semiconductor memory devices. Examples of such semiconductor data storage devices include a Universal Serial Bus (USB) memory device, memory cards having various interfaces, and Solid State Drives (SSDs).

Disclosure of Invention

Various embodiments relate to a data processing system capable of efficiently managing a plurality of memory systems.

In an embodiment, a data processing apparatus may include: a first memory system including first and second interfaces and a first storage region, coupled to a host through the first interface, and configured to set a size of a logical-to-physical (L2P) mapping of the first storage region to a first size unit; and a second memory system including a third interface coupled to the second interface to communicate with the first memory system, and configured to transmit capacity information of a second storage region included in the second memory system to the first memory system according to a request of the first memory system during an initial operation period, and set a size of a logical-to-physical (L2P) mapping of the second storage region to a second size unit in response to a mapping setting command transmitted from the first memory system during the initial operation period.

The first memory system may be further configured to: check the capacity information of the second storage area during the initial operation period, and set the first size unit and the second size unit, which are different from each other, according to a result of the checking.

The first memory system may set the first and second size units by: generating a mapping setting command to set the second size unit to be larger than the first size unit when the second storage area is larger than or equal to the first storage area, and transmitting the generated mapping setting command to the second memory system; and when the first storage area is larger than the second storage area, generating a mapping setting command that sets the first size unit to be larger than the second size unit, and transmitting the generated mapping setting command to the second memory system.
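As a rough illustration of the capacity check just described, the following C sketch shows one way the first memory system might derive the two size units from the reported capacities. The function name `choose_map_units` and the concrete 4 KB/64 KB unit values are assumptions for illustration only; the embodiment merely requires that the two units differ and that the larger (or equal) storage area receive the larger unit.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical L2P size units; only the ordering matters here. */
#define SMALL_UNIT (4u * 1024u)   /* 4 KB mapping unit  */
#define LARGE_UNIT (64u * 1024u)  /* 64 KB mapping unit */

struct map_setting {
    uint32_t first_size_unit;   /* unit for the first storage area  */
    uint32_t second_size_unit;  /* unit for the second storage area */
};

/* During the initial operation period the first memory system compares its
 * own capacity with the capacity reported by the second memory system and
 * assigns the larger mapping unit to the larger (or equal) storage area. */
static struct map_setting choose_map_units(uint64_t first_capacity,
                                           uint64_t second_capacity)
{
    struct map_setting s;
    if (second_capacity >= first_capacity) {
        s.first_size_unit  = SMALL_UNIT;
        s.second_size_unit = LARGE_UNIT;   /* second unit > first unit */
    } else {
        s.first_size_unit  = LARGE_UNIT;   /* first unit > second unit */
        s.second_size_unit = SMALL_UNIT;
    }
    return s;
}

int main(void)
{
    /* Example: 256 GB first area, 1 TB second area. */
    struct map_setting s = choose_map_units(256ULL << 30, 1ULL << 40);
    printf("first unit: %u B, second unit: %u B\n",
           s.first_size_unit, s.second_size_unit);
    return 0;
}
```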

The first memory system may be further configured to: analyze an input command received from the host during a normal operation period following the initial operation period; select the first or second memory system to process the input command according to a result of the analysis; receive, when the second memory system is selected to process the input command, a result of processing the input command from the second memory system; and transmit the result of processing the input command to the host.

When the input command is a write command, the first memory system may analyze the input command by checking a pattern of write data corresponding to the write command. When the input command is a write command, the first memory system may select the first or second memory system by storing the write data in the first or second storage area. When the input command is a read command, the first memory system may analyze the input command by checking a logical address corresponding to the read command. When the input command is a read command, the first memory system may select the first or second memory system by reading read data from the first or second storage area.

The write data smaller than the reference size may be a random pattern, and the write data larger than the reference size may be a sequential pattern. When the second size unit is set to be larger than the first size unit, the first memory system may store the write data of the sequential pattern in the second storage area, and may store the write data of the random pattern in the first storage area. When the first size unit is set to be larger than the second size unit, the first memory system may store the write data of the sequential pattern in the first storage area, and may store the write data of the random pattern in the second storage area.
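The paragraph above describes a simple size-based routing rule. A minimal C sketch of that rule follows; the reference size value, the `route_write` name, and the treatment of writes exactly equal to the reference size are assumptions made for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical reference size separating random from sequential writes. */
#define REFERENCE_SIZE (128u * 1024u)

enum target_area { FIRST_STORAGE_AREA, SECOND_STORAGE_AREA };

/* Write data smaller than the reference size is treated as the random
 * pattern, larger write data as the sequential pattern.  Sequential data
 * is stored in the area whose L2P size unit is larger, random data in the
 * other area (ties at the reference size are treated as sequential here). */
static enum target_area route_write(uint32_t write_size, bool second_unit_larger)
{
    bool sequential = (write_size >= REFERENCE_SIZE);

    if (second_unit_larger)
        return sequential ? SECOND_STORAGE_AREA : FIRST_STORAGE_AREA;
    return sequential ? FIRST_STORAGE_AREA : SECOND_STORAGE_AREA;
}
```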

The first memory system may be further configured to: set, during the initial operation period, a range of first logical addresses corresponding to the first storage region and a range of second logical addresses corresponding to the second storage region, the range of first logical addresses and the range of second logical addresses being different from each other; share the second logical addresses with the second memory system; and share with the host a range of summed logical addresses obtained by summing the ranges of the first and second logical addresses.
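One way to picture this address layout is sketched below in C: the host sees only the summed range, while the first memory system internally splits it into the first and second ranges. The structure and field names are illustrative assumptions, not part of the disclosure.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative layout: the first range starts at 0, the second range
 * follows immediately, and the host is given only the summed range. */
struct address_layout {
    uint64_t first_start,  first_count;   /* range of first logical addresses  */
    uint64_t second_start, second_count;  /* range of second logical addresses */
    uint64_t summed_count;                /* summed range shared with the host */
};

static struct address_layout build_layout(uint64_t first_count, uint64_t second_count)
{
    struct address_layout l;
    l.first_start  = 0;
    l.first_count  = first_count;
    l.second_start = first_count;             /* the two ranges do not overlap */
    l.second_count = second_count;
    l.summed_count = first_count + second_count;
    return l;
}

int main(void)
{
    struct address_layout l = build_layout(1000, 3000);
    printf("host sees logical addresses 0..%llu\n",
           (unsigned long long)(l.summed_count - 1));
    return 0;
}
```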

When the second size unit is set to be greater than the first size unit, the write data is of the sequential pattern, and the first input logical address corresponding to the write command is included in the range of the second logical address, the first memory system may store the write data by transmitting the write command and the first input logical address to the second memory system to store the write data in the second storage area. When the second size unit is set to be larger than the first size unit, the write data is of the random pattern, and the first input logical address is included in the range of the first logical address, the first memory system may store the write data in the first storage area in response to the write command and the first input logical address. When the first size unit is set to be larger than the second size unit, the write data is of the sequential pattern, and the first input logical address is included in the range of the first logical address, the first memory system may store the write data in the first storage area in response to the write command and the first input logical address. When the first size unit is set to be larger than the second size unit, the write data is of the random pattern, and the first input logical address is included in the range of the second logical address, the first memory system may store the write data by transmitting the write command and the first input logical address to the second memory system to store the write data in the second storage area.

When the second size unit is set to be larger than the first size unit, the write data is of the sequential pattern, and the first input logical address is included in the range of the first logical address, the first memory system may store the write data by: managing a first intermediate logical address included in the range of the second logical address as intermediate mapping information by mapping the first intermediate logical address to the first input logical address; and transmitting the write command and the first intermediate logical address to the second memory system to store the write data in the second storage area. When the second size unit is set to be larger than the first size unit, the write data is of the random pattern, and the first input logical address is included in the range of the second logical address, the first memory system may store the write data by: managing a second intermediate logical address included in the range of the first logical address as intermediate mapping information by mapping the second intermediate logical address to the first input logical address; and storing the write data in the first storage area in response to the write command and the second intermediate logical address. When the first size unit is set to be larger than the second size unit, the write data is of the sequential pattern, and the first input logical address is included in the range of the second logical address, the first memory system may store the write data by: managing a third intermediate logical address included in the range of the first logical address as intermediate mapping information by mapping the third intermediate logical address to the first input logical address; and storing the write data in the first storage area in response to the write command and the third intermediate logical address. When the first size unit is set to be larger than the second size unit, the write data is of the random pattern, and the first input logical address is included in the range of the first logical address, the first memory system may store the write data by: managing a fourth intermediate logical address included in the range of the second logical address as intermediate mapping information by mapping the fourth intermediate logical address to the first input logical address; and transmitting the write command and the fourth intermediate logical address to the second memory system to store the write data in the second storage area.
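The intermediate mapping information described above is essentially a small table of input-to-intermediate address pairs kept by the first memory system. The C sketch below illustrates the idea with a fixed-size in-memory table; the table size, the names `remap_add`/`remap_lookup`, and the linear search are illustrative assumptions rather than the disclosed implementation, which would use a persistent, indexed structure.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Toy stand-in for the "intermediate mapping information": pairs of
 * (first input logical address -> intermediate logical address). */
#define MAX_REMAP 1024

struct remap_entry {
    uint64_t input_lba;         /* logical address received from the host   */
    uint64_t intermediate_lba;  /* address actually used in the other range */
};

static struct remap_entry remap_table[MAX_REMAP];
static size_t remap_count;

/* Record that data written with input_lba was placed at intermediate_lba. */
static bool remap_add(uint64_t input_lba, uint64_t intermediate_lba)
{
    if (remap_count >= MAX_REMAP)
        return false;
    remap_table[remap_count].input_lba = input_lba;
    remap_table[remap_count].intermediate_lba = intermediate_lba;
    remap_count++;
    return true;
}

/* Later (e.g. on a read), check whether an intermediate address exists. */
static bool remap_lookup(uint64_t input_lba, uint64_t *intermediate_lba)
{
    for (size_t i = 0; i < remap_count; i++) {
        if (remap_table[i].input_lba == input_lba) {
            *intermediate_lba = remap_table[i].intermediate_lba;
            return true;
        }
    }
    return false;
}
```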

In a case where the second input logical address corresponding to the read command is not detected in the intermediate mapping information, the first memory system may read the read data from the first storage area in response to the read command and the second input logical address when the second input logical address is included in the range of the first logical address, and may read the read data by transmitting the read command and the second input logical address to the second memory system to read the read data from the second storage area when the second input logical address is included in the range of the second logical address.

In the case where a fifth intermediate logical address mapped to the second input logical address is detected by referring to the intermediate mapping information, the first memory system may read the read data by transmitting the read command and the fifth intermediate logical address to the second memory system to read the read data from the second storage area when the fifth intermediate logical address is included in the range of the second logical address, and may read the read data from the first storage area in response to the read command and the fifth intermediate logical address when the fifth intermediate logical address is included in the range of the first logical address.
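Putting the two read cases together, a read is first checked against the intermediate mapping information and is then routed by whichever address (input or intermediate) ends up being used. The C sketch below assumes the `remap_lookup` helper from the earlier sketch and a simple range check; all names and the range-check form are illustrative.

```c
#include <stdbool.h>
#include <stdint.h>

enum read_path { READ_FROM_FIRST_AREA, READ_FROM_SECOND_AREA };

/* Assumes the second logical address range starts at second_start and the
 * first range occupies everything below it (see the layout sketch above). */
static bool in_second_range(uint64_t lba, uint64_t second_start)
{
    return lba >= second_start;
}

/* remap_lookup is the intermediate-mapping lookup sketched earlier. */
extern bool remap_lookup(uint64_t input_lba, uint64_t *intermediate_lba);

static enum read_path route_read(uint64_t input_lba, uint64_t second_start,
                                 uint64_t *lba_to_use)
{
    uint64_t intermediate;

    if (remap_lookup(input_lba, &intermediate)) {
        /* An intermediate address was recorded at write time: read from
         * whichever range that intermediate address belongs to. */
        *lba_to_use = intermediate;
        return in_second_range(intermediate, second_start)
                   ? READ_FROM_SECOND_AREA : READ_FROM_FIRST_AREA;
    }

    /* No intermediate mapping: the input address is used directly. */
    *lba_to_use = input_lba;
    return in_second_range(input_lba, second_start)
               ? READ_FROM_SECOND_AREA : READ_FROM_FIRST_AREA;
}
```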

The size of the logical-to-physical (L2P) mapping of the first storage area may be the size of information indicating the mapping relationship between the physical addresses and the logical addresses of the first storage area. The size of the logical-to-physical (L2P) mapping of the second storage area may be the size of information indicating the mapping relationship between the physical addresses and the logical addresses of the second storage area.
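To make the size relationship concrete, a short worked example: assuming 4-byte map entries (an assumption for illustration), a 1 TB area mapped in 4 KB units needs roughly 1 GB of L2P map, while the same area mapped in 64 KB units needs only about 64 MB, which is why the larger size unit is attractive for the larger storage area.

```c
#include <stdint.h>
#include <stdio.h>

/* Rough L2P map footprint: one entry per mapping unit.  The 4-byte entry
 * width is an illustrative assumption. */
static uint64_t l2p_map_bytes(uint64_t area_bytes, uint64_t unit_bytes)
{
    const uint64_t entry_bytes = 4;
    return (area_bytes / unit_bytes) * entry_bytes;
}

int main(void)
{
    uint64_t area = 1ULL << 40;   /* 1 TB storage area */

    printf("4 KB unit : %llu MB of map\n",
           (unsigned long long)(l2p_map_bytes(area, 4096) >> 20));
    printf("64 KB unit: %llu MB of map\n",
           (unsigned long long)(l2p_map_bytes(area, 65536) >> 20));
    return 0;
}
```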

In an embodiment, a data processing apparatus may include: a main memory system including first to third interfaces and a main storage area, coupled to a host through the first interface, and configured to set a size of a logical-to-physical (L2P) mapping of the main storage area to a reference size unit; a first sub-memory system including a fourth interface coupled to the second interface to communicate with the main memory system and configured to transfer first capacity information of a first storage area included in the first sub-memory system to the main memory system according to a request of the main memory system during an initial operation period and set a size of a logical-to-physical (L2P) mapping of the first storage area to a first size unit in response to a first mapping setting command transferred from the main memory system during the initial operation period; and a second sub-memory system including a fifth interface coupled to the third interface to communicate with the main memory system, and configured to transfer second capacity information of a second storage area included in the second sub-memory system to the main memory system according to a request of the main memory system during an initial operation period, and set a size of a logical-to-physical (L2P) mapping of the second storage area to a second size unit in response to a second mapping setting command transferred from the main memory system during the initial operation period.

The main memory system may be further configured to: compare the first capacity information with the second capacity information during the initial operation period, and set the first size unit and the second size unit differently from each other, each within a range greater than the reference size unit, according to a result of the comparison.

The main memory system may set the first and second size units by: generating first and second mapping setting commands that set the first size unit to be larger than the second size unit when the first storage area is larger than the second storage area, and transmitting the generated first and second mapping setting commands to the first and second sub-memory systems; generating first and second mapping setting commands that set the second size unit to be larger than the first size unit when the second storage area is larger than the first storage area, and transmitting the generated first and second mapping setting commands to the first and second sub-memory systems; and when the sizes of the first and second storage areas are the same, generating first and second mapping setting commands that set one of the first and second size units to be larger than the other, and transmitting the generated first and second mapping setting commands to the first and second sub-memory systems.
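A minimal C sketch of this three-way decision is shown below. The concrete unit values (a 4 KB reference unit and 16 KB/64 KB sub-units) and the choice made when the two sub areas are equal are assumptions for illustration; the disclosure only requires that both sub-units exceed the reference unit and that one be larger than the other.

```c
#include <stdint.h>

/* Illustrative size units; both sub-units exceed the reference unit. */
#define REFERENCE_UNIT (4u  * 1024u)
#define MID_UNIT       (16u * 1024u)
#define BIG_UNIT       (64u * 1024u)

struct sub_map_setting {
    uint32_t first_size_unit;   /* unit for the first sub storage area  */
    uint32_t second_size_unit;  /* unit for the second sub storage area */
};

/* The main storage area keeps the reference unit; the larger sub storage
 * area receives the larger unit, and when the areas are equal one of the
 * two is simply picked to receive the larger unit. */
static struct sub_map_setting choose_sub_units(uint64_t first_capacity,
                                               uint64_t second_capacity)
{
    struct sub_map_setting s;

    if (first_capacity > second_capacity) {
        s.first_size_unit  = BIG_UNIT;
        s.second_size_unit = MID_UNIT;
    } else if (second_capacity > first_capacity) {
        s.first_size_unit  = MID_UNIT;
        s.second_size_unit = BIG_UNIT;
    } else {
        s.first_size_unit  = BIG_UNIT;   /* arbitrary pick when equal */
        s.second_size_unit = MID_UNIT;
    }
    return s;
}
```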

The main memory system may be further configured to: analyze an input command transmitted from the host during a normal operation period following the initial operation period; select the main memory system, the first sub-memory system, or the second sub-memory system to process the input command according to a result of the analysis; and, when the first or second sub-memory system is selected to process the input command, receive a result of processing the input command from the selected sub-memory system and transmit the result of processing the input command to the host.

When the input command is a write command, the main memory system may analyze the input command by checking a pattern of write data corresponding to the write command. When the input command is a write command, the main memory system may select the main memory system, the first sub-memory system, or the second sub-memory system by storing the write data in any one of the main storage area, the first storage area, and the second storage area. When the input command is a read command, the main memory system may analyze the input command by checking a logical address corresponding to the read command. When the input command is a read command, the main memory system may select the main memory system, the first sub-memory system, or the second sub-memory system by reading read data from any one of the main storage area, the first storage area, and the second storage area.

The write data smaller than the first reference size may be a random pattern, and the write data larger than the first reference size may be a sequential pattern. The write data of the sequential pattern smaller than the second reference size may be a first sequential pattern, and the write data of the sequential pattern larger than the second reference size may be a second sequential pattern. The main memory system may store a random pattern of write data in the main storage area. When the second size unit is set to be larger than the first size unit, the main memory system may store the write data of the first sequential pattern in the first storage area, and may store the write data of the second sequential pattern in the second storage area. When the first size unit is set to be larger than the second size unit, the main memory system may store the write data of the second sequential pattern in the first storage area, and may store the write data of the first sequential pattern in the second storage area.
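The two reference sizes split writes into three classes, which are then distributed across the three storage areas. The C sketch below illustrates the classification and placement; the concrete reference sizes and the function names are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical reference sizes for the three-way classification. */
#define FIRST_REFERENCE_SIZE  (128u  * 1024u)
#define SECOND_REFERENCE_SIZE (1024u * 1024u)

enum pattern { RANDOM_PATTERN, FIRST_SEQUENTIAL, SECOND_SEQUENTIAL };
enum area    { MAIN_AREA, FIRST_SUB_AREA, SECOND_SUB_AREA };

static enum pattern classify_write(uint32_t write_size)
{
    if (write_size < FIRST_REFERENCE_SIZE)
        return RANDOM_PATTERN;        /* below the first reference size */
    if (write_size < SECOND_REFERENCE_SIZE)
        return FIRST_SEQUENTIAL;      /* sequential, below the second   */
    return SECOND_SEQUENTIAL;         /* sequential, above the second   */
}

/* Random writes stay in the main storage area; the longer sequential
 * pattern goes to whichever sub area was given the larger size unit. */
static enum area place_write(enum pattern p, bool second_unit_larger)
{
    if (p == RANDOM_PATTERN)
        return MAIN_AREA;
    if (p == SECOND_SEQUENTIAL)
        return second_unit_larger ? SECOND_SUB_AREA : FIRST_SUB_AREA;
    return second_unit_larger ? FIRST_SUB_AREA : SECOND_SUB_AREA;
}
```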

The main memory system may be further configured to: set, during the initial operation period, a range of a main logical address corresponding to the main storage area, a range of a first logical address corresponding to the first storage area, and a range of a second logical address corresponding to the second storage area differently from one another; share the first logical address with the first sub-memory system; share the second logical address with the second sub-memory system; and share with the host a range of summed logical addresses obtained by summing the ranges of the main logical address, the first logical address, and the second logical address.

When the second size unit is set to be greater than the first size unit, the write data is of the first sequential pattern, and the first input logical address corresponding to the write command is included in the range of the first logical address, the main memory system may store the write data by transmitting the write command and the first input logical address to the first sub-memory system to store the write data in the first storage area. When the second size unit is set to be larger than the first size unit, the write data is of the second sequential pattern, and the first input logical address is included in the range of the second logical address, the main memory system may store the write data by transmitting the write command and the first input logical address to the second sub-memory system to store the write data in the second storage area. When the first size unit is set to be larger than the second size unit, the write data is of the first sequential pattern, and the first input logical address is included in the range of the second logical address, the main memory system may store the write data by transmitting the write command and the first input logical address to the second sub-memory system to store the write data in the second storage area. When the first size unit is set to be larger than the second size unit, the write data is of the second sequential pattern, and the first input logical address is included in the range of the first logical address, the main memory system may store the write data by transmitting the write command and the first input logical address to the first sub-memory system to store the write data in the first storage area.

When the second size unit is set to be larger than the first size unit, the write data is of the second sequential pattern, and the first input logical address is included in the range of the first logical address, the main memory system may store the write data by: managing a first intermediate logical address included in the range of the second logical address as intermediate mapping information by mapping the first intermediate logical address to the first input logical address; and transmitting the write command and the first intermediate logical address to the second sub-memory system to store the write data in the second storage area. When the second size unit is set to be larger than the first size unit, the write data is of the first sequential pattern, and the first input logical address is included in the range of the second logical address, the main memory system may store the write data by: managing a second intermediate logical address included in the range of the first logical address as intermediate mapping information by mapping the second intermediate logical address to the first input logical address; and transmitting the write command and the second intermediate logical address to the first sub-memory system to store the write data in the first storage area. When the first size unit is set to be larger than the second size unit, the write data is of the first sequential pattern, and the first input logical address is included in the range of the first logical address, the main memory system may store the write data by: managing a third intermediate logical address included in the range of the second logical address as intermediate mapping information by mapping the third intermediate logical address to the first input logical address; and transmitting the write command and the third intermediate logical address to the second sub-memory system to store the write data in the second storage area. When the first size unit is set to be larger than the second size unit, the write data is of the second sequential pattern, and the first input logical address is included in the range of the second logical address, the main memory system may store the write data by: managing a fourth intermediate logical address included in the range of the first logical address as intermediate mapping information by mapping the fourth intermediate logical address to the first input logical address; and transmitting the write command and the fourth intermediate logical address to the first sub-memory system to store the write data in the first storage area.

In a case where the second input logical address corresponding to the read command is not detected in the intermediate mapping information, the main memory system may read the read data from the main storage area in response to the read command and the second input logical address when the second input logical address is included in the range of the main logical address, may read the read data by transmitting the read command and the second input logical address to the first sub-memory system to read the read data from the first storage area when the second input logical address is included in the range of the first logical address, and may read the read data by transmitting the read command and the second input logical address to the second sub-memory system to read the read data from the second storage area when the second input logical address is included in the range of the second logical address.

In a case where a fifth intermediate logical address mapped to the second input logical address is detected by referring to the intermediate mapping information, the main memory system may read the read data by transmitting the read command and the fifth intermediate logical address to the first sub-memory system to read the read data from the first storage area when the fifth intermediate logical address is included in the range of the first logical address, and may read the read data by transmitting the read command and the fifth intermediate logical address to the second sub-memory system to read the read data from the second storage area when the fifth intermediate logical address is included in the range of the second logical address.

The size of the logical-to-physical (L2P) mapping of the main storage area may be the size of the information representing the mapping between the physical addresses and the logical addresses of the main storage area. The size of the logical-to-physical (L2P) mapping of the first storage area may be the size of information indicating the mapping relationship between the physical addresses and the logical addresses of the first storage area. The size of the logical-to-physical (L2P) mapping of the second storage area may be the size of information indicating the mapping relationship between the physical addresses and the logical addresses of the second storage area.

In an embodiment, a data processing system may include: a host adapted to provide together one of the first and second requests and one of the first and second logical addresses in the first and second ranges, respectively; a first system adapted to access a first storage device in units of a first size based on a first mapping relationship between a first logical address and a first physical address corresponding to the first size in response to a first request; and a second system adapted to access, in response to a second request, a second storage device in units of a second size based on a second mapping relationship between a second logical address and a second physical address corresponding to the second size. The first system may be further adapted to access the first storage device in units of the first size based on a third mapping between the second logical address and a first intermediate logical address in the first range and a fourth mapping between the first intermediate logical address and the first physical address in response to the first request. The second system may be further adapted to access the second storage in units of the second size in response to the second request based on a fifth mapping between the first logical address and a second intermediate logical address in the second range and a sixth mapping between the second intermediate logical address and the second physical address.

In the present technology, when at least two physically separated memory systems are coupled to a host, at least two logical address ranges corresponding to the at least two memory systems may be summed into one range that is shared with the host, and thus the host may logically use the at least two physically separated memory systems as if they were one memory system.

In addition, in the present technology, when at least two physically separated memory systems are coupled to a host, the respective roles of the at least two memory systems may be determined according to their coupling relationship with the host, and the size unit and the pattern of the data to be stored in each of the at least two memory systems may be determined differently according to the determined roles. By doing so, data can be processed more efficiently when the at least two physically separated memory systems are logically used as one memory system.

Drawings

Fig. 1A-1C illustrate a data processing system including multiple memory systems according to an embodiment of the present disclosure.

FIG. 2 illustrates setup operations and command processing operations of a data processing system according to an embodiment of the present disclosure.

FIG. 3 illustrates an example of command processing operations of a data processing system according to an embodiment of the present disclosure.

FIG. 4 illustrates another example of a command processing operation of a data processing system according to an embodiment of the present disclosure.

FIG. 5 illustrates operations in a data processing system to manage at least a plurality of memory systems based on logical addresses according to embodiments of the present disclosure.

Fig. 6A and 6B illustrate examples of logical address based command processing operations in a data processing system according to embodiments of the present disclosure.

FIG. 7 illustrates another example of a logical address based command processing operation in a data processing system according to an embodiment of the present disclosure.

Fig. 8A-8D illustrate a data processing system including multiple memory systems according to an embodiment of the present disclosure.

Fig. 9A and 9B illustrate setup operations and command processing operations of a data processing system according to an embodiment of the present disclosure.

FIG. 10 illustrates an example of command processing operations of a data processing system according to an embodiment of the present disclosure.

FIG. 11 illustrates another example of a command processing operation of a data processing system according to an embodiment of the present disclosure.

FIG. 12 illustrates operations in a data processing system to manage at least a plurality of memory systems based on logical addresses according to embodiments of the present disclosure.

Fig. 13A and 13B illustrate examples of logical address based command processing operations in a data processing system according to embodiments of the present disclosure.

FIG. 14 illustrates another example of a logical address based command processing operation in a data processing system according to an embodiment of the present disclosure.

Detailed Description

Various examples of the disclosure are described in more detail below with reference to the accompanying drawings. However, aspects and features of the present invention may be implemented differently to form other embodiments including variations of any of the embodiments disclosed. Accordingly, the present invention is not limited to the embodiments set forth herein. Rather, the described embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the disclosure to those skilled in the art. Throughout the disclosure, like reference numerals refer to like parts in the various figures and examples of the disclosure. It is noted that references to "an embodiment," "another embodiment," etc., are not necessarily intended to mean only one embodiment, and different references to any such phrase are not necessarily intended to refer to the same embodiment.

It will be understood that, although the terms first, second, third, etc. may be used herein to identify various elements, these elements are not limited by these terms. These terms are used to distinguish one element from another element having the same or similar name. Thus, a first element in one instance may be termed a second or third element in another instance, without indicating any change in the elements themselves.

The drawings are not necessarily to scale and, in some instances, proportions may have been exaggerated in order to clearly illustrate features of embodiments. When an element is referred to as being connected or coupled to another element, it will be understood that the former may be directly connected or coupled to the latter, or electrically connected or coupled to the latter via one or more intervening elements therebetween. In addition, it will also be understood that when an element is referred to as being "between" two elements, it can be the only element between the two elements, or one or more intervening elements may also be present.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms and vice versa, unless the context clearly indicates otherwise. Similarly, the indefinite articles "a" and "an" mean one or more, unless clearly indicated by the language or context, only one.

It will be further understood that the terms "comprises," "comprising," "includes" and "including," when used in this specification, specify the presence of stated elements and do not preclude the presence or addition of one or more other elements. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.

Unless defined otherwise, all terms including technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs in view of this disclosure. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the present disclosure and the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.

In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. The present invention may be practiced without some or all of these specific details. In other instances, well known process structures and/or processes have not been described in detail in order to not unnecessarily obscure the present invention.

It is also noted that, in some instances, features or elements described in connection with one embodiment may be used alone or in combination with other features or elements of another embodiment, as would be apparent to one of ordinary skill in the relevant art, unless specifically noted otherwise.

Embodiments of the present disclosure are described in detail below with reference to the drawings, wherein like reference numerals refer to like elements.

First embodiment

Fig. 1A-1C illustrate a data processing system including multiple memory systems according to an embodiment of the present disclosure.

Referring to FIG. 1A, a data processing system 100 according to an embodiment of the present disclosure may include a host 102 and a plurality of memory systems 110 and 120.

According to an embodiment, the plurality of memory systems 110 and 120 may include two memory systems, i.e., a first memory system 110 and a second memory system 120.

The host 102 may transmit a plurality of commands corresponding to the user request to the plurality of memory systems 110 and 120, and the plurality of memory systems 110 and 120 may perform a plurality of command operations corresponding to the plurality of commands, that is, operations corresponding to the user request.

The multiple memory systems 110 and 120 may operate in response to requests by the host 102, and in particular, may store data to be accessed by the host 102. In other words, one or both of the plurality of memory systems 110 and 120 may serve as a primary or secondary memory device for the host 102. Each of the plurality of memory systems 110 and 120 may be implemented as any of various types of memory devices, depending on the host interface protocol coupled to the host 102. For example, each of the memory systems 110 and 120 may be implemented as a Solid State Drive (SSD), a multimedia card in the form of an MMC, an eMMC (embedded MMC), an RS-MMC (reduced-size MMC), and a micro MMC, a secure digital card in the form of an SD, a mini SD, and a micro SD, a Universal Serial Bus (USB) storage device, a universal flash memory (UFS) device, a Compact Flash (CF) card, a smart media card, and/or a memory stick.

Each of the memory systems 110 and 120 may be integrated into one semiconductor device to configure a memory card such as: a Personal Computer Memory Card International Association (PCMCIA) card, a Compact Flash (CF) card, a smart media card in the form of SM or SMC, a memory stick, a multimedia card in the form of MMC, RS-MMC or micro-MMC, a secure digital card in the form of SD, mini-SD, micro-SD or SDHC, and/or a Universal Flash Storage (UFS) device.

In another embodiment, each of the memory systems 110 and 120 may constitute a computer, an ultra mobile PC (UMPC), a workstation, a netbook, a personal digital assistant (PDA), a portable computer, a web tablet, a wireless phone, a mobile phone, a smart phone, an electronic book, a portable multimedia player (PMP), a portable game machine, a navigation device, a black box, a digital camera, a digital multimedia broadcasting (DMB) player, a three-dimensional television, a smart television, a digital audio recorder, a digital audio player, a digital picture recorder, a digital picture player, a digital video recorder, a digital video player, a storage device configuring a data center, a device capable of transmitting and receiving information in a wireless environment, one of various electronic devices configuring a home network, one of various electronic devices configuring a computer network, one of various electronic devices configuring a telematics network, an RFID (radio frequency identification) device, or one of various constituent elements configuring a computing system.

The first memory system 110 may include a first storage region (MEMORY SPACE1) 1102. The second memory system 120 may include a second storage region (MEMORY SPACE2) 1202. Each of the first and second storage regions 1102 and 1202 may include a storage device such as: a volatile memory device, e.g., a dynamic random access memory (DRAM) or a static random access memory (SRAM), or a non-volatile memory device, e.g., a read only memory (ROM), a mask ROM (MROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a ferroelectric RAM (FRAM), a phase-change RAM (PRAM), a magnetic RAM (MRAM), a resistive RAM (RRAM), or a flash memory.

The first memory system 110 may be coupled directly to the host 102. That is, the first memory system 110 may receive write data according to a request of the host 102 and store the write data in the first storage area 1102. Also, the first memory system 110 may read data stored in the first storage region 1102 according to a request of the host 102 and output the read data to the host 102.

The second memory system 120 may be directly coupled to the first memory system 110, but may not be directly coupled to the host 102. That is, when commands and data are transferred between the second memory system 120 and the host 102, the commands and data may be transferred through the first memory system 110. For example, when receiving write data according to a request of the host 102, the second memory system 120 may receive the write data through the first memory system 110. Of course, the second memory system 120 may store the write data of the host 102 received through the first memory system 110 in the second storage area 1202 included in the second memory system 120. Similarly, when read data stored in the second storage region 1202 included in the second memory system 120 is read and output to the host 102 according to a request of the host 102, the second memory system 120 may output the read data to the host 102 through the first memory system 110.

Referring to fig. 1B, a detailed configuration of the first memory system 110 is shown.

The first memory system 110 includes: a memory device, i.e., a first nonvolatile memory device 1501, which stores data to be accessed by the host 102; and a first controller 1301 which controls data storage to the first nonvolatile memory device 1501.

The first non-volatile memory device 1501 may be configured to be the same as the first storage region 1102 included in the first memory system 110 described above with reference to fig. 1A.

The first controller 1301 controls the first nonvolatile memory device 1501 in response to a request from the host 102. For example, the first controller 1301 supplies data read from the first nonvolatile memory device 1501 to the host 102, and stores data supplied from the host 102 in the first nonvolatile memory device 1501. To this end, the first controller 1301 controls operations of the first nonvolatile memory device 1501, such as a read operation, a write operation, a program operation, and an erase operation.

In detail, the first controller 1301 included in the first memory system 110 may include: a first interface (INTERFACE1) 1321, a processor (PROCESSOR) 1341, an error correction code (ECC) component 1381 (hereinafter abbreviated as ECC 1381), a power management unit (PMU) 1401, a memory interface (MEMORY INTERFACE) 1421, a memory (MEMORY) 1441, and a second interface (INTERFACE2) 131.

The first interface 1321 performs operations for exchanging commands and data to be transferred between the first memory system 110 and the host 102, and may be configured to communicate with the host 102 through at least one of various interface protocols such as: universal serial bus (USB), multimedia card (MMC), peripheral component interconnect express (PCI-e), serial attached SCSI (SAS), serial advanced technology attachment (SATA), parallel advanced technology attachment (PATA), small computer system interface (SCSI), enhanced small disk interface (ESDI), integrated drive electronics (IDE), and mobile industry processor interface (MIPI). As an area for exchanging data with the host 102, the first interface 1321 may be driven by firmware called a host interface layer (HIL).

ECC 1381 may correct erroneous bits of data processed in the first non-volatile memory device 1501 and may include an ECC encoder and an ECC decoder. The ECC encoder may perform error correction encoding on data to be programmed to the first nonvolatile memory device 1501, and may thereby generate data to which parity bits are added. The data to which the parity bit is added may be stored in the first nonvolatile memory device 1501. When reading data stored in the first non-volatile memory device 1501, the ECC decoder detects and corrects errors in the data read from the first non-volatile memory device 1501. In other words, after performing error correction decoding on data read from the first nonvolatile memory device 1501, the ECC 1381 may determine whether the error correction decoding has succeeded, may output a signal indicating the determination, for example, an error correction success/failure signal, and may correct an error bit of the read data by using a parity bit generated in an ECC encoding process. If the number of error bits that have occurred is equal to or greater than the error bit correction limit, ECC 1381 may not correct the error bits and may output an error correction failure signal indicating that the error bits cannot be corrected.
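
As an illustration only of the encoder/decoder split described above, the sketch below uses a toy single-error-correcting Hamming(7,4) code in Python: the encoder appends parity bits to 4 data bits, and the decoder corrects a single flipped bit and reports a syndrome that plays the role of a success/failure indication. This is not the device's actual ECC; real controllers use stronger codes such as those listed in the next paragraph, and all names here are illustrative.

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit codeword by appending 3 parity bits."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]          # codeword positions 1..7

def hamming74_decode(c):
    """Return (decoded data bits, syndrome). A non-zero syndrome means a single-bit
    error was detected at that position and corrected."""
    p1, p2, d1, p3, d2, d3, d4 = c
    s1 = p1 ^ d1 ^ d2 ^ d4
    s2 = p2 ^ d1 ^ d3 ^ d4
    s3 = p3 ^ d2 ^ d3 ^ d4
    syndrome = s1 + 2 * s2 + 4 * s3              # 0 means no error detected
    corrected = list(c)
    if syndrome:
        corrected[syndrome - 1] ^= 1             # flip the erroneous bit
    return [corrected[2], corrected[4], corrected[5], corrected[6]], syndrome

codeword = hamming74_encode([1, 0, 1, 1])
codeword[4] ^= 1                                 # simulate a 1-bit read error
data, syndrome = hamming74_decode(codeword)
print(data, syndrome)                            # [1, 0, 1, 1] 5 -> corrected successfully
```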

ECC 1381 may perform error correction by using an LDPC (low density parity check) code, a BCH (Bose-Chaudhuri-Hocquenghem) code, a turbo code, a Reed-Solomon (Reed-Solomon) code, a convolutional code, an RSC (recursive systematic code), or a coded modulation such as TCM (trellis coded modulation) or BCM (block coded modulation). However, error correction is not limited to these techniques. To this end, ECC 1381 may include suitable hardware and software for error correction.

The PMU 1401 provides and manages power of the first controller 1301, that is, power of components included in the first controller 1301.

The memory interface 1421 serves as a memory/storage interface that performs an interface connection between the first controller 1301 and the first nonvolatile memory device 1501 to allow the first controller 1301 to control the first nonvolatile memory device 1501 in response to a request from the host 102. When the first nonvolatile memory device 1501 is a flash memory, particularly a NAND flash memory, the memory interface 1421, serving as a NAND Flash Controller (NFC), generates control signals for the first nonvolatile memory device 1501 and processes data under the control of the processor 1341.

The memory interface 1421 may support the operation of an interface that processes commands and data between the first controller 1301 and the first nonvolatile memory device 1501, for example, a NAND flash interface, and in particular data input/output between the first controller 1301 and the first nonvolatile memory device 1501. As an area that exchanges data with the first nonvolatile memory device 1501, the memory interface 1421 may be driven by firmware called a Flash Interface Layer (FIL).

The second interface 131 may be an interface that processes commands and data between the first controller 1301 and the second memory system 120, i.e., a system interface that performs an interface connection between the first memory system 110 and the second memory system 120. The second interface 131 may transfer commands and data between the first memory system 110 and the second memory system 120 under the control of the processor 1341.

The memory 1441, which is a working memory of the first memory system 110 and the first controller 1301, stores data for driving the first memory system 110 and the first controller 1301. In detail, when the first controller 1301 controls the first nonvolatile memory apparatus 1501 in response to a request from the host 102, for example, when the first controller 1301 controls operations of the first nonvolatile memory apparatus 1501 such as a read operation, a write operation, a program operation, and an erase operation, the memory 1441 temporarily stores data that should be managed. Further, the memory 1441 may temporarily store data that should be managed when commands and data are transferred between the first controller 1301 and the second memory system 120.

The memory 1441 may be implemented by a volatile memory. For example, the memory 1441 may be implemented by a Static Random Access Memory (SRAM) or a Dynamic Random Access Memory (DRAM).

The memory 1441 may be provided inside the first controller 1301 as shown in fig. 1B, or may be provided outside the first controller 1301. When the memory 1441 is provided outside the first controller 1301, the memory 1441 may be implemented by a separate external volatile memory that is operatively coupled to the first controller 1301 through a separate memory interface (not shown) to exchange data with the first controller 1301.

The memory 1441 may store data that should be managed in a process of controlling the operation of the first nonvolatile memory device 1501 and a process of transferring data between the first memory system 110 and the second memory system 120. To store such data, memory 1441 may include program memory, data memory, write buffer/cache, read buffer/cache, data buffer/cache, map buffer/cache, and the like.

The processor 1341 controls all operations of the first memory system 110, and in particular, controls a program operation or a read operation on the first non-volatile memory device 1501 in response to a write request or a read request from the host 102. The processor 1341 drives firmware called a Flash Translation Layer (FTL) to control general operations of the first memory system 110 with respect to the first non-volatile memory device 1501. The processor 1341 may be implemented by a microprocessor or a Central Processing Unit (CPU).

For example, through the processor 1341, the first controller 1301 performs in the first nonvolatile memory device 1501 an operation requested by the host 102, that is, a command operation corresponding to a command received from the host 102. The first controller 1301 may perform a foreground operation, which is a command operation corresponding to a command received from the host 102, such as a program operation corresponding to a write command, a read operation corresponding to a read command, an erase operation corresponding to an erase command, or a parameter setting operation corresponding to a set parameter command or a set feature command, which is a set command.

The first controller 1301 may perform a background operation on the first nonvolatile memory device 1501 through the processor 1341. Background operations for the first non-volatile memory device 1501 may include operations to copy data stored in a memory block among the memory blocks MEMORY BLOCK<0, 1, 2, ...> of the first non-volatile memory device 1501 to another memory block, e.g., Garbage Collection (GC) operations. Background operations for the first non-volatile memory device 1501 may include operations, such as Wear Leveling (WL) operations, that exchange stored data between the memory blocks MEMORY BLOCK<0, 1, 2, ...> of the first non-volatile memory device 1501. Background operations for the first non-volatile memory device 1501 may include operations to store mapping data held in the first controller 1301 in a memory block MEMORY BLOCK<0, 1, 2, ...> of the first non-volatile memory device 1501, such as a map flush operation. Background operations for the first non-volatile memory device 1501 may include bad management operations for the first non-volatile memory device 1501, for example, bad block management operations that check and process bad blocks among the plurality of memory blocks MEMORY BLOCK<0, 1, 2, ...> included in the first non-volatile memory device 1501.
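
Only to make the "copy data from one memory block to another" idea concrete, a minimal garbage collection sketch follows; the toy Block class and its methods are illustrative assumptions, not the controller's actual firmware interface.

```python
class Block:
    """Toy memory block holding a list of valid pages (illustrative only)."""
    def __init__(self, valid_pages=None):
        self.valid_pages = list(valid_pages or [])
    def program(self, page):
        self.valid_pages.append(page)
    def erase(self):
        self.valid_pages.clear()

def garbage_collect(candidate_blocks, free_block):
    """Copy the valid pages of the victim block with the fewest valid pages into a
    free block, then erase the victim so it can be reused."""
    victim = min(candidate_blocks, key=lambda blk: len(blk.valid_pages))
    for page in victim.valid_pages:
        free_block.program(page)        # relocate still-valid data
    victim.erase()                      # reclaim the victim block
    return victim

blocks = [Block(["a", "b"]), Block(["c"])]
victim = garbage_collect(blocks, Block())
print(len(victim.valid_pages))          # 0 after the victim block is erased
```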

The first controller 1301 may generate and manage log data corresponding to operations that access the memory blocks MEMORY BLOCK<0, 1, 2, ...> of the first nonvolatile memory device 1501 through the processor 1341. An operation that accesses a memory block MEMORY BLOCK<0, 1, 2, ...> of the first non-volatile memory device 1501 includes performing a foreground operation or a background operation on the memory blocks MEMORY BLOCK<0, 1, 2, ...> of the first non-volatile memory device 1501.

The processor 1341 of the first controller 1301 may include a unit (not shown) for performing bad block management on the first nonvolatile memory device 1501. The unit for performing bad block management on the first nonvolatile memory device 1501 checks for bad blocks among the plurality of memory blocks MEMORY BLOCK<0, 1, 2, ...> included in the first nonvolatile memory device 1501 and treats each checked bad block as a defective block. Bad block management means that, when the first nonvolatile memory device 1501 is a flash memory such as a NAND flash memory, since a program failure may occur due to the characteristics of the NAND flash memory when writing data, for example, when programming data, the memory block in which the program failure has occurred is treated as defective and the program-failed data is written (i.e., programmed) into a new memory block.
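
The sketch below illustrates the bad block handling just described: when programming fails, the block is treated as defective and the data is programmed into a new block. The ToyController model and its fields are hypothetical stand-ins used only for illustration.

```python
class ToyController:
    """Minimal stand-in for a flash controller; real firmware looks nothing like this."""
    def __init__(self, failing_blocks):
        self.failing = set(failing_blocks)   # blocks that fail to program
        self.bad = set()                     # blocks retired as defective
        self.data = {}                       # block -> programmed data
        self.next_free = 100                 # next spare block number to hand out

    def program(self, block, data):
        if block in self.failing:
            return False                     # simulated program failure
        self.data[block] = data
        return True

def program_with_bad_block_handling(ctrl, block, data):
    """If programming fails, treat the block as defective and reprogram the data
    into a newly allocated block, as described above."""
    if ctrl.program(block, data):
        return block
    ctrl.bad.add(block)                      # retire the failed block
    new_block = ctrl.next_free               # allocate a replacement block
    ctrl.next_free += 1
    ctrl.program(new_block, data)            # write the program-failed data again
    return new_block

ctrl = ToyController(failing_blocks=[3])
print(program_with_bad_block_handling(ctrl, 3, b"page"))   # 100: data moved to a new block
```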

The first controller 1301 performs an operation of transferring commands and data to be input/output between the first memory system 110 and the second memory system 120 through the processor 1341. Commands and data that may be input/output between the first memory system 110 and the second memory system 120 may be transferred from the host 102 to the first memory system 110.

The first nonvolatile memory device 1501 in the first memory system 110 can retain stored data even when power is not supplied. In particular, the first nonvolatile memory device 1501 in the first memory system 110 may store write data WDATA supplied from the host 102 through a write operation, and may supply read data (not shown) stored in the first nonvolatile memory device 1501 to the host 102 through a read operation.

Although the first nonvolatile memory device 1501 may be implemented by a nonvolatile memory such as a flash memory, for example, a NAND flash memory, it is noted that the first nonvolatile memory device 1501 may be implemented by any of various memories such as: phase change random access memory (PCRAM), resistive random access memory (RRAM or ReRAM), ferroelectric random access memory (FRAM), and/or spin transfer torque magnetic random access memory (STT-RAM or STT-MRAM).

The first non-volatile memory device 1501 includes a plurality of memory blocks MEMORY BLOCK<0, 1, 2, ...>. In other words, the first nonvolatile memory device 1501 may store write data WDATA supplied from the host 102 in the memory blocks MEMORY BLOCK<0, 1, 2, ...> through a write operation, and may supply read data (not shown) stored in the memory blocks MEMORY BLOCK<0, 1, 2, ...> to the host 102 through a read operation.

Each of the memory blocks MEMORY BLOCK<0, 1, 2, ...> included in the first nonvolatile memory device 1501 includes a plurality of pages P<0, 1, 2, 3, 4, ...>. Also, although not shown in detail in the drawing, each of the pages P<0, 1, 2, 3, 4, ...> includes a plurality of memory cells.

Each of the memory blocks MEMORY BLOCK<0, 1, 2, ...> included in the first nonvolatile memory device 1501 may be a single-level cell (SLC) memory block or a multi-level cell (MLC) memory block, depending on the number of bits that may be stored or represented in one memory cell included in the memory block. An SLC memory block includes a plurality of pages implemented by memory cells each storing 1 bit of data, and has excellent data operation performance and high endurance. An MLC memory block includes a plurality of pages implemented by memory cells each storing multiple bits of data (e.g., 2 or more bits of data), and may be more highly integrated than an SLC memory block because it has greater data storage space than an SLC memory block.

MLC memory blocks may be further subdivided according to the number of bits per cell. In a narrower sense, an MLC memory block may refer to a memory block that includes a plurality of pages implemented by memory cells each capable of storing 2-bit data, and different names are assigned to higher-level MLC blocks to represent them more precisely. For example, a triple-level cell (TLC) memory block includes a plurality of pages implemented by memory cells each capable of storing 3-bit data, a quadruple-level cell (QLC) memory block includes a plurality of pages implemented by memory cells each capable of storing 4-bit data, and a multiple-level cell memory block includes a plurality of pages implemented by memory cells each capable of storing 5 or more bits of data.

Referring to fig. 1C, a detailed configuration of the second memory system 120 is shown.

The second memory system 120 includes: a memory device, i.e., a second non-volatile memory device 1502, which stores data to be accessed by the host 102; and a second controller 1302 that controls data storage in the second nonvolatile memory device 1502. The second nonvolatile memory device 1502 may be configured as the second storage area 1202 included in the second memory system 120 described above with reference to fig. 1A.

The second controller 1302 may control the second non-volatile memory device 1502 in response to a request from the host 102 transmitted through the first memory system 110. For example, the second controller 1302 may provide data read from the second non-volatile memory device 1502 to the host 102 through the first memory system 110, and may store data provided from the host 102 transmitted through the first memory system 110 into the second non-volatile memory device 1502. To this end, the second controller 1302 may control operations of the second nonvolatile memory device 1502, such as a read operation, a write operation, a program operation, and an erase operation.

In detail, the second controller 1302 included in the second memory system 120 may include: a third INTERFACE (INTERFACE3) 1322, a PROCESSOR (PROCESSOR) 1342, an Error Correction Code (ECC) component (hereinafter abbreviated as ECC) 1382, a Power Management Unit (PMU) 1402, a MEMORY INTERFACE (MEMORY INTERFACE) 1422, and a MEMORY (MEMORY) 1442.

The detailed configuration of the second controller 1302 shown in fig. 1C is almost the same as the detailed configuration of the first controller 1301 shown in fig. 1B. That is, the third interface 1322 in the second controller 1302 may be configured to be the same as the first interface 1321 in the first controller 1301. The processor 1342 in the second controller 1302 may be configured the same as the processor 1341 in the first controller 1301. ECC 1382 in the second controller 1302 may be configured the same as ECC 1381 in the first controller 1301. The PMU 1402 in the second controller 1302 may be configured the same as the PMU 1401 in the first controller 1301. The memory interface 1422 in the second controller 1302 may be configured the same as the memory interface 1421 in the first controller 1301. The memory 1442 in the second controller 1302 may be configured to be the same as the memory 1441 in the first controller 1301.

One difference may be that the first interface 1321 in the first controller 1301 is an interface for commands and data transferred between the host 102 and the first memory system 110, but the third interface 1322 in the second controller 1302 is an interface for commands and data transferred between the first memory system 110 and the second memory system 120. Another difference may be that the second controller 1302 does not include any components corresponding to the second interface 131 in the first controller 1301.

Other operations of the first controller 1301 and the second controller 1302 are the same except for the above differences; therefore, a detailed description of the operation thereof is omitted herein.

FIG. 2 illustrates setup operations and command processing operations of data processing system 100 according to an embodiment of the present disclosure.

Referring to FIG. 2, the operations of data processing system 100 may include a set operation in an initial operation period (INIT) and a command processing operation in a NORMAL operation period (NORMAL).

In detail, when the initial operation period (INIT) STARTs (START) (S1), the first memory system 110 may set the mapping unit of the internal mapping information to a first size unit. That is, the mapping unit of the information indicating the mapping relationship between the physical address of the first storage area 1102 included in the first memory system 110 and the logical address used in the host 102 may be set to the first size unit.

In a state where an initial operation period (INIT) STARTs (START), the first memory system 110 may REQUEST capacity information (CAPA_INFO) of the second storage area 1202 included in the second memory system 120 (REQUEST CAPA_INFO). The second memory system 120 may transmit the capacity information (CAPA_INFO) of the second storage area 1202 included in the second memory system 120 to the first memory system 110 as a response (ACK) to the REQUEST (REQUEST CAPA_INFO) of the first memory system 110 (ACK CAPA_INFO). Thereafter, the second memory system 120 may SET the mapping unit of the internal mapping information (i.e., the mapping unit of the information indicating the mapping relationship between the physical address and the logical address of the second storage area 1202) to the second size unit in response to the mapping SET command MAP_SET_CMD transmitted from the first memory system 110.

The first memory system 110 may check the capacity information (CAPA_INFO) of the second storage area 1202 received from the second memory system 120 in a state of an initial operation period (INIT) START (START), and may set a value of a second size unit, which is different from the value of the first size unit, according to the check result. That is, according to the result of checking the capacity information (CAPA_INFO) of the second storage region 1202 received from the second memory system 120, the first memory system 110 may generate a mapping setting command MAP_SET_CMD for setting the mapping unit of the internal mapping information to be managed in the second memory system 120 to a second size unit, and may transmit the generated mapping setting command MAP_SET_CMD to the second memory system 120.

The first storage area 1102 in the first memory system 110 and the second storage area 1202 in the second memory system 120 as described above with reference to fig. 1A to 1C may be storage spaces including nonvolatile memory cells. The nonvolatile memory cell has a characteristic that a physical space cannot be rewritten. Accordingly, in order to store data requested to be written by the host 102 in the first storage region 1102 and the second storage region 1202 including the nonvolatile memory cells, the first and second memory systems 110 and 120 may perform mapping of a file system used by the host 102 to a storage space including the nonvolatile memory cells through a Flash Translation Layer (FTL). For example, an address of data according to a file system used by the host 102 may be referred to as a logical address or a logical block address, and addresses of storage spaces for storing data in the first and second storage areas 1102 and 1202 including nonvolatile memory cells may be referred to as a physical address or a physical block address. Accordingly, the first and second memory systems 110 and 120 can generate and manage mapping information indicating a mapping relationship between logical addresses corresponding to logical sectors of a file system used in the host 102 and physical addresses corresponding to physical spaces of the first storage area 1102 and the second storage area 1202. According to an embodiment, when the host 102 transmits a logical address to the first memory system 110 or the second memory system 120 together with a write command and data, the first memory system 110 or the second memory system 120 may search the first storage area 1102 or the second storage area 1202 for a storage space for storing data, may map a physical address of the storage space identified in the search to the logical address, and may program data to the identified storage space. According to an embodiment, when the host 102 transmits a logical address to the first memory system 110 or the second memory system 120 together with a read command, the first memory system 110 or the second memory system 120 may search for a physical address mapped to the logical address, may read data stored in the physical address found in the search from the first storage area 1102 or the second storage area 1202, and may output the read data to the host 102.

In the first and second storage areas 1102 and 1202 including the nonvolatile memory cells, the size unit may be 512 bytes or any one of 1K, 2K, and 4K bytes (i.e., the size of a page). The size unit refers to a unit of physical space for writing and reading data. The size of the page may vary depending on the type of memory device. Although the mapping unit of the internal mapping information may be managed to correspond to the size of the page, the mapping unit of the internal mapping information may also be managed to correspond to a unit larger than the size of the page. For example, although a 4K byte unit may be managed as a mapping unit of the internal mapping information, a 512K byte unit or a 1M byte unit may be managed as a mapping unit of the internal mapping information. In summary, the first memory system 110 sets the mapping unit of the internal mapping information to the first size unit and the second memory system 120 sets the mapping unit of the internal mapping information to the second size unit, which means that the mapping units of the internal mapping information managed in the first memory system 110 and the second memory system 120 may be different from each other.
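
To make the effect of the mapping unit concrete, the short sketch below computes how many L2P entries are needed for a given capacity; the 512-Gbyte capacity and the 4-Kbyte/512-Kbyte units are illustrative values echoing the examples in this description, not fixed requirements.

```python
GB = 1024 ** 3

def l2p_entry_count(capacity_bytes, map_unit_bytes):
    """Number of L2P map entries needed to cover a storage area at a given map unit."""
    return capacity_bytes // map_unit_bytes

# A 512-Gbyte area mapped in 4-Kbyte units needs 134,217,728 entries,
# while the same area mapped in 512-Kbyte units needs only 1,048,576 entries.
print(l2p_entry_count(512 * GB, 4 * 1024))
print(l2p_entry_count(512 * GB, 512 * 1024))
```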

In more detail, according to an embodiment, as a result of checking the capacity information (CAPA_INFO) of the second storage region 1202 received from the second memory system 120 in a state of Starting (START) the initial operation period (INIT), when the size of the second storage region 1202 is greater than or equal to the size of the first storage region 1102, the first memory system 110 may generate a mapping setting command MAP_SET_CMD for setting the second size unit to be greater than the first size unit, and may transmit the generated mapping setting command MAP_SET_CMD to the second memory system 120. For example, the first memory system 110 may generate a mapping setting command MAP_SET_CMD for setting the second size unit to a 512-Kbyte unit, which is greater than the 4-Kbyte unit that is the first size unit, and may transmit the generated mapping setting command MAP_SET_CMD to the second memory system 120.

According to an embodiment, as a result of checking the capacity information (CAPA_INFO) of the second storage region 1202 received from the second memory system 120 in a state of Starting (START) the initial operation period (INIT), when the size of the second storage region 1202 is smaller than the size of the first storage region 1102, the first memory system 110 may generate a mapping setting command MAP_SET_CMD for setting the second size unit to be smaller than the first size unit, and may transmit the generated mapping setting command MAP_SET_CMD to the second memory system 120. For example, the first memory system 110 may generate a mapping setting command MAP_SET_CMD for setting the second size unit to a 4-Kbyte unit, which is smaller than the 16-Kbyte unit that is the first size unit, and may transmit the generated mapping setting command MAP_SET_CMD to the second memory system 120.
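
A sketch of this size-unit decision follows. The rule mirrors the two cases above (the larger storage area gets the coarser map unit); the concrete 4-Kbyte and 512-Kbyte values, the dictionary-style MAP_SET_CMD, and the function name are assumptions for illustration only.

```python
def choose_map_units(first_capacity, second_capacity,
                     fine_unit=4 * 1024, coarse_unit=512 * 1024):
    """Decide the first/second size units during INIT and build a MAP_SET_CMD."""
    if second_capacity >= first_capacity:
        # Second storage area is larger or equal: give it the coarser (larger) map unit.
        first_size_unit, second_size_unit = fine_unit, coarse_unit
    else:
        # First storage area is larger: it keeps the coarser map unit instead.
        first_size_unit, second_size_unit = coarse_unit, fine_unit
    map_set_cmd = {"opcode": "MAP_SET_CMD", "map_unit": second_size_unit}
    return first_size_unit, map_set_cmd        # MAP_SET_CMD is sent over the second interface

first_unit, cmd = choose_map_units(512 * 1024**3, 1024 * 1024**3)
print(first_unit, cmd)   # 4096 {'opcode': 'MAP_SET_CMD', 'map_unit': 524288}
```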

If the operation of setting the mapping unit of the internal mapping information in the first memory system 110 to the first size unit and the mapping unit of the internal mapping information in the second memory system 120 to the second size unit is completed as described above, the initial operation period (INIT) may END (END). After the initial operation period (INIT) is Ended (END) in this way, a NORMAL operation period (NORMAL) may be Started (START) (S3).

For reference, the above-described initial operation period (INIT) may be entered (START)/Exited (END) during a process of starting the first and second memory systems 110 and 120. Also, the above-described initial operation period (INIT) may be entered (START)/Exited (END) at an arbitrary point of time according to a request of the host 102.

In a state where a NORMAL operation period (NORMAL) STARTs (START), the host 102 may generate an arbitrary command and transmit the generated command to the first memory system 110. That is, the first memory system 110 may receive an arbitrary input command IN_CMD in a state of a NORMAL operation period (NORMAL) START (START). Any input command IN_CMD may include all commands that may be generated by the host 102 to control the memory systems 110 and 120, such as write commands, read commands, and erase commands.

In this case, the first memory system 110 may analyze the input command IN_CMD transmitted from the host 102, and may select a processing location of the input command IN_CMD according to the analysis result (S4). In other words, the first memory system 110 may analyze the input command IN_CMD transmitted from the host 102 in a state of a NORMAL operation period (NORMAL) START (START), and select whether to process the input command IN_CMD by itself in the first memory system 110 or to process the input command IN_CMD in the second memory system 120 according to the analysis result (S4).

The result of operation S4 may indicate: the first memory system 110 processes the input command IN_CMD received from the host 102 by itself (S7), or the first memory system 110 transfers the input command IN_CMD received from the host 102 to the second memory system 120, and the second memory system 120 processes the input command IN_CMD (S5).
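
The sketch below condenses operations S4, S5, and S7 into a single routing function written over plain callables; the callable names and the trivial stand-ins in the example are hypothetical and only mirror the behavior described above.

```python
def handle_input_command(in_cmd, analyze, process_local, forward_and_process, ack):
    """Route IN_CMD: process it in the first memory system (S7) or forward it (S5)."""
    target = analyze(in_cmd)                       # S4: analyze IN_CMD, pick a location
    if target == "second":
        result = forward_and_process(in_cmd)       # S5: transfer over the second interface
    else:
        result = process_local(in_cmd)             # S7: first memory system executes it
    return ack(result)                             # ACK IN_CMD RESULT back to the host

# Example with trivial stand-in callables:
print(handle_input_command(
    {"op": "read"},
    analyze=lambda cmd: "first",
    process_local=lambda cmd: "READ_DATA",
    forward_and_process=lambda cmd: "READ_DATA (via second)",
    ack=lambda result: result,
))
```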

First, when the first memory system 110 transfers the input command IN_CMD transferred from the host 102 to the second memory system 120 and the second memory system 120 processes the input command IN_CMD, the operations may be performed in the following order (S5).

The first memory system 110 may transmit the input command IN_CMD transmitted from the host 102 to the second memory system 120.

The second memory system 120 may perform a command operation in response to the input command IN_CMD transmitted through the first memory system 110. For example, when the input command IN_CMD is a write command, the second memory system 120 may store write data input together with the input command IN_CMD into the second storage area 1202. As another example, when the input command IN_CMD is a read command, the second memory system 120 may read data stored in the second storage region 1202.

The second memory system 120 may transmit a RESULT (RESULT) of processing the input command IN_CMD to the first memory system 110 (ACK IN_CMD RESULT). For example, when the input command IN_CMD is a write command, the second memory system 120 may transmit a response (ACK) to the first memory system 110 notifying that write data input together with the input command IN_CMD has been normally stored into the second storage region 1202. As another example, when the input command IN_CMD is a read command, the second memory system 120 may transfer read data read from the second storage region 1202 to the first memory system 110.

The first memory system 110 may transmit the RESULT (RESULT) of processing the input command IN_CMD received from the second memory system 120 to the host 102 (ACK IN_CMD RESULT).

Next, when the first memory system 110 processes the input command IN_CMD transferred from the host 102 by itself, the operations may be performed in the following order (S7).

The first memory system 110 may perform a command operation in response to the input command IN_CMD. For example, when the input command IN_CMD is a write command, the first memory system 110 may store write data input together with the input command IN_CMD into the first storage area 1102. As another example, when the input command IN_CMD is a read command, the first memory system 110 may read data stored in the first storage region 1102.

The first memory system 110 may transmit a RESULT (RESULT) of processing the input command IN_CMD to the host 102 (ACK IN_CMD RESULT). For example, when the input command IN_CMD is a write command, the first memory system 110 may transmit a response signal to the host 102 notifying that write data input together with the input command IN_CMD has been normally stored in the first storage region 1102. As another example, when the input command IN_CMD is a read command, the first memory system 110 may transmit read data read from the first storage region 1102 to the host 102.

FIG. 3 illustrates an example of command processing operations of data processing system 100 according to an embodiment of the present disclosure.

Referring to FIG. 3, a write command processing operation of the command processing operations of data processing system 100 is described in detail. That is, a case where the input command IN _ CMD IN the command processing operation described above with reference to fig. 2 is the WRITE command WRITE _ CMD is described IN detail.

In detail, when the input command IN_CMD is the WRITE command WRITE_CMD, the WRITE DATA WRITE_DATA may be transferred from the host 102 to the first memory system 110 together with the WRITE command WRITE_CMD.

After the WRITE DATA WRITE_DATA is transferred from the host 102 to the first memory system 110 together with the WRITE command WRITE_CMD in this manner, operation S4 may be started. That is, after the WRITE DATA WRITE_DATA is transferred together with the WRITE command WRITE_CMD from the host 102 to the first memory system 110, the first memory system 110 may analyze the WRITE command WRITE_CMD, and may start an operation of selecting a processing location of the WRITE command WRITE_CMD according to the analysis result (S4 START).

According to an embodiment, the operation of analyzing the WRITE command WRITE_CMD in the first memory system 110 may include an operation of checking a pattern of the WRITE DATA WRITE_DATA corresponding to the WRITE command WRITE_CMD. For example, the first memory system 110 may compare the size of the WRITE DATA WRITE_DATA with a reference size, and may recognize a pattern of the WRITE DATA WRITE_DATA smaller than the reference size as a random pattern and a pattern of the WRITE DATA WRITE_DATA larger than the reference size as a sequential pattern.
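
A sketch of this pattern check is shown below; the 128-Kbyte reference size is an arbitrary illustrative value, since the description does not fix one, and data exactly equal to the reference size is treated here as random by assumption.

```python
REFERENCE_SIZE = 128 * 1024      # illustrative reference size (not specified above)

def classify_write_pattern(write_data_size, reference_size=REFERENCE_SIZE):
    """WRITE_DATA larger than the reference size is sequential; smaller is random."""
    return "sequential" if write_data_size > reference_size else "random"

print(classify_write_pattern(4 * 1024))      # random
print(classify_write_pattern(1024 * 1024))   # sequential
```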

The operations described with reference to fig. 3 may be based on the following assumption: the second storage area 1202 included in the second memory system 120 is larger than the first storage area 1102 included in the first memory system 110. In other words, the operations described with reference to fig. 3 may be based on the following assumption: the second size unit mapping the physical addresses of the second storage area 1202 to logical addresses is larger than the first size unit mapping the physical addresses of the first storage area 1102 to logical addresses. Accordingly, in fig. 3, in order to store the WRITE DATA WRITE_DATA identified as the sequential pattern having the size larger than the reference size into the second storage area 1202, the first memory system 110 may transfer the WRITE command WRITE_CMD corresponding to the WRITE DATA WRITE_DATA identified as the sequential pattern to the second memory system 120 to allow the second memory system 120 to process the WRITE command WRITE_CMD. Also, in fig. 3, in order to store the WRITE DATA WRITE_DATA identified as the random pattern having the size smaller than the reference size into the first storage area 1102, the first memory system 110 may process the WRITE command WRITE_CMD corresponding to the WRITE DATA WRITE_DATA identified as the random pattern by itself.

In more detail, as a result of checking the pattern of the WRITE DATA WRITE_DATA, when the WRITE DATA WRITE_DATA is the sequential pattern, the first memory system 110 may perform operation S5, i.e., an operation of transferring the WRITE command WRITE_CMD to the second memory system 120 to allow the second memory system 120 to process the WRITE command WRITE_CMD.

In detail, the first memory system 110 may transmit the WRITE command WRITE_CMD and the WRITE DATA WRITE_DATA transferred from the host 102 to the second memory system 120 as they are.

The second memory system 120 may store the WRITE DATA WRITE_DATA into the second storage area 1202 in response to the WRITE command WRITE_CMD transferred through the first memory system 110.

Subsequently, the second memory system 120 may transmit a response (ACK) notifying whether the WRITE DATA WRITE_DATA has been normally stored into the second storage region 1202 to the first memory system 110 (ACK WRITE RESULT).

The first memory system 110 may transmit a RESULT (RESULT) of processing the WRITE command WRITE_CMD received from the second memory system 120 to the host 102 (ACK WRITE RESULT).

Further, as a result of checking the pattern of the WRITE DATA WRITE_DATA, when the WRITE DATA WRITE_DATA is the random pattern, the first memory system 110 may perform operation S7, i.e., an operation of processing the WRITE command WRITE_CMD by itself.

In detail, the first memory system 110 may store the WRITE DATA WRITE_DATA into the first storage region 1102 in response to the WRITE command WRITE_CMD transferred from the host 102.

Subsequently, the first memory system 110 may transmit a response (ACK) to the host 102 (ACK WRITE RESULT) notifying whether the WRITE DATA WRITE_DATA has been normally stored into the first storage region 1102.

According to the result of checking the pattern of the WRITE DATA WRITE_DATA corresponding to the WRITE command WRITE_CMD in the first memory system 110 as described above, the first memory system 110 may process the WRITE command WRITE_CMD by itself or transfer the WRITE command WRITE_CMD to the second memory system 120 to allow the second memory system 120 to process the WRITE command WRITE_CMD, and then operation S4 may end (S4 END).

In fig. 3, it is assumed that the second storage area 1202 included in the second memory system 120 is larger than the first storage area 1102 included in the first memory system 110. When the second storage area 1202 in the second memory system 120 is smaller than the first storage area 1102 in the first memory system 110 (different from what is shown in fig. 3), i.e., when the second size unit for mapping the physical addresses of the second storage area 1202 to logical addresses is smaller than the first size unit for mapping the physical addresses of the first storage area 1102 to logical addresses, the operations may be performed in reverse to those shown in fig. 3. That is, in order to store the WRITE DATA WRITE_DATA identified as a sequential pattern having a size larger than the reference size into the first storage region 1102, the first memory system 110 processes the WRITE command WRITE_CMD corresponding to the WRITE DATA WRITE_DATA identified as a sequential pattern by itself. Also, in order to store the WRITE DATA WRITE_DATA identified as a random pattern having a size smaller than the reference size into the second storage area 1202, the first memory system 110 transfers the WRITE command WRITE_CMD corresponding to the WRITE DATA WRITE_DATA identified as a random pattern to the second memory system 120 to allow the second memory system 120 to process the WRITE command WRITE_CMD.

FIG. 4 illustrates another example of a command processing operation of data processing system 100 according to an embodiment of the present disclosure.

Referring to FIG. 4, a read command processing operation of the command processing operations of data processing system 100 is described in detail. That is, a case where the input command IN _ CMD is the READ command READ _ CMD IN the command processing operation described above with reference to fig. 2 is described IN detail.

In detail, when the input command IN_CMD is a READ command READ_CMD, the logical address READ_LBA may be transferred from the host 102 to the first memory system 110 together with the READ command READ_CMD.

After the logical address READ_LBA is transferred from the host 102 to the first memory system 110 together with the READ command READ_CMD in this manner, operation S4 may be started. That is, after the logical address READ_LBA is transferred from the host 102 to the first memory system 110 together with the READ command READ_CMD, the first memory system 110 may analyze the READ command READ_CMD and may start an operation of selecting a processing location of the READ command READ_CMD according to the analysis result (S4 START).

According to an embodiment, the operation of analyzing the READ command READ_CMD in the first memory system 110 may include an operation of checking the value of the logical address READ_LBA corresponding to the READ command READ_CMD. For example, the first memory system 110 may check whether the value of the logical address READ_LBA corresponding to the READ command READ_CMD is a logical address managed in the internal mapping information included in the first memory system 110 or a logical address managed in the internal mapping information included in the second memory system 120.
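
A sketch of this READ_LBA check follows; the Python range objects stand in for the first and second logical address ranges set during the initial operation period, and the returned strings simply label operations S7 and S5.

```python
def select_read_location(read_lba, lba1_range, lba2_range):
    """Route READ_CMD by which memory system manages READ_LBA in its mapping info."""
    if read_lba in lba1_range:
        return "first_memory_system"    # S7: processed by the first memory system itself
    if read_lba in lba2_range:
        return "second_memory_system"   # S5: forwarded to the second memory system
    raise ValueError("READ_LBA is outside the summed logical address range ALL_LBA")

lba1, lba2 = range(0, 1000), range(1000, 3000)
print(select_read_location(42, lba1, lba2))     # first_memory_system
print(select_read_location(2042, lba1, lba2))   # second_memory_system
```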

In more detail, as a result of checking the value of the logical address READ_LBA corresponding to the READ command READ_CMD, when the value of the logical address READ_LBA is a logical address managed in the internal mapping information included in the second memory system 120, the first memory system 110 may perform operation S5, i.e., an operation of transferring the READ command READ_CMD to the second memory system 120 to allow the second memory system 120 to process the READ command READ_CMD.

In detail, the first memory system 110 may transmit the READ command READ_CMD and the logical address READ_LBA transmitted from the host 102 to the second memory system 120 as they are.

The second memory system 120 may read the READ DATA READ_DATA from the second storage region 1202 in response to the READ command READ_CMD transmitted through the first memory system 110. In other words, the second memory system 120 may search for a physical address (not shown) mapped to the logical address READ_LBA corresponding to the READ command READ_CMD, and may read the READ DATA READ_DATA from the second storage area 1202 by referring to the physical address found in the search.

Subsequently, the second memory system 120 may transmit the READ DATA READ_DATA read from the second storage area 1202 to the first memory system 110 (ACK READ_DATA).

The first memory system 110 may transmit the READ DATA READ_DATA received from the second memory system 120 to the host 102 (ACK READ_DATA).

Further, as a result of checking the value of the logical address READ_LBA corresponding to the READ command READ_CMD, when the value of the logical address READ_LBA is a logical address managed in the internal mapping information included in the first memory system 110, the first memory system 110 may perform operation S7, that is, an operation of processing the READ command READ_CMD by itself in the first memory system 110.

In detail, the first memory system 110 may read the READ DATA READ_DATA from the first storage area 1102 in response to the READ command READ_CMD transmitted from the host 102. In other words, the first memory system 110 may search for a physical address (not shown) mapped to the logical address READ_LBA corresponding to the READ command READ_CMD, and may read the READ DATA READ_DATA from the first storage area 1102 by referring to the physical address found in the search.

The first memory system 110 may transmit the READ DATA READ_DATA read from the first storage area 1102 to the host 102 (ACK READ_DATA).

According to the result of checking the value of the logical address READ_LBA corresponding to the READ command READ_CMD in the first memory system 110 as described above, the first memory system 110 may process the READ command READ_CMD by itself or transfer the READ command READ_CMD to the second memory system 120 to allow the second memory system 120 to process the READ command READ_CMD, and then operation S4 may end (S4 END).

FIG. 5 illustrates operations in data processing system 100 to manage at least a plurality of memory systems based on logical addresses in accordance with an embodiment of the present disclosure.

Referring to fig. 5, it is shown how the components included in the data processing system 100, i.e., the host 102, the first memory system 110, and the second memory system 120, manage logical addresses.

In detail, the first memory system 110 may generate and manage an internal mapping table LBA1/PBA1 in which the first physical address PBA1 and the first logical address LBA1 corresponding to the first storage region 1102 are mapped to each other.

The second memory system 120 may generate and manage an internal mapping table LBA2/PBA2 in which the second physical address PBA2 and the second logical address LBA2 corresponding to the second storage region 1202 are mapped to each other.

The host 102 may use the summed logical address ALL_LBA obtained by summing the first logical address LBA1 and the second logical address LBA2.

The size of the storage area in each of the first memory system 110 and the second memory system 120, in which data can be stored, may be determined in advance during the manufacture of each of the first memory system 110 and the second memory system 120. For example, it may be determined in advance during the manufacture of the first and second memory systems 110 and 120 that the size of the first storage area 1102 included in the first memory system 110 is 512 Gbytes and the size of the second storage area 1202 included in the second memory system 120 is 1 Tbyte. In order for the host 102 to normally read data from or write data to the first and second storage regions 1102 and 1202 included in the first and second memory systems 110 and 120, respectively, the size of each of the first and second storage regions 1102 and 1202 should be shared with the host 102. That is, the host 102 needs to know the size of each of the first storage area 1102 and the second storage area 1202 included in the first memory system 110 and the second memory system 120, respectively. When the host 102 knows the size of each of the first and second storage regions 1102 and 1202 included in the first and second memory systems 110 and 120, the host 102 knows the range of the summed logical address ALL_LBA obtained by summing the range of the first logical address LBA1 corresponding to the first storage region 1102 and the range of the second logical address LBA2 corresponding to the second storage region 1202.

As described above with reference to fig. 2, in the initial operation period (INIT), the first memory system 110 may set not only the mapping unit of the internal mapping information of the first memory system 110 to the first size unit but also the mapping unit of the internal mapping information of the second memory system 120 to the second size unit. By so doing, the first memory system 110 can set not only the range of the first logical address LBA1 corresponding to the first storage region 1102 included in the first memory system 110 but also the range of the second logical address LBA2 corresponding to the second storage region 1202 included in the second memory system 120. In other words, in the initial operation period (INIT), the first memory system 110 may set the range of the first logical address LBA1 corresponding to the first storage region 1102 and the range of the second logical address LBA2 corresponding to the second storage region 1202, the ranges of the first and second logical addresses being different from each other. When the range of the second logical address LBA2 and the range of the first logical address LBA1 are different from each other, the range of the second logical address LBA2 and the range of the first logical address LBA1 do not overlap each other and are continuous. After setting the range of the second logical address LBA2 corresponding to the second storage region 1202 in the initial operation period (INIT), the first memory system 110 may share the range of the second logical address LBA2 with the second memory system 120. Further, after setting the range of the first logical address LBA1 corresponding to the first storage region 1102 and the range of the second logical address LBA2 corresponding to the second storage region 1202, which are different from each other, in the initial operation period (INIT), the first memory system 110 may share the range of the summed logical address ALL_LBA, which is obtained by summing the range of the first logical address LBA1 and the range of the second logical address LBA2, with the host 102.
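
The sketch below shows one way the non-overlapping, continuous ranges described above could be laid out from the two capacities; the 4-Kbyte logical sector granularity is an assumption made only so the arithmetic is concrete.

```python
def build_logical_ranges(first_capacity, second_capacity, sector_size=4 * 1024):
    """Derive LBA1, LBA2, and ALL_LBA ranges from the two storage area capacities."""
    lba1_count = first_capacity // sector_size
    lba2_count = second_capacity // sector_size
    lba1_range = range(0, lba1_count)                        # LBA1: first storage area
    lba2_range = range(lba1_count, lba1_count + lba2_count)  # LBA2: continues LBA1, no overlap
    all_lba_range = range(0, lba1_count + lba2_count)        # ALL_LBA shared with the host
    return lba1_range, lba2_range, all_lba_range

GB = 1024 ** 3
lba1, lba2, all_lba = build_logical_ranges(512 * GB, 1024 * GB)
print(len(lba1), len(lba2), len(all_lba))   # 134217728 268435456 402653184
```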

For reference, as shown in the drawing, after a Flash Translation Layer (FTL) 401, which may be logically included in the first controller 1301 included in the first memory system 110, sets the range of the first logical address LBA1 and the range of the second logical address LBA2, the first memory system 110 may generate and manage the internal mapping table LBA1/PBA1 in which the first physical address PBA1 and the first logical address LBA1 corresponding to the first storage region 1102 are mapped to each other. Likewise, as shown in the drawing, in the second memory system 120, a Flash Translation Layer (FTL) 402, which may be logically included in the second controller 1302 included in the second memory system 120, may generate and manage the internal mapping table LBA2/PBA2 in which the second logical address LBA2, whose range is set by the first memory system 110, and the second physical address PBA2 corresponding to the second storage region 1202 are mapped to each other.

Fig. 6A and 6B illustrate examples of logical address based command processing operations in data processing system 100, according to embodiments of the present disclosure.

Referring to fig. 6A and 6B, a write command processing operation of the command processing operation based on the logical address in the data processing system 100 is described in detail. That is, in addition to the operation of processing the WRITE command WRITE _ CMD described above with reference to fig. 2 and 3, the operation of processing the WRITE command WRITE _ CMD based on the logical address is described in detail.

Referring to fig. 6A, the second storage area 1202 included in the second memory system 120 is larger than the first storage area 1102 included in the first memory system 110. In other words, the operations to be described with reference to fig. 6A are described in the following context: the second size unit, which is the mapping unit of the internal mapping information indicating the mapping relationship between the physical address and the logical address of the second storage area 1202, is larger than the first size unit, which is the mapping unit of the internal mapping information indicating the mapping relationship between the physical address and the logical address of the first storage area 1102 (yes in S601). Accordingly, in fig. 6A, in order to store the WRITE DATA WRITE_DATA identified as the sequential pattern having the size larger than the reference size into the second storage region 1202, the first memory system 110 may transfer the WRITE command WRITE_CMD corresponding to the WRITE DATA WRITE_DATA identified as the sequential pattern to the second memory system 120 to allow the second memory system 120 to process the WRITE command WRITE_CMD. Also, in fig. 6A, in order to store the WRITE DATA WRITE_DATA identified as the random pattern having the size smaller than the reference size into the first storage area 1102, the first memory system 110 may process the WRITE command WRITE_CMD corresponding to the WRITE DATA WRITE_DATA identified as the random pattern by itself.

In detail, in a state where the second size unit is set to be larger than the first size unit (yes in S601), the first memory system 110 may check what pattern the WRITE DATA WRITE_DATA input from the host 102 together with the WRITE command WRITE_CMD has (S602).

As a result of the operation of S602, when the WRITE DATA WRITE_DATA input from the host 102 together with the WRITE command WRITE_CMD is in the sequential mode ("sequential" in S602), the first memory system 110 may check the value of the first input logical address input together with the WRITE command WRITE_CMD (S603).

As a result of the operation of S603, when the value of the first input logical address input together with the WRITE command WRITE_CMD is included in the range of the second logical address LBA2 managed in the second memory system 120 ("included in LBA2" in S603), the first memory system 110 may transfer the WRITE command WRITE_CMD, the first input logical address, and the WRITE DATA WRITE_DATA input from the host 102 to the second memory system 120 so that the WRITE DATA WRITE_DATA may be stored into the second storage region 1202 included in the second memory system 120 (S605). In response to the WRITE command WRITE_CMD, the second memory system 120 may map a specific physical address indicating a specific physical area capable of storing data in the second storage area 1202 to the first input logical address, and then may store the WRITE DATA WRITE_DATA transferred from the first memory system 110 into the specific physical area.

As a result of the operation of S603, when the value of the first input logical address input together with the WRITE command WRITE_CMD is included in the range of the first logical address LBA1 managed in the first memory system 110 ("included in LBA1" in S603), the first memory system 110 may manage a first intermediate logical address included in the range of the second logical address LBA2 managed in the second memory system 120 as intermediate mapping information by mapping the first intermediate logical address to the first input logical address input from the host 102 (S606). The first memory system 110 may transfer the first intermediate logical address, the value of which is determined through operation S606, to the second memory system 120 together with the WRITE command WRITE_CMD and the WRITE DATA WRITE_DATA, so that the WRITE DATA WRITE_DATA may be stored into the second storage region 1202 included in the second memory system 120 (S608). In response to the WRITE command WRITE_CMD, the second memory system 120 may map a specific physical address indicating a specific physical area capable of storing data in the second storage area 1202 to the first intermediate logical address, and then may store the WRITE DATA WRITE_DATA transferred from the first memory system 110 into the specific physical area.

As a result of the operation of S602, when the WRITE DATA WRITE_DATA input from the host 102 together with the WRITE command WRITE_CMD is in a random mode ("random" in S602), the first memory system 110 may check the value of the first input logical address input together with the WRITE command WRITE_CMD (S604).

As a result of the operation of S604, when the value of the first input logical address input together with the WRITE command WRITE_CMD is included in the range of the second logical address LBA2 managed in the second memory system 120 ("included in LBA2" in S604), the first memory system 110 may manage a second intermediate logical address included in the range of the first logical address LBA1 managed in the first memory system 110 as intermediate mapping information by mapping the second intermediate logical address to the first input logical address input from the host 102 (S607). The first memory system 110 may store the WRITE DATA WRITE_DATA into the first storage area 1102 in response to the WRITE command WRITE_CMD and the second intermediate logical address whose value is determined through operation S607 (S609). In response to the WRITE command WRITE_CMD, the first memory system 110 may map a specific physical address indicating a specific physical area capable of storing data in the first storage area 1102 to the second intermediate logical address, and may then store the WRITE DATA WRITE_DATA transferred from the host 102 into the specific physical area.

As a result of the operation of S604, when the value of the first input logical address input together with the WRITE command WRITE_CMD is included in the range of the first logical address LBA1 managed in the first memory system 110 ("included in LBA1" in S604), the first memory system 110 may store the WRITE DATA WRITE_DATA into the first storage region 1102 in response to the WRITE command WRITE_CMD and the first input logical address input from the host 102 (S610). In response to the WRITE command WRITE_CMD, the first memory system 110 may map a specific physical address indicating a specific physical area capable of storing data in the first storage area 1102 to the first input logical address, and may then store the WRITE DATA WRITE_DATA transferred from the host 102 into the specific physical area.

Briefly, even if the first input logical address input together with the WRITE command WRITE_CMD is within the range of the second logical address LBA2 managed in the second memory system 120, when the WRITE DATA WRITE_DATA is stored in the first storage region 1102 according to the pattern of the WRITE DATA WRITE_DATA and the comparison of the first size unit and the second size unit, the first memory system 110 may generate and manage intermediate mapping information to map the first input logical address within the range of the second logical address LBA2 to the second intermediate logical address within the range of the first logical address LBA1 managed in the first memory system 110.

Also, even if the first input logical address input together with the WRITE command WRITE_CMD is within the range of the first logical address LBA1 managed in the first memory system 110, when the WRITE DATA WRITE_DATA is stored in the second storage area 1202 in the second memory system 120 according to the pattern of the WRITE DATA WRITE_DATA and the comparison of the first size unit and the second size unit, the first memory system 110 may generate and manage intermediate mapping information to map the first input logical address within the range of the first logical address LBA1 to the first intermediate logical address within the range of the second logical address LBA2 managed in the second memory system 120.
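
The sketch below condenses the decision flow of fig. 6A and 6B (operations S601 through S619): the storage location is chosen from the data pattern and the size-unit comparison, and an intermediate logical address is recorded when the first input logical address belongs to the other system's range. The toy FirstSystemState class, its trivial free-LBA allocator, and the dictionary-style intermediate map are hypothetical stand-ins, not the actual FTL.

```python
class FirstSystemState:
    """Toy state for the first memory system: an intermediate mapping table and a
    trivial free-LBA allocator (illustrative assumptions only)."""
    def __init__(self):
        self.intermediate_map = {}     # first input LBA -> intermediate LBA
        self._next = {}                # per-range cursor for the toy allocator
    def next_free_lba(self, lba_range):
        lba = self._next.get(lba_range, lba_range.start)
        self._next[lba_range] = lba + 1
        return lba

def route_write(state, in_lba, pattern, lba1_range, lba2_range, second_unit_larger):
    """Return (location, lba_to_use) and record intermediate mapping info if needed."""
    # fig. 6A (second unit larger): sequential -> second, random -> first.
    # fig. 6B (second unit smaller): sequential -> first, random -> second.
    store_in_second = (pattern == "sequential") == second_unit_larger
    if store_in_second:
        if in_lba in lba2_range:
            return "second", in_lba                      # S605 / S615: use the input LBA as-is
        mid_lba = state.next_free_lba(lba2_range)        # S606 / S616: pick an LBA2 value
        state.intermediate_map[in_lba] = mid_lba
        return "second", mid_lba                         # S608 / S618: forward with the intermediate LBA
    if in_lba in lba1_range:
        return "first", in_lba                           # S610: use the input LBA as-is
    mid_lba = state.next_free_lba(lba1_range)            # S607 / S617: pick an LBA1 value
    state.intermediate_map[in_lba] = mid_lba
    return "first", mid_lba                              # S609 / S619: store locally with the intermediate LBA

lba1, lba2 = range(0, 1000), range(1000, 3000)
state = FirstSystemState()
print(route_write(state, 5, "sequential", lba1, lba2, second_unit_larger=True))
# ('second', 1000), and the intermediate map now records {5: 1000}
```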

Referring to fig. 6B, the second storage area 1202 included in the second memory system 120 is smaller than the first storage area 1102 included in the first memory system 110. In other words, the operations to be described with reference to fig. 6B are described in the following context: the second size unit, which is the mapping unit of the internal mapping information indicating the mapping relationship between the physical address and the logical address of the second storage area 1202, is smaller than the first size unit, which is the mapping unit of the internal mapping information indicating the mapping relationship between the physical address and the logical address of the first storage area 1102 (no in S601). Accordingly, in fig. 6B, in order to store the WRITE DATA WRITE_DATA identified as the sequential pattern having the size larger than the reference size into the first storage area 1102, the first memory system 110 may process the WRITE command WRITE_CMD corresponding to the WRITE DATA WRITE_DATA identified as the sequential pattern by itself. Also, in fig. 6B, in order to store the WRITE DATA WRITE_DATA identified as the random pattern having the size smaller than the reference size into the second storage area 1202, the first memory system 110 may transfer a WRITE command WRITE_CMD corresponding to the WRITE DATA WRITE_DATA identified as the random pattern to the second memory system 120 to allow the second memory system 120 to process the WRITE command WRITE_CMD.

In detail, in a state where the second size unit is set to be smaller than the first size unit (no in S601), the first memory system 110 may check what pattern the WRITE DATA WRITE_DATA input from the host 102 together with the WRITE command WRITE_CMD has (S612).

As a result of the operation of S612, when the WRITE DATA WRITE_DATA input from the host 102 together with the WRITE command WRITE_CMD is a random pattern ("random" in S612), the first memory system 110 may check the value of the first input logical address input together with the WRITE command WRITE_CMD (S613).

As a result of the operation of S613, when the value of the first input logical address input together with the WRITE command WRITE_CMD is included in the range of the second logical address LBA2 managed in the second memory system 120 ("included in LBA2" in S613), the first memory system 110 may transfer the WRITE command WRITE_CMD, the first input logical address, and the WRITE DATA WRITE_DATA input from the host 102 to the second memory system 120 so that the WRITE DATA WRITE_DATA may be stored into the second storage area 1202 included in the second memory system 120 (S615). In response to the WRITE command WRITE_CMD, the second memory system 120 may map a specific physical address indicating a specific physical area capable of storing data in the second storage area 1202 to the first input logical address, and then may store the WRITE DATA WRITE_DATA transferred from the first memory system 110 into the specific physical area.

As a result of the operation of S613, when the value of the first input logical address input together with the WRITE command WRITE_CMD is included in the range of the first logical address LBA1 managed in the first memory system 110 ("included in LBA1" in S613), the first memory system 110 may manage a fourth intermediate logical address included in the range of the second logical address LBA2 managed in the second memory system 120 as intermediate mapping information by mapping the fourth intermediate logical address to the first input logical address input from the host 102 (S616). The first memory system 110 may transfer the fourth intermediate logical address, the value of which is determined through operation S616, to the second memory system 120 together with the WRITE command WRITE_CMD and the WRITE DATA WRITE_DATA, so that the WRITE DATA WRITE_DATA may be stored into the second storage region 1202 included in the second memory system 120 (S618). In response to the WRITE command WRITE_CMD, the second memory system 120 may map a specific physical address indicating a specific physical area capable of storing data in the second storage area 1202 to the fourth intermediate logical address, and then may store the WRITE DATA WRITE_DATA transferred from the first memory system 110 into the specific physical area.

As a result of the operation of S612, when the WRITE DATA WRITE_DATA input from the host 102 together with the WRITE command WRITE_CMD is in the sequential mode ("sequential" in S612), the first memory system 110 may check the value of the first input logical address input together with the WRITE command WRITE_CMD (S614).

As a result of the operation of S614, when the value of the first input logical address input together with the WRITE command WRITE_CMD is included in the range of the second logical address LBA2 managed in the second memory system 120 ("included in LBA2" in S614), the first memory system 110 may manage a third intermediate logical address included in the range of the first logical address LBA1 managed in the first memory system 110 as intermediate mapping information by mapping the third intermediate logical address to the first input logical address input from the host 102 (S617). The first memory system 110 may store the WRITE DATA WRITE_DATA into the first storage area 1102 in response to the WRITE command WRITE_CMD and the third intermediate logical address whose value is determined through operation S617 (S619). In response to the WRITE command WRITE_CMD, the first memory system 110 may map a specific physical address indicating a specific physical area capable of storing data in the first storage area 1102 to the third intermediate logical address, and may then store the WRITE DATA WRITE_DATA transferred from the host 102 into the specific physical area.

As a result of the operation of S614, when the value of the first input logical address input together with the WRITE command WRITE _ CMD is included in the range of the first logical address LBA1 managed in the first memory system 110 ("included in LBA 1" in S614), the first memory system 110 may store the WRITE DATA WRITE _ DATA into the first storage region 1102 in response to the WRITE command WRITE _ CMD input from the host 102 and the first input logical address (S620). In response to the WRITE command WRITE _ CMD, the first memory system 110 may map a specific physical address indicating a specific physical area capable of storing DATA in the first storage area 1102 to the first input logical address, and may then store the WRITE DATA WRITE _ DATA transferred from the host 102 into the specific physical area.

Briefly, even if the first input logical address input together with the WRITE command WRITE _ CMD is within the range of the second logical address LBA2 managed in the second memory system 120, when the WRITE DATA WRITE _ DATA is stored in the first storage region 1102 according to the pattern of the WRITE DATA WRITE _ DATA and the comparison of the first size unit and the second size unit, the first memory system 110 may generate and manage intermediate mapping information to map the first input logical address within the range of the second logical address LBA2 to the third intermediate logical address within the range of the first logical address LBA1 managed in the first memory system 110.

Also, even if the first input logical address input together with the WRITE command WRITE _ CMD is within the range of the first logical address LBA1 managed in the first memory system 110, when the WRITE DATA WRITE _ DATA is stored in the second storage area 1202 in the second memory system 120 according to the pattern of the WRITE DATA WRITE _ DATA and the comparison of the first size unit and the second size unit, the first memory system 110 may generate and manage intermediate mapping information to map the first input logical address within the range of the first logical address LBA1 to the fourth intermediate logical address within the range of the second logical address LBA2 managed in the second memory system 120.
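By way of illustration only, the following C sketch condenses the FIG. 6B routing decision described above. The pattern enum, the assumption that the first logical address range LBA1 occupies the lower contiguous range [0, lba1_end) with LBA2 above it, and the intermediate-address allocation callbacks are hypothetical names introduced for this example; the disclosure does not prescribe a particular implementation.

#include <stdbool.h>
#include <stdint.h>

typedef enum { PATTERN_RANDOM, PATTERN_SEQUENTIAL } data_pattern_t;

/* Returns true when the write is forwarded to the second memory system.
 * out_lba receives either the input logical address or an intermediate logical
 * address allocated from the other system's range (allocation is not shown).
 * Assumed layout: LBA1 covers [0, lba1_end), LBA2 covers the addresses above it. */
bool route_write_fig6b(uint64_t lba1_end, data_pattern_t pattern, uint64_t in_lba,
                       uint64_t *out_lba,
                       uint64_t (*alloc_intermediate_in_lba1)(void),
                       uint64_t (*alloc_intermediate_in_lba2)(void))
{
    bool in_lba1 = in_lba < lba1_end;

    if (pattern == PATTERN_SEQUENTIAL) {
        /* Sequential data goes to the first storage area 1102. */
        *out_lba = in_lba1 ? in_lba                        /* S620: use the input address  */
                           : alloc_intermediate_in_lba1(); /* S617: third intermediate LBA */
        return false;
    }
    /* Random data goes to the second storage area 1202. */
    *out_lba = in_lba1 ? alloc_intermediate_in_lba2()      /* S616: fourth intermediate LBA */
                       : in_lba;                           /* S615: use the input address   */
    return true;
}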

FIG. 7 illustrates another example of a logical address based command processing operation in data processing system 100, according to an embodiment of the present disclosure.

Referring to FIG. 7, a read command processing operation of the logical address based command processing operation in data processing system 100 is described in detail. That is, in addition to the operation of processing the READ command READ _ CMD described above with reference to fig. 2 and 4, the operation of processing the READ command READ _ CMD based on the logical address is described in detail.

In detail, the first memory system 110 may check whether a second input logical address input from the host 102 together with the READ command READ _ CMD is detected in the intermediate mapping information (S700).

As described above with reference to fig. 6A and 6B, even if the first input logical address input together with the WRITE command WRITE _ CMD is included in the range of the second logical address LBA2 managed in the second memory system 120, when the WRITE DATA WRITE _ DATA is stored in the first storage region 1102 according to the pattern of the WRITE DATA WRITE _ DATA and the comparison result of the first size unit and the second size unit, the first memory system 110 may generate and manage intermediate mapping information to map the first input logical address included in the range of the second logical address LBA2 to an intermediate logical address included in the range of the first logical address LBA1 managed in the first memory system 110.

Also, even if the first input logical address input together with the WRITE command WRITE _ CMD is included in the range of the first logical address LBA1 managed in the first memory system 110, when the WRITE DATA WRITE _ DATA is stored in the second storage region 1202 included in the second memory system 120 according to the pattern of the WRITE DATA WRITE _ DATA and the comparison result of the first size unit and the second size unit, the first memory system 110 may generate and manage intermediate mapping information to map the first input logical address included in the range of the first logical address LBA1 to an intermediate logical address included in the range of the second logical address LBA2 managed in the second memory system 120.

When a logical address mapped to a second input logical address input from the host 102 together with the READ command READ _ CMD is not detected in the intermediate mapping information (no in S700), a READ operation may be performed based on the second input logical address.

In contrast, when the fifth intermediate logical address mapped to the second input logical address input from the host 102 together with the READ command READ _ CMD is detected in the intermediate mapping information (the "fifth intermediate logical address detected" in S700), the READ operation may be performed based on the fifth intermediate logical address detected in the intermediate mapping information.

As a result of the operation of S700, when a logical address mapped to a second input logical address input from the host 102 together with the READ command READ _ CMD is not detected in the intermediate mapping information (no in S700), the first memory system 110 may check the value of the second input logical address (S701).

As a result of the operation of S701, when the value of the second input logical address input together with the READ command READ _ CMD is included in the range of the second logical address LBA2 managed in the second memory system 120 ("included in LBA 2" in S701), the first memory system 110 may transmit the READ command READ _ CMD and the second input logical address to the second memory system 120, thereby reading the READ DATA READ _ DATA in the second storage region 1202 (S702). The second memory system 120 may search for a specific physical address mapped to the second input logical address in the internal mapping information in response to the READ command READ _ CMD, and may READ the READ DATA READ _ DATA in a specific physical area indicated by the specific physical address in the second storage area 1202.

As a result of the operation of S701, when the value of the second input logical address input together with the READ command READ _ CMD is included in the range of the first logical address LBA1 managed in the first memory system 110 ("included in LBA 1" in S701), the first memory system 110 may READ DATA READ _ DATA in the first storage region 1102 in response to the READ command READ _ CMD and the second input logical address (S703). The first memory system 110 may search for a specific physical address mapped to the second input logical address in the internal mapping information in response to the READ command READ _ CMD, and may READ DATA READ _ DATA in a specific physical area indicated by the specific physical address in the first storage area 1102.

As a result of the operation of S700, when a fifth intermediate logical address mapped to the second input logical address input from the host 102 together with the READ command READ _ CMD is detected in the intermediate mapping information ("fifth intermediate logical address detected" in S700), the first memory system 110 may check a value of the fifth intermediate logical address (S704).

As a result of the operation of S704, when the value of the fifth intermediate logical address detected in the intermediate mapping information is included in the range of the second logical address LBA2 managed in the second memory system 120 ("included in the LBA 2" in S704), the first memory system 110 may transmit the READ command READ _ CMD and the fifth intermediate logical address to the second memory system 120, thereby reading the READ DATA READ _ DATA in the second storage area 1202 (S705). The second memory system 120 may search for a specific physical address mapped to the fifth intermediate logical address in the internal mapping information in response to the READ command READ _ CMD, and may READ the READ DATA READ _ DATA in a specific physical area indicated by the specific physical address in the second storage area 1202.

As a result of the operation of S704, when the value of the fifth intermediate logical address detected in the intermediate mapping information is included in the range of the first logical address LBA1 managed in the first memory system 110 ("included in LBA 1" in S704), the first memory system 110 may READ DATA READ _ DATA in the first storage area 1102 in response to the READ command READ _ CMD and the fifth intermediate logical address (S706). The first memory system 110 may search for a specific physical address mapped to the fifth intermediate logical address in the internal mapping information in response to the READ command READ _ CMD, and may READ DATA READ _ DATA in a specific physical area indicated by the specific physical address in the first storage area 1102.
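The read flow of FIG. 7 can be summarized in a similar hedged sketch. The lookup callback standing in for the intermediate mapping information and the contiguous LBA layout are assumptions of this illustration, not part of the disclosure.

#include <stdbool.h>
#include <stdint.h>

/* Returns true when the read must be forwarded to the second memory system,
 * and writes the logical address that the chosen system should resolve. */
bool route_read_fig7(uint64_t lba1_end, uint64_t in_lba, uint64_t *out_lba,
                     bool (*lookup_intermediate)(uint64_t in, uint64_t *mapped))
{
    uint64_t mapped;

    if (lookup_intermediate(in_lba, &mapped)) {   /* S700: hit in the intermediate mapping info */
        *out_lba = mapped;                        /* S704: route by the intermediate address    */
        return mapped >= lba1_end;                /* S705: second system / S706: first system   */
    }
    *out_lba = in_lba;                            /* S701: route by the input address itself    */
    return in_lba >= lba1_end;                    /* S702: second system / S703: first system   */
}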

As is apparent from the above description, according to the first embodiment of the present disclosure, when the first and second memory systems 110 and 120, which are physically separated from each other, are coupled to the host 102, the host 102 may use them logically as if they were one memory system, because the summed logical address ALL_LBA, obtained by summing the first logical address LBA1 corresponding to the first memory system 110 and the second logical address LBA2 corresponding to the second memory system 120, is shared with the host 102.

In addition, when the physically separated first and second memory systems 110 and 120 are coupled to the host 102, the role of each of the first and second memory systems 110 and 120 may be determined according to its coupling relationship with the host 102, and according to the determined roles, the size unit and the pattern of the data stored in each of the first and second memory systems 110 and 120 may be determined differently. For example, random-pattern data having a relatively small size may be stored in the first memory system 110 coupled directly to the host 102, and sequential data having a relatively large size may be stored in the second memory system 120 coupled to the host 102 through the first memory system 110. In this way, not only can the physically separated first and second memory systems 110 and 120 be used logically as one memory system, but the data stored in each of the first and second memory systems 110 and 120 can also be processed efficiently.

Second embodiment

Fig. 8A to 8D illustrate a data processing system including a plurality of memory systems according to another embodiment of the present disclosure.

Referring to fig. 8A, a data processing system 100 according to another embodiment of the present disclosure may include a host 102 and a plurality of memory systems 190, 110, and 120.

According to an embodiment, the plurality of memory systems 190, 110, and 120 may include three memory systems, i.e., a main memory system 190, a first memory system 110, and a second memory system 120.

The host 102 may transmit a plurality of commands corresponding to the user request to the plurality of memory systems 190, 110, and 120, and thus, the plurality of memory systems 190, 110, and 120 may perform a plurality of command operations corresponding to the plurality of commands, i.e., operations corresponding to the user request.

The plurality of memory systems 190, 110, and 120 may operate in response to a request of the host 102, and in particular, may store data to be accessed by the host 102. In other words, any of the plurality of memory systems 190, 110, and 120 may serve as a primary or secondary memory device for the host 102. Each of the plurality of memory systems 190, 110, and 120 may be implemented as any of various types of memory devices, depending on the host interface protocol used to couple to the host 102. For example, each of the memory systems 190, 110, and 120 may be implemented by a Solid State Drive (SSD), a multimedia card in the form of an MMC, eMMC (embedded MMC), RS-MMC (reduced-size MMC), or micro-MMC, a secure digital card in the form of an SD, mini-SD, or micro-SD card, a Universal Serial Bus (USB) storage device, a Universal Flash Storage (UFS) device, a Compact Flash (CF) card, a smart media card, and/or a memory stick.

Each of the memory systems 190, 110, and 120 may be integrated into one semiconductor device to configure a memory card such as the following: Personal Computer Memory Card International Association (PCMCIA) cards, Compact Flash (CF) cards, smart media cards in the form of SM and SMC, memory sticks, multimedia cards in the form of MMC, RS-MMC and micro-MMC, secure digital cards in the form of SD, mini-SD, micro-SD and SDHC, and/or Universal Flash Storage (UFS) devices.

For another example, each of the memory systems 190, 110, and 120 may constitute one of: a computer, an ultra mobile PC (UMPC), a workstation, a netbook, a personal digital assistant (PDA), a portable computer, a network tablet, a wireless phone, a mobile phone, a smart phone, an electronic book, a portable multimedia player (PMP), a portable game machine, a navigation device, a black box, a digital camera, a digital multimedia broadcasting (DMB) player, a three-dimensional television, a smart television, a digital audio recorder, a digital audio player, a digital picture recorder, a digital picture player, a digital video recorder, a digital video player, a storage device configuring a data center, a device capable of transmitting and receiving information in a wireless environment, one of various electronic devices configuring a home network, one of various electronic devices configuring a computer network, one of various electronic devices configuring a telematics network, an RFID (radio frequency identification) device, or one of various constituent elements configuring a computing system.

The main memory system 190 may include a main storage area (MAIN MEMORY SPACE) 1902. The first memory system 110 may include a first storage area (MEMORY SPACE1) 1102. The second memory system 120 may include a second storage area (MEMORY SPACE2) 1202. Each of the main storage area 1902 included in the main memory system 190, the first storage area 1102 included in the first memory system 110, and the second storage area 1202 included in the second memory system 120 may include a storage device such as: a volatile memory device such as a dynamic random access memory (DRAM) or a static random access memory (SRAM), or a non-volatile memory device such as a read only memory (ROM), a mask ROM (MROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a ferroelectric RAM (FRAM), a phase-change RAM (PRAM), a magnetic RAM (MRAM), a resistive RAM (RRAM), or a flash memory.

The main memory system 190 may be coupled directly to the host 102. That is, the main memory system 190 may receive write data and store the write data in the main memory area 1902 according to a request of the host 102. Also, the main memory system 190 may read data stored in the main memory area 1902 according to a request of the host 102, and output the read data to the host 102.

The first memory system 110 may be directly coupled to the main memory system 190, but may not be directly coupled to the host 102. That is, when commands and data are transferred between the first memory system 110 and the host 102, the commands and data may be transferred through the main memory system 190. For example, when receiving write data according to a request of the host 102, the first memory system 110 may receive the write data through the main memory system 190. Of course, the first memory system 110 may store the write data of the host 102 received through the main memory system 190 in the first storage area 1102 included in the first memory system 110. Similarly, when read data stored in the first storage area 1102 included in the first memory system 110 is read and output to the host 102 according to a request of the host 102, the first memory system 110 may output the read data to the host 102 through the main memory system 190.

The second memory system 120 may be directly coupled to the main memory system 190, but may not be directly coupled to the host 102. That is, when commands and data are transferred between the second memory system 120 and the host 102, the commands and data may be transferred through the main memory system 190. For example, when receiving write data according to a request of the host 102, the second memory system 120 may receive the write data through the main memory system 190. Of course, the second memory system 120 may store the write data of the host 102 received through the main memory system 190 in the second storage area 1202 included in the second memory system 120. Similarly, when read data stored in the second storage area 1202 included in the second memory system 120 is read and output to the host 102 according to a request of the host 102, the second memory system 120 can output the read data to the host 102 through the main memory system 190.

Referring to FIG. 8B, a detailed configuration of the main memory system 190 is shown.

The main memory system 190 includes: a memory device, i.e., a main non-volatile memory device 1503, which stores data to be accessed by the host 102; and a main controller 1303 which controls the storage of data into the main non-volatile memory device 1503. The main non-volatile memory device 1503 may be configured as the main storage area 1902 included in the main memory system 190 described above with reference to fig. 8A.

The main controller 1303 controls the main nonvolatile memory device 1503 in response to a request from the host 102. For example, the main controller 1303 supplies data read from the main nonvolatile memory device 1503 to the host 102, and stores data supplied from the host 102 in the main nonvolatile memory device 1503. To this end, the main controller 1303 controls operations of the main nonvolatile memory device 1503, such as a read operation, a write operation, a program operation, and an erase operation.

In detail, the main controller 1303 included in the main memory system 190 may include: a first interface (INTERFACE1) 1323, a processor (PROCESSOR) 1343, an error correction code (ECC) component (hereinafter abbreviated as ECC) 1383, a power management unit (PMU) 1403, a memory interface (MEMORY INTERFACE) 1423, a memory (MEMORY) 1443, a second interface (INTERFACE2) 133, and a third interface (INTERFACE3) 135.

The first interface 1323 performs operations to exchange commands and data transferred between the main memory system 190 and the host 102, and may be configured to communicate with the host 102 through at least one of various interface protocols such as: Universal Serial Bus (USB), multimedia card (MMC), peripheral component interconnect express (PCI-E), serial attached SCSI (SAS), Serial Advanced Technology Attachment (SATA), Parallel Advanced Technology Attachment (PATA), Small Computer System Interface (SCSI), Enhanced Small Disk Interface (ESDI), Integrated Drive Electronics (IDE), and/or Mobile Industry Processor Interface (MIPI). As an area for exchanging data with the host 102, the first interface 1323 may be driven by firmware called a Host Interface Layer (HIL).

ECC 1383 may correct erroneous bits of data processed in main non-volatile memory device 1503 and may include an ECC encoder and an ECC decoder. The ECC encoder may perform error correction encoding on data to be programmed to the main nonvolatile memory device 1503, and may thereby generate data to which parity bits are added. The data to which the parity bits are added may be stored in the main nonvolatile memory device 1503. When reading data stored in the main nonvolatile memory device 1503, the ECC decoder detects and corrects errors included in the data read from the main nonvolatile memory device 1503. In other words, after performing error correction decoding on data read from the main nonvolatile memory device 1503, the ECC 1383 may determine whether the error correction decoding has succeeded, may output an indication signal such as an error correction success/failure signal according to the determination result, and may correct an error bit of the read data by using a parity bit generated in the ECC encoding process. If the number of error bits that have occurred is equal to or greater than the error bit correction limit, ECC 1383 may not correct the error bits and may output an error correction failure signal indicating that the error bits cannot be corrected.

ECC 1383 may perform error correction by using an LDPC (low density parity check) code, a BCH (Bose-Chaudhuri-Hocquenghem) code, a turbo code, a Reed-Solomon (Reed-Solomon) code, a convolutional code, an RSC (recursive systematic code), or a coded modulation such as TCM (trellis coded modulation) or BCM (block coded modulation). However, error correction is not limited to these techniques. To this end, ECC 1383 may include suitable hardware and software for error correction.
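As a rough illustration of the decode decision described above, the following sketch reports success, correction, or failure depending on whether the number of detected error bits reaches the correction limit. The parity computation itself is abstracted away, and all names are hypothetical.

typedef enum { ECC_OK, ECC_CORRECTED, ECC_UNCORRECTABLE } ecc_status_t;

/* Decode decision only: the actual parity scheme (LDPC, BCH, ...) is not modeled. */
ecc_status_t ecc_decode_status(unsigned detected_error_bits, unsigned correction_limit)
{
    if (detected_error_bits == 0)
        return ECC_OK;                     /* no errors detected                     */
    if (detected_error_bits < correction_limit)
        return ECC_CORRECTED;              /* correctable using the stored parity    */
    return ECC_UNCORRECTABLE;              /* at or beyond the limit: failure signal */
}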

The PMU 1403 provides and manages power for the main controller 1303, that is, power for the components included in the main controller 1303.

The memory interface 1423 serves as a memory/storage device interface that performs an interface connection between the main controller 1303 and the main nonvolatile memory device 1503 to allow the main controller 1303 to control the main nonvolatile memory device 1503 in response to a request from the host 102. When the main nonvolatile memory device 1503 is a flash memory, particularly a NAND flash memory, the memory interface 1423, serving as a NAND flash controller (NFC), generates control signals for the main nonvolatile memory device 1503 and processes data under the control of the processor 1343.

The memory interface 1423 may process commands and data between the main controller 1303 and the main nonvolatile memory device 1503, and may support, for example, the operation of a NAND flash memory interface, in particular, data input/output between the main controller 1303 and the main nonvolatile memory device 1503. The memory interface 1423, which is a region for exchanging data with the main nonvolatile memory device 1503, may be driven by firmware called a Flash Interface Layer (FIL).

The second interface 133 may be an interface that processes commands and data between the main controller 1303 and the first memory system 110, i.e., a system interface that performs an interface connection between the main memory system 190 and the first memory system 110. The second interface 133 may transfer commands and data between the main memory system 190 and the first memory system 110 under the control of the processor 1343.

The third interface 135 may be an interface that processes commands and data between the main controller 1303 and the second memory system 120, i.e., a system interface that performs an interface connection between the main memory system 190 and the second memory system 120. The third interface 135 may transfer commands and data between the main memory system 190 and the second memory system 120 under the control of the processor 1343.

The memory 1443, which is a working memory of the main memory system 190 and the main controller 1303, stores data for driving the main memory system 190 and the main controller 1303. In detail, when the main controller 1303 controls the main nonvolatile memory device 1503 in response to a request from the host 102, for example, when the main controller 1303 controls operations such as a read operation, a write operation, a program operation, and an erase operation of the main nonvolatile memory device 1503, the memory 1443 temporarily stores data that should be managed. Further, the memory 1443 may temporarily store data that should be managed when commands and data are transferred between the main controller 1303 and the first memory system 110. Further, the memory 1443 may temporarily store data that should be managed when commands and data are transferred between the main controller 1303 and the second memory system 120.

Memory 1443 may be implemented by volatile memory. For example, memory 1443 may be implemented by Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).

The memory 1443 may be provided inside the main controller 1303 as shown in fig. 8B, or may be provided outside the main controller 1303. When the memory 1443 is provided outside the main controller 1303, the memory 1443 should be implemented by a separate external volatile memory operatively coupled to exchange data with the main controller 1303 through a separate memory interface (not shown).

The memory 1443 may store data that should be managed in a process of controlling the operation of the main nonvolatile memory device 1503, a process of transferring data between the main memory system 190 and the first memory system 110, and a process of transferring data between the main memory system 190 and the second memory system 120. To store such data, memory 1443 may include program memory, data memory, write buffer/cache, read buffer/cache, data buffer/cache, map buffer/cache, and the like.

The processor 1343 controls all operations of the main memory system 190 and, in particular, controls a program operation or a read operation with respect to the main non-volatile memory device 1503 in response to a write request or a read request from the host 102. The processor 1343 drives firmware called a Flash Translation Layer (FTL) to control the general operation of the main memory system 190 with respect to the main non-volatile memory device 1503. The processor 1343 may be implemented by a microprocessor or a Central Processing Unit (CPU).

For example, the main controller 1303 executes an operation requested from the host 102 in the main nonvolatile memory device 1503, that is, executes a command operation corresponding to a command received from the host 102 by the processor 1343 using the main nonvolatile memory device 1503. The main controller 1303 may perform a foreground operation, which is a command operation corresponding to a command received from the host 102, such as a program operation corresponding to a write command, a read operation corresponding to a read command, an erase operation corresponding to an erase command, or a parameter setting operation corresponding to a set parameter command or a set feature command, which is a set command.

The main controller 1303 may perform background operations with respect to the main non-volatile memory device 1503 through the processor 1343. Background operations for the main non-volatile memory device 1503 may include operations, such as garbage collection (GC) operations, to copy data stored in one memory block among the memory blocks MEMORY BLOCK <0, 1, 2, ...> of the main non-volatile memory device 1503 to another memory block. Background operations for the main non-volatile memory device 1503 may include operations, such as wear leveling (WL) operations, that exchange stored data between the memory blocks MEMORY BLOCK <0, 1, 2, ...> of the main non-volatile memory device 1503. Background operations for the main non-volatile memory device 1503 may include operations, such as a map flush operation, to store mapping data held in the main controller 1303 into a memory block MEMORY BLOCK <0, 1, 2, ...> of the main non-volatile memory device 1503. Background operations for the main non-volatile memory device 1503 may also include bad management operations, for example, bad block management operations to check and process a bad block among the plurality of memory blocks MEMORY BLOCK <0, 1, 2, ...>.
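Purely as an illustration of how such background operations might be triggered, the following sketch picks at most one operation per invocation. The trigger conditions and thresholds are invented for this example; the text only names the kinds of background operations.

typedef enum { BG_NONE, BG_BAD_BLOCK, BG_GC, BG_WEAR_LEVEL, BG_MAP_FLUSH } bg_op_t;

/* Thresholds are illustrative assumptions, not values taken from the disclosure. */
bg_op_t pick_background_op(unsigned new_bad_blocks, unsigned free_blocks,
                           unsigned max_erase_count_delta, unsigned dirty_map_entries)
{
    if (new_bad_blocks > 0)          return BG_BAD_BLOCK;   /* bad block management */
    if (free_blocks < 8)             return BG_GC;          /* garbage collection   */
    if (max_erase_count_delta > 100) return BG_WEAR_LEVEL;  /* wear leveling        */
    if (dirty_map_entries > 1024)    return BG_MAP_FLUSH;   /* map flush            */
    return BG_NONE;
}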

The main controller 1303 may generate and manage, through the processor 1343, log data corresponding to operations of accessing the memory blocks MEMORY BLOCK <0, 1, 2, ...> of the main non-volatile memory device 1503. An operation of accessing the memory blocks MEMORY BLOCK <0, 1, 2, ...> of the main non-volatile memory device 1503 includes performing a foreground operation or a background operation on the memory blocks MEMORY BLOCK <0, 1, 2, ...>.

The processor 1343 of the main controller 1303 may include a unit (not shown) for performing defect management on the main non-volatile memory device 1503. This unit performs bad block management, that is, it checks for a bad block among the plurality of memory blocks MEMORY BLOCK <0, 1, 2, ...> included in the main non-volatile memory device 1503 and treats the checked bad block as a defective block. Bad block management means that, when the main non-volatile memory device 1503 is a flash memory such as a NAND flash memory, a program failure may occur while writing data, for example, while programming data, due to the characteristics of the NAND flash memory; in that case, the memory block in which the program failure has occurred is treated as defective, and the program-failed data is written (i.e., programmed) into a new memory block.

The main controller 1303 performs an operation of transferring commands and data to be input/output between the main memory system 190 and the second memory system 120 by a processor 1343 implemented by a microprocessor or a Central Processing Unit (CPU). Commands and data that may be input/output between the main memory system 190 and the second memory system 120 may be transferred from the host 102 to the main memory system 190.

The main non-volatile memory device 1503 in the main memory system 190 can retain stored data even when power is not supplied. In particular, the main non-volatile memory device 1503 in the main memory system 190 may store write data WDATA supplied from the host 102 through a write operation, and may supply read data (not shown) stored in the main non-volatile memory device 1503 to the host 102 through a read operation.

Although the main non-volatile memory device 1503 may be implemented by a non-volatile memory such as a flash memory, for example, a NAND flash memory, it is noted that the main non-volatile memory device 1503 may be implemented by any of various memories such as: a phase-change random access memory (PCRAM), a resistive random access memory (RRAM or ReRAM), a ferroelectric random access memory (FRAM), and/or a spin transfer torque magnetic random access memory (STT-RAM or STT-MRAM).

The main non-volatile memory device 1503 includes a plurality of memory blocks MEMORY BLOCK <0, 1, 2, ...>. In other words, the main non-volatile memory device 1503 may store write data WDATA supplied from the host 102 into the memory blocks MEMORY BLOCK <0, 1, 2, ...> through a write operation, and may supply read data (not shown) stored in the memory blocks MEMORY BLOCK <0, 1, 2, ...> to the host 102 through a read operation.

Each of the memory blocks MEMORY BLOCK <0, 1, 2, ...> included in the main non-volatile memory device 1503 includes a plurality of pages P <0, 1, 2, 3, 4, ...>. Also, although not shown in detail in the drawing, each of the pages P <0, 1, 2, 3, 4, ...> includes a plurality of memory cells.

Each of the memory blocks MEMORY BLOCK <0, 1, 2, ...> included in the main non-volatile memory device 1503 may be a single-level cell (SLC) memory block or a multi-level cell (MLC) memory block, depending on the number of bits that may be stored or represented in one memory cell included in the memory block. An SLC memory block includes a plurality of pages implemented by memory cells each storing 1 bit of data, and has excellent operation performance and high endurance. An MLC memory block includes a plurality of pages implemented by memory cells each storing multiple bits of data (e.g., 2 or more bits), and may be more highly integrated than an SLC memory block because it has greater data storage space than an SLC memory block.

Depending on the number of bits stored per memory cell, there are different types of MLC memory blocks with different storage capacities.

Referring to fig. 8C, a detailed configuration of the first memory system 110 is shown.

The first memory system 110 includes: a memory device, i.e., a first nonvolatile memory device 1501, which stores data to be accessed by the host 102; and a first controller 1301 which controls storage of data in the first nonvolatile memory device 1501. The first non-volatile memory device 1501 may be configured as the first storage area 1102 in the first memory system 110 described above with reference to fig. 8A.

The first controller 1301 may control the first nonvolatile memory device 1501 in response to a request from the host 102 transferred through the main memory system 190. For example, the first controller 1301 may supply data read from the first nonvolatile memory device 1501 to the host 102 through the main memory system 190, and may store data supplied from the host 102 transferred through the main memory system 190 into the first nonvolatile memory device 1501. To this end, the first controller 1301 may control operations of the first nonvolatile memory device 1501, such as a read operation, a write operation, a program operation, and an erase operation.

In detail, the first controller 1301 included in the first memory system 110 may include: a fourth interface (INTERFACE4) 1321, a processor (PROCESSOR) 1341, an error correction code (ECC) component (hereinafter abbreviated as ECC) 1381, a power management unit (PMU) 1401, a memory interface (MEMORY INTERFACE) 1421, and a memory (MEMORY) 1441.

Looking at the detailed configuration of the first controller 1301 illustrated in fig. 8C, it can be seen that it is almost the same as the detailed configuration of the main controller 1303 illustrated in fig. 8B. That is, the fourth interface 1321 in the first controller 1301 may be configured to be the same as the first interface 1323 in the main controller 1303. The processor 1341 in the first controller 1301 may be configured to be the same as the processor 1343 in the main controller 1303. ECC 1381 in first controller 1301 may be configured the same as ECC 1383 in main controller 1303. The PMU 1401 in the first controller 1301 may be configured to be identical to the PMU 1403 in the main controller 1303. The memory interface 1421 in the first controller 1301 may be configured to be the same as the memory interface 1423 in the main controller 1303. The memory 1441 in the first controller 1301 may be configured to be the same as the memory 1443 in the main controller 1303.

One difference may be that the first interface 1323 in the main controller 1303 is an interface for commands and data transferred between the host 102 and the main memory system 190, but the fourth interface 1321 in the first controller 1301 is an interface for commands and data transferred between the main memory system 190 and the first memory system 110. Another difference may be that the first controller 1301 may not include any components corresponding to the second interface 133 and the third interface 135 in the main controller 1303.

Except for the above differences, the main controller 1303 and the first controller 1301 operate the same; therefore, a detailed description of the operation thereof is omitted herein.

Referring to fig. 8D, a detailed configuration of the second memory system 120 is shown.

The second memory system 120 includes: a memory device, i.e., second non-volatile memory device 1502, stores data to be accessed by host 102; and a second controller 1302 that controls data storage in the second nonvolatile memory device 1502. The second non-volatile memory device 1502 may be configured as the second storage area 1202 in the second memory system 120 described above with reference to fig. 8A.

The second controller 1302 may control the second non-volatile memory device 1502 in response to a request from the host 102 transmitted through the main memory system 190. For example, the second controller 1302 may provide data read from the second nonvolatile memory device 1502 to the host 102 through the main memory system 190, and may store data provided from the host 102 transferred through the main memory system 190 into the second nonvolatile memory device 1502. To this end, the second controller 1302 may control operations of the second nonvolatile memory device 1502, such as a read operation, a write operation, a program operation, and an erase operation.

In detail, the second controller 1302 included in the second memory system 120 may include: a fifth interface (INTERFACE5) 1322, a processor (PROCESSOR) 1342, an error correction code (ECC) component (hereinafter abbreviated as ECC) 1382, a power management unit (PMU) 1402, a memory interface (MEMORY INTERFACE) 1422, and a memory (MEMORY) 1442.

Looking at the detailed configuration of the second controller 1302 shown in fig. 8D, it can be seen that it is almost the same as the detailed configuration of the main controller 1303 shown in fig. 8B. That is, the fifth interface 1322 in the second controller 1302 may be configured to be the same as the first interface 1323 in the main controller 1303. The processor 1342 in the second controller 1302 may be configured the same as the processor 1343 in the main controller 1303. ECC 1382 in second controller 1302 may be configured the same as ECC 1383 in main controller 1303. The PMU 1402 in the second controller 1302 may be configured to be identical to the PMU 1403 in the main controller 1303. The memory interface 1422 in the second controller 1302 may be configured to be the same as the memory interface 1423 in the main controller 1303. The memory 1442 in the second controller 1302 may be configured to be the same as the memory 1443 in the main controller 1303.

One difference may be that the first interface 1323 in the main controller 1303 is an interface for commands and data transferred between the host 102 and the main memory system 190, but the fifth interface 1322 in the second controller 1302 is an interface for commands and data transferred between the main memory system 190 and the second memory system 120. Another difference may be that the second controller 1302 does not include any components corresponding to the second interface 133 and the third interface 135 in the main controller 1303.

Except for the above differences, the operations of the main controller 1303 and the second controller 1302 are the same; therefore, a detailed description of the operation thereof is omitted herein.

Fig. 9A and 9B illustrate setup operations and command processing operations of a data processing system according to an embodiment of the present disclosure.

Referring to fig. 9A and 9B, the operation of data processing system 100 includes a set operation in an initial operation period (INIT) and a command processing operation in a NORMAL operation period (NORMAL).

In detail, referring to fig. 9A, when the initial operation period (INIT) STARTs (START) (S1), the main memory system 190 may set the mapping unit of the internal mapping information as a reference size unit. That is, a mapping unit of information indicating a mapping relationship between a physical address of the main storage area 1902 included in the main memory system 190 and a logical address used in the host 102 may be set as a reference size unit.

In the state where the initial operation period (INIT) has started (START), the main memory system 190 may request first capacity information (CAPA_INFO1) of the first storage region 1102 included in the first memory system 110 (REQUEST CAPA_INFO1). The first memory system 110 may transmit the first capacity information (CAPA_INFO1) of the first storage region 1102 included in the first memory system 110 to the main memory system 190 as a response to the request of the main memory system 190 (ACK CAPA_INFO1). Thereafter, the first memory system 110 may set the mapping unit of its internal mapping information (i.e., the mapping unit of the information mapping the physical addresses and logical addresses of the first storage region 1102) to a first size unit in response to the first mapping setting command MAP_SET_CMD1 transferred from the main memory system 190.

In the state where the initial operation period (INIT) has started (START), the main memory system 190 may check the first capacity information (CAPA_INFO1) of the first storage area 1102 received from the first memory system 110, and may set the value of the first size unit, which is different from the value of the reference size unit, according to the check result. That is, according to the result of checking the first capacity information (CAPA_INFO1) of the first storage region 1102 received from the first memory system 110, the main memory system 190 may generate a first mapping setting command MAP_SET_CMD1 for setting the mapping unit of the internal mapping information to be managed in the first memory system 110 to the first size unit, and may transmit the generated first mapping setting command MAP_SET_CMD1 to the first memory system 110.

In the state where the initial operation period (INIT) has started (START), the main memory system 190 may request second capacity information (CAPA_INFO2) of the second storage region 1202 included in the second memory system 120 (REQUEST CAPA_INFO2). The second memory system 120 may transmit the second capacity information (CAPA_INFO2) of the second storage area 1202 included in the second memory system 120 to the main memory system 190 as a response to the request of the main memory system 190 (ACK CAPA_INFO2). Thereafter, in response to the second mapping setting command MAP_SET_CMD2 transferred from the main memory system 190, the second memory system 120 may set the mapping unit of the information mapping the physical addresses of the second storage region 1202 to logical addresses, i.e., its internal mapping information, to a second size unit.

In the state where the initial operation period (INIT) has started (START), the main memory system 190 may check the second capacity information (CAPA_INFO2) of the second storage area 1202 received from the second memory system 120, and may set the value of the second size unit, which is different from the value of the reference size unit and the value of the first size unit, according to the check result. That is, according to the result of checking the second capacity information (CAPA_INFO2) of the second storage region 1202 received from the second memory system 120, the main memory system 190 may generate a second mapping setting command MAP_SET_CMD2 for setting the mapping unit of the internal mapping information to be managed in the second memory system 120 to the second size unit, and may transmit the generated second mapping setting command MAP_SET_CMD2 to the second memory system 120.
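A minimal sketch of this initialization exchange, viewed from the main memory system, might look as follows. The callback structure stands in for the second and third interfaces 133 and 135, error handling is simplified, and gathering both capacities before deciding the size units is just one possible ordering; the unit-selection policy itself is sketched separately after the comparison cases below.

#include <stdint.h>

typedef struct {
    int (*request_capa)(uint64_t *capa_bytes);     /* REQUEST CAPA_INFO -> ACK CAPA_INFO */
    int (*send_map_set)(uint64_t map_unit_bytes);  /* MAP_SET_CMD                        */
} subsystem_if_t;

/* Runs the INIT-period exchange with both subordinate memory systems. */
int run_init_period(const subsystem_if_t *first, const subsystem_if_t *second,
                    void (*choose_units)(uint64_t capa1, uint64_t capa2,
                                         uint64_t *unit1, uint64_t *unit2))
{
    uint64_t capa1, capa2, unit1, unit2;

    if (first->request_capa(&capa1) != 0)  return -1;  /* CAPA_INFO1 */
    if (second->request_capa(&capa2) != 0) return -1;  /* CAPA_INFO2 */

    choose_units(capa1, capa2, &unit1, &unit2);        /* set the first/second size units */

    if (first->send_map_set(unit1) != 0)   return -1;  /* MAP_SET_CMD1 */
    return second->send_map_set(unit2);                /* MAP_SET_CMD2 */
}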

The main storage area 1902 included in the main memory system 190, the first storage area 1102 included in the first memory system 110, and the second storage area 1202 included in the second memory system 120 may be storage spaces including nonvolatile memory cells. The nonvolatile memory cell has a characteristic that a physical space cannot be rewritten. Accordingly, to store data requested to be written by the host 102 into the main storage area 1902, the first storage area 1102, and the second storage area 1202 including non-volatile memory cells, the main memory system 190, the first memory system 110, and the second memory system 120 may perform mapping of a file system used by the host 102 to a storage space including non-volatile memory cells through a Flash Translation Layer (FTL). For example, an address of data according to a file system used by the host 102 may be referred to as a logical address or a logical block address, and addresses of storage spaces for storing data in the main storage area 1902 including the nonvolatile memory unit, the first storage area 1102, and the second storage area 1202 may be referred to as a physical address or a physical block address. Accordingly, the main memory system 190, the first memory system 110, and the second memory system 120 may generate and manage mapping information indicating a mapping relationship between logical addresses corresponding to logical sectors of a file system used in the host 102 and physical addresses corresponding to physical spaces of the main storage area 1902, the first storage area 1102, and the second storage area 1202. According to an embodiment, when the host 102 transfers a logical address to the main memory system 190, the first memory system 110, or the second memory system 120 together with a write command and data, the main memory system 190, the first memory system 110, or the second memory system 120 may search the main memory area 1902, the first memory area 1102, or the second memory area 1202 for a memory space for storing data, may map a physical address of the memory space found in the search to the logical address, and may program the data to the memory space. According to an embodiment, when the host 102 transfers a logical address to the main memory system 190, the first memory system 110, or the second memory system 120 together with a read command, the main memory system 190, the first memory system 110, or the second memory system 120 may search for a physical address mapped to the logical address, may read data stored in the physical address found in the search from the main memory area 1902, the first memory area 1102, or the second memory area 1202, and may output the read data to the host 102.
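For illustration, a minimal logical-to-physical table of the kind this paragraph describes could be kept as a flat array indexed by the logical mapping unit. Real flash translation layers add caching, journaling, and garbage collection; the names below are hypothetical.

#include <stdint.h>
#include <stdlib.h>

#define INVALID_PPA UINT64_MAX

typedef struct {
    uint64_t *l2p;       /* one physical address per logical mapping unit */
    uint64_t  entries;
} l2p_table_t;

int l2p_init(l2p_table_t *t, uint64_t entries)
{
    t->l2p = malloc(entries * sizeof *t->l2p);
    if (t->l2p == NULL)
        return -1;
    for (uint64_t i = 0; i < entries; i++)
        t->l2p[i] = INVALID_PPA;                 /* not yet mapped */
    t->entries = entries;
    return 0;
}

/* Write path: bind the newly programmed physical area to the logical address. */
void l2p_map(l2p_table_t *t, uint64_t lba, uint64_t ppa)
{
    t->l2p[lba] = ppa;
}

/* Read path: look up the physical area that currently holds the logical address. */
uint64_t l2p_resolve(const l2p_table_t *t, uint64_t lba)
{
    return t->l2p[lba];                          /* INVALID_PPA if never written */
}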

In the main storage area 1902, the first storage area 1102, and the second storage area 1202, each including nonvolatile memory cells, the size unit of a physical space for writing and reading data may be 512 bytes or any one of 1, 2, and 4 Kbytes (i.e., the size of a page). The size of a page may vary depending on the type of memory device. Although the mapping unit of the internal mapping information may be managed to correspond to the size of a page, it may also be managed to correspond to a unit larger than the size of a page. For example, instead of a 4-Kbyte unit, a 512-Kbyte unit or a 1-Mbyte unit may be managed as the mapping unit of the internal mapping information. In summary, the fact that the main memory system 190 sets the mapping unit of its internal mapping information to the reference size unit, the first memory system 110 sets the mapping unit of its internal mapping information to the first size unit, and the second memory system 120 sets the mapping unit of its internal mapping information to the second size unit means that the mapping units of the internal mapping information managed in the main memory system 190, the first memory system 110, and the second memory system 120 may be different from each other.
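A quick back-of-the-envelope sketch shows why a larger mapping unit shrinks the mapping table: the entry count is simply the capacity divided by the mapping unit. The 512-Gbyte capacity used below is an arbitrary example value.

#include <stdint.h>
#include <stdio.h>

static uint64_t l2p_entries(uint64_t capacity_bytes, uint64_t map_unit_bytes)
{
    return capacity_bytes / map_unit_bytes;      /* one entry per mapping unit */
}

int main(void)
{
    uint64_t cap = 512ULL << 30;                 /* an example 512-Gbyte storage area */

    printf("4 Kbyte unit  : %llu entries\n",
           (unsigned long long)l2p_entries(cap, 4ULL << 10));
    printf("512 Kbyte unit: %llu entries\n",
           (unsigned long long)l2p_entries(cap, 512ULL << 10));
    printf("1 Mbyte unit  : %llu entries\n",
           (unsigned long long)l2p_entries(cap, 1ULL << 20));
    return 0;
}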

In more detail, according to an embodiment, in the state where the initial operation period (INIT) has started (START), the main memory system 190 may compare the first capacity information (CAPA_INFO1) of the first storage region 1102 received from the first memory system 110 with the second capacity information (CAPA_INFO2) of the second storage region 1202 received from the second memory system 120.

As a result of the comparison, when the first storage region 1102 is greater than the second storage region 1202, the main memory system 190 may generate the first and second mapping setting commands MAP_SET_CMD1 and MAP_SET_CMD2 for setting the first size unit to be greater than the second size unit, and may transmit the generated first and second mapping setting commands MAP_SET_CMD1 and MAP_SET_CMD2 to the first and second memory systems 110 and 120, respectively. For example, the main memory system 190 may generate a first mapping setting command MAP_SET_CMD1 for setting the first size unit to 512 Kbytes and transmit the generated first mapping setting command MAP_SET_CMD1 to the first memory system 110, and may generate a second mapping setting command MAP_SET_CMD2 for setting the second size unit to 16 Kbytes, which is smaller than the first size unit, and transmit the generated second mapping setting command MAP_SET_CMD2 to the second memory system 120.

As a result of the comparison, when the first storage region 1102 is smaller than the second storage region 1202, the main memory system 190 may generate the first and second mapping setting commands MAP_SET_CMD1 and MAP_SET_CMD2 for setting the first size unit to be smaller than the second size unit, and may transmit the generated first and second mapping setting commands MAP_SET_CMD1 and MAP_SET_CMD2 to the first and second memory systems 110 and 120, respectively. For example, the main memory system 190 may generate a first mapping setting command MAP_SET_CMD1 for setting the first size unit to 4 Kbytes and transmit the generated first mapping setting command MAP_SET_CMD1 to the first memory system 110, and may generate a second mapping setting command MAP_SET_CMD2 for setting the second size unit to 256 Kbytes, which is greater than the first size unit, and transmit the generated second mapping setting command MAP_SET_CMD2 to the second memory system 120.

As a result of the comparison, when the sizes of the first and second storage regions 1102 and 1202 are identical to each other, the main memory system 190 may generate the first and second mapping setting commands MAP_SET_CMD1 and MAP_SET_CMD2 that set either one of the first size unit and the second size unit to be larger and the other to be smaller, and may transmit the generated first and second mapping setting commands MAP_SET_CMD1 and MAP_SET_CMD2 to the first and second memory systems 110 and 120, respectively.
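A hedged sketch of this unit-selection policy follows. The concrete byte values echo the examples above (512 Kbytes/16 Kbytes and 4 Kbytes/256 Kbytes), and the choice made for equal capacities is arbitrary, since the text allows either ordering.

#include <stdint.h>

/* Picks the first and second size units from the reported capacities. */
void choose_map_units(uint64_t capa1, uint64_t capa2,
                      uint64_t *unit1, uint64_t *unit2)
{
    if (capa1 > capa2) {            /* larger first storage area: larger first unit   */
        *unit1 = 512ULL << 10;      /* 512 Kbytes */
        *unit2 =  16ULL << 10;      /*  16 Kbytes */
    } else if (capa1 < capa2) {     /* larger second storage area: larger second unit */
        *unit1 =   4ULL << 10;      /*   4 Kbytes */
        *unit2 = 256ULL << 10;      /* 256 Kbytes */
    } else {                        /* equal capacities: either ordering is allowed   */
        *unit1 =   4ULL << 10;
        *unit2 = 256ULL << 10;
    }
}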

If the operations of setting the mapping unit of the internal mapping information in the main memory system 190 to the reference size unit, setting the mapping unit of the internal mapping information in the first memory system 110 to the first size unit, and setting the mapping unit of the internal mapping information in the second memory system 120 to the second size unit as described above are completed, the initial operation period (INIT) may end (END). After the initial operation period (INIT) ends in this way, the NORMAL operation period (NORMAL) may start (START) (S3).

For reference, the above-described initial operation period (INIT) may be entered (START) and exited (END) during the boot process of the main memory system 190, the first memory system 110, and the second memory system 120. Also, the above-described initial operation period (INIT) may be entered (START) and exited (END) at an arbitrary point in time according to a request of the host 102.

Referring to fig. 9B, in a state where the NORMAL operation period (NORMAL) has started (START), the host 102 may generate an arbitrary command and transfer the generated command to the main memory system 190. That is, the main memory system 190 may receive an arbitrary input command IN_CMD in the state where the NORMAL operation period (NORMAL) has started (START). The input command IN_CMD may be any command that can be generated by the host 102 to control the memory systems 190, 110, and 120, such as a write command, a read command, or an erase command.

In this case, the main memory system 190 may analyze the input command IN_CMD transferred from the host 102, and may select a processing location for the input command IN_CMD according to the analysis result (S4). In other words, in the state where the NORMAL operation period (NORMAL) has started (START), the main memory system 190 may analyze the input command IN_CMD transferred from the host 102 and select, according to the analysis result, whether to process the input command IN_CMD by itself in the main memory system 190 or to have the input command IN_CMD processed in any one of the first and second memory systems 110 and 120 (S4).

The result of the operation S4 may indicate that the main memory system 190 autonomously processes the input command IN_CMD received from the host 102 (S9), that the main memory system 190 transfers the input command IN_CMD received from the host 102 to the first memory system 110 and the first memory system 110 processes the input command IN_CMD (S5), or that the main memory system 190 transfers the input command IN_CMD received from the host 102 to the second memory system 120 and the second memory system 120 processes the input command IN_CMD (S7).
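The selection among S5, S7, and S9 can be summarized as a simple dispatch. The sketch below is a non-authoritative illustration of that flow; the analyze callback, the process methods, and the system objects are assumed names introduced only for this example, and the concrete routing rules for write and read commands are the ones refined later with reference to FIG. 10 and FIG. 11.

```python
# Minimal sketch of operation S4: deciding where an input command IN_CMD is
# processed. All names here are illustrative assumptions, not part of the
# disclosed interface.

def dispatch(in_cmd, analyze, main_sys, first_sys, second_sys):
    target = analyze(in_cmd)        # analysis result: "main", "first" or "second"
    if target == "first":           # S5: forward to the first memory system 110
        result = first_sys.process(in_cmd)
    elif target == "second":        # S7: forward to the second memory system 120
        result = second_sys.process(in_cmd)
    else:                           # S9: the main memory system 190 handles it
        result = main_sys.process(in_cmd)
    return result                   # reported back to the host as ACK IN_CMD RESULT
```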

First, when the main memory system 190 transfers the input command IN_CMD transferred from the host 102 to the first memory system 110 and the first memory system 110 processes the input command IN_CMD, the operation (S5) may be performed in the following order.

The main memory system 190 may transfer the input command IN_CMD transferred from the host 102 to the first memory system 110.

The first memory system 110 may perform a command operation in response to the input command IN_CMD transferred through the main memory system 190. For example, when the input command IN_CMD is a write command, the first memory system 110 may store write data input together with the input command IN_CMD into the first storage area 1102. As another example, when the input command IN_CMD is a read command, the first memory system 110 may read data stored in the first storage area 1102.

The first memory system 110 may transmit a result (RESULT) of processing the input command IN_CMD to the main memory system 190 (ACK IN_CMD RESULT). For example, when the input command IN_CMD is a write command, the first memory system 110 may transmit a response (ACK) to the main memory system 190 notifying that the write data input together with the input command IN_CMD has been normally stored in the first storage area 1102. As another example, when the input command IN_CMD is a read command, the first memory system 110 may transfer read data read from the first storage area 1102 to the main memory system 190.

The main memory system 190 may transmit the result (RESULT) of processing the input command IN_CMD received from the first memory system 110 to the host 102 (ACK IN_CMD RESULT).

When the main memory system 190 transfers the input command IN_CMD transferred from the host 102 to the second memory system 120 and the second memory system 120 processes the input command IN_CMD, the operation (S7) may be performed in the following order.

The main memory system 190 may transfer the input command IN_CMD transferred from the host 102 to the second memory system 120.

The second memory system 120 may perform a command operation in response to the input command IN_CMD transferred through the main memory system 190. For example, when the input command IN_CMD is a write command, the second memory system 120 may store write data input together with the input command IN_CMD into the second storage area 1202. As another example, when the input command IN_CMD is a read command, the second memory system 120 may read data stored in the second storage area 1202.

The second memory system 120 may transmit a result (RESULT) of processing the input command IN_CMD to the main memory system 190 (ACK IN_CMD RESULT). For example, when the input command IN_CMD is a write command, the second memory system 120 may transmit a response (ACK) to the main memory system 190 notifying that the write data input together with the input command IN_CMD has been normally stored in the second storage area 1202. As another example, when the input command IN_CMD is a read command, the second memory system 120 may transfer read data read from the second storage area 1202 to the main memory system 190.

The main memory system 190 may transmit the result (RESULT) of processing the input command IN_CMD received from the second memory system 120 to the host 102 (ACK IN_CMD RESULT).

When the main memory system 190 processes the input command IN_CMD transferred from the host 102 by itself, the operation (S9) may be performed in the following order.

The main memory system 190 may perform a command operation in response to the input command IN_CMD. For example, when the input command IN_CMD is a write command, the main memory system 190 may store write data input together with the input command IN_CMD into the main storage area 1902. As another example, when the input command IN_CMD is a read command, the main memory system 190 may read data stored in the main storage area 1902.

The main memory system 190 may transmit a result (RESULT) of processing the input command IN_CMD to the host 102 (ACK IN_CMD RESULT). For example, when the input command IN_CMD is a write command, the main memory system 190 may transmit a response signal to the host 102 notifying that the write data input together with the input command IN_CMD has been normally stored in the main storage area 1902. As another example, when the input command IN_CMD is a read command, the main memory system 190 may transmit read data read from the main storage area 1902 to the host 102.

FIG. 10 illustrates an example of command processing operations of data processing system 100 according to an embodiment of the present disclosure.

Referring to FIG. 10, a write command processing operation among the command processing operations of data processing system 100 is described in detail. That is, the case where the input command IN_CMD in the command processing operation described above with reference to fig. 9B is the WRITE command WRITE_CMD is described in detail.

In detail, when the input command IN_CMD is the WRITE command WRITE_CMD, the WRITE DATA WRITE_DATA may be transferred from the host 102 to the main memory system 190 together with the WRITE command WRITE_CMD.

After the WRITE DATA WRITE _ DATA is transferred from the host 102 to the main memory system 190 together with the WRITE command WRITE _ CMD in this manner, operation S4 may be started. That is, after the WRITE DATA WRITE _ DATA is transferred from the host 102 to the main memory system 190 together with the WRITE command WRITE _ CMD, the main memory system 190 may analyze the WRITE command WRITE _ CMD, and may START an operation of selecting a processing location of the WRITE command WRITE _ CMD according to the analysis result (S4 START).

According to an embodiment, the operation of analyzing the WRITE command WRITE _ CMD in the main memory system 190 may include an operation of checking a pattern of WRITE DATA WRITE _ DATA corresponding to the WRITE command WRITE _ CMD. For example, the main memory system 190 may compare the size of the WRITE DATA WRITE _ DATA with a first reference size, and may recognize a pattern of the WRITE DATA WRITE _ DATA smaller than the first reference size as a random pattern and recognize a pattern of the WRITE DATA WRITE _ DATA larger than the first reference size as a sequential pattern. Also, the main memory system 190 may recognize WRITE DATA WRITE _ DATA smaller than the second reference size among the WRITE DATA WRITE _ DATA recognized as the sequential mode as the first sequential mode, and may recognize WRITE DATA WRITE _ DATA larger than the second reference size as the second sequential mode. The first reference size may be smaller than the second reference size.
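The pattern check above amounts to comparing the write-data size against the two reference sizes. The sketch below is only an illustration of that two-threshold classification; the 4-Kbyte and 512-Kbyte defaults are assumed values, since the disclosure only requires that the first reference size be smaller than the second.

```python
# Hedged sketch of the write-data pattern check: random, first sequential,
# or second sequential. The default reference sizes are assumptions for
# illustration only.

def classify_write_data(size_bytes, first_ref=4 * 1024, second_ref=512 * 1024):
    assert first_ref < second_ref, "the first reference size must be smaller"
    if size_bytes < first_ref:
        return "random"             # smaller than the first reference size
    if size_bytes < second_ref:
        return "first_sequential"   # between the first and second reference sizes
    return "second_sequential"      # larger than the second reference size
```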

The operations described with reference to fig. 10 may be based on the following assumptions: the second storage area 1202 included in the second memory system 120 is larger than the first storage area 1102 included in the first memory system 110. In other words, the operations described with reference to fig. 10 may be based on the following assumptions: the second size unit for mapping the physical address of the second storage area 1202 to a logical address is larger than the first size unit for mapping the physical address of the first storage area 1102 to a logical address. Therefore, in fig. 10, in order to store the WRITE DATA WRITE _ DATA identified as the second sequential pattern having a size larger than the second reference size into the second storage area 1202, the main memory system 190 may transfer the WRITE command WRITE _ CMD corresponding to the WRITE DATA WRITE _ DATA identified as the second sequential pattern to the second memory system 120 to allow the second memory system 120 to process the WRITE command WRITE _ CMD. Further, in fig. 10, in order to store the WRITE DATA WRITE _ DATA identified as the first sequential pattern having a size smaller than the second reference size and larger than the first reference size into the first storage area 1102, the main memory system 190 may transfer a WRITE command WRITE _ CMD corresponding to the WRITE DATA WRITE _ DATA identified as the first sequential pattern to the first memory system 110 to allow the first memory system 110 to process the WRITE command WRITE _ CMD. Further, in fig. 10, in order to store the WRITE DATA WRITE _ DATA identified as a random pattern having a size smaller than the first reference size into the main storage area 1902, the main memory system 190 may process a WRITE command WRITE _ CMD corresponding to the WRITE DATA WRITE _ DATA identified as a random pattern by itself.
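Combining the classification with the size-unit comparison gives the routing rule of fig. 10 and its mirrored form described further below. The sketch is illustrative only; the string labels for the processing locations are assumptions.

```python
# Hedged sketch of the fig. 10 write routing: random data stays in the main
# storage area 1902, and sequential data goes to whichever memory system has
# the larger storage region (and therefore the larger mapping unit).

def route_write(pattern, second_region_larger):
    if pattern == "random":
        return "main"                                     # processed by itself (S9)
    if pattern == "second_sequential":
        return "second" if second_region_larger else "first"
    return "first" if second_region_larger else "second"  # first sequential pattern
```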

In more detail, as a result of checking the pattern of the WRITE DATA WRITE_DATA, when the WRITE DATA WRITE_DATA is of the first sequential pattern, the main memory system 190 may perform operation S5, that is, the operation of transferring the WRITE command WRITE_CMD to the first memory system 110 to allow the first memory system 110 to process the WRITE command WRITE_CMD.

In detail, the main memory system 190 may transfer the WRITE command WRITE _ CMD and the WRITE DATA WRITE _ DATA transferred from the host 102 to the first memory system 110 as they are.

The first memory system 110 may store WRITE DATA WRITE _ DATA into the first storage area 1102 in response to a WRITE command WRITE _ CMD transferred through the main memory system 190.

Subsequently, the first memory system 110 may transmit a response (ACK) notifying whether the WRITE DATA WRITE_DATA has been normally stored into the first storage area 1102 to the main memory system 190 (ACK WRITE RESULT).

The main memory system 190 may transfer the RESULT (RESULT) of processing the WRITE command WRITE _ CMD received from the first memory system 110 to the host 102(ACK WRITE RESULT).

As a result of checking the pattern of the WRITE DATA WRITE_DATA, when the WRITE DATA WRITE_DATA is of the second sequential pattern, the main memory system 190 may perform operation S7, that is, the operation of transferring the WRITE command WRITE_CMD to the second memory system 120 to allow the second memory system 120 to process the WRITE command WRITE_CMD.

In detail, the main memory system 190 may transfer the WRITE command WRITE _ CMD and the WRITE DATA WRITE _ DATA transferred from the host 102 to the second memory system 120 as they are.

The second memory system 120 may store WRITE DATA WRITE _ DATA into the second storage area 1202 in response to a WRITE command WRITE _ CMD transferred through the main memory system 190.

Subsequently, the second memory system 120 may transmit a response (ACK) notifying whether the WRITE DATA WRITE _ DATA has been normally stored into the second storage area 1202 to the main memory system 190(ACK WRITE RESULT).

The main memory system 190 may transfer the RESULT (RESULT) of processing the WRITE command WRITE _ CMD received from the second memory system 120 to the host 102(ACK WRITE RESULT).

As a result of checking the pattern of the WRITE DATA WRITE_DATA, when the WRITE DATA WRITE_DATA is of the random pattern, the main memory system 190 may perform operation S9, that is, the operation of processing the WRITE command WRITE_CMD by itself.

In detail, the main memory system 190 may store WRITE DATA WRITE _ DATA into the main storage area 1902 in response to a WRITE command WRITE _ CMD transferred from the host 102.

Subsequently, the main memory system 190 may transmit a response (ACK) to the host 102(ACK WRITE RESULT) notifying whether the WRITE DATA WRITE _ DATA has been normally stored in the main storage area 1902.

As a result of checking the pattern of the WRITE DATA WRITE _ DATA corresponding to the WRITE command WRITE _ CMD in the main memory system 190 as described above, the main memory system 190 may process the WRITE command WRITE _ CMD by itself or transfer the WRITE command WRITE _ CMD to any one of the first and second memory systems 110 and 120 to allow the any one of the first and second memory systems 110 and 120 to process the WRITE command WRITE _ CMD, and then the operation S4 may END (S4 END).

In fig. 10, it is assumed that the second storage area 1202 included in the second memory system 120 is larger than the first storage area 1102 included in the first memory system 110. When the second storage area 1202 in the second memory system 120 is smaller than the first storage area 1102 in the first memory system 110 (different from what is shown in fig. 10), that is, when the second size unit for mapping the physical address of the second storage area 1202 to a logical address is smaller than the first size unit for mapping the physical address of the first storage area 1102 to a logical address, the operation may be performed in reverse to that shown in fig. 10. That is, the main memory system 190 may operate in the following manner: in order to store the WRITE DATA WRITE_DATA identified as the second sequential pattern having a size larger than the second reference size into the first storage area 1102, the main memory system 190 transfers the WRITE command WRITE_CMD corresponding to the WRITE DATA WRITE_DATA identified as the second sequential pattern to the first memory system 110 to allow the first memory system 110 to process the WRITE command WRITE_CMD; and in order to store the WRITE DATA WRITE_DATA identified as the first sequential pattern having a size smaller than the second reference size and larger than the first reference size into the second storage area 1202, the main memory system 190 transfers the WRITE command WRITE_CMD corresponding to the WRITE DATA WRITE_DATA identified as the first sequential pattern to the second memory system 120 to allow the second memory system 120 to process the WRITE command WRITE_CMD.

FIG. 11 illustrates another example of a command processing operation of data processing system 100 in accordance with an embodiment of the present disclosure.

Referring to FIG. 11, a read command processing operation among the command processing operations of data processing system 100 is described in detail. That is, the case where the input command IN_CMD in the command processing operation described above with reference to fig. 9B is the READ command READ_CMD is described in detail.

In detail, when the input command IN_CMD is the READ command READ_CMD, the logical address READ_LBA may be transferred from the host 102 to the main memory system 190 together with the READ command READ_CMD.

After the logical address READ _ LBA is transferred from the host 102 to the main memory system 190 together with the READ command READ _ CMD in this manner, operation S4 may be started. That is, after the logical address READ _ LBA is transferred from the host 102 to the main memory system 190 together with the READ command READ _ CMD, the main memory system 190 may analyze the READ command READ _ CMD, and may START an operation of selecting a processing location of the READ command READ _ CMD according to the analysis result (S4 START).

According to an embodiment, the operation of analyzing the READ command READ _ CMD in the main memory system 190 may include an operation of checking a value of a logical address READ _ LBA corresponding to the READ command READ _ CMD. For example, the main memory system 190 may check whether the value of the logical address READ _ LBA corresponding to the READ command READ _ CMD is a logical address managed in the internal mapping information included in the main memory system 190, a logical address managed in the internal mapping information included in the first memory system 110, or a logical address managed in the internal mapping information included in the second memory system 120.
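This check can be viewed as a range lookup over the logical-address ranges set during the initial operation period (described in more detail with reference to fig. 12 below): the value of READ_LBA falls into exactly one of the three ranges. The sketch below assumes Python range objects as stand-ins for those ranges; the names are illustrative.

```python
# Hedged sketch of the read routing check: route READ_CMD by the range that
# contains READ_LBA. The range objects and names are assumptions.

def route_read(read_lba, lbam_range, lba1_range, lba2_range):
    if read_lba in lbam_range:
        return "main"     # S9: processed by the main memory system 190
    if read_lba in lba1_range:
        return "first"    # S5: forwarded to the first memory system 110
    if read_lba in lba2_range:
        return "second"   # S7: forwarded to the second memory system 120
    raise ValueError("logical address outside the sum logical address ALL_LBA")
```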

In more detail, as a result of checking the value of the logical address READ_LBA corresponding to the READ command READ_CMD, when the value of the logical address READ_LBA is a logical address managed in the internal mapping information included in the first memory system 110, the main memory system 190 may perform operation S5, that is, the operation of transferring the READ command READ_CMD to the first memory system 110 to allow the first memory system 110 to process the READ command READ_CMD.

In detail, the main memory system 190 may transfer the READ command READ _ CMD and the logical address READ _ LBA transferred from the host 102 to the first memory system 110 as they are.

The first memory system 110 may READ DATA READ _ DATA in the first storage area 1102 in response to a READ command READ _ CMD transferred through the main memory system 190. In other words, the first memory system 110 may search for a physical address (not shown) mapped to a logical address READ _ LBA corresponding to the READ command READ _ CMD, and may READ DATA READ _ DATA in the first storage area 1102 by referring to the physical address found in the search.

The first memory system 110 may transfer the READ DATA READ _ DATA READ in the first storage area 1102 to the main memory system 190(ACK READ _ DATA).

The main memory system 190 may transmit READ DATA READ _ DATA received from the first memory system 110 to the host 102(ACK READ _ DATA).

As a result of checking the value of the logical address READ_LBA corresponding to the READ command READ_CMD, when the value of the logical address READ_LBA is a logical address managed in the internal mapping information included in the second memory system 120, the main memory system 190 may perform operation S7, that is, the operation of transferring the READ command READ_CMD to the second memory system 120 to allow the second memory system 120 to process the READ command READ_CMD.

In detail, the main memory system 190 may transfer the READ command READ _ CMD and the logical address READ _ LBA transferred from the host 102 to the second memory system 120 as they are.

The second memory system 120 may READ DATA READ _ DATA in the second memory area 1202 in response to a READ command READ _ CMD transferred through the main memory system 190. In other words, the second memory system 120 may search for a physical address (not shown) mapped to the logical address READ _ LBA corresponding to the READ command READ _ CMD, and may READ the READ DATA READ _ DATA in the second storage area 1202 by referring to the physical address found in the search.

The second memory system 120 may transfer the READ DATA READ _ DATA READ in the second storage area 1202 to the main memory system 190(ACK READ _ DATA).

The main memory system 190 may transmit the READ DATA READ_DATA received from the second memory system 120 to the host 102 (ACK READ_DATA).

Further, as a result of checking the value of the logical address READ _ LBA corresponding to the READ command READ _ CMD, when the value of the logical address READ _ LBA is a logical address managed in the internal mapping information included in the main memory system 190, the main memory system 190 may perform operation S9, that is, an operation of processing the READ command READ _ CMD by itself in the main memory system 190.

In detail, the main memory system 190 may READ DATA READ _ DATA in the main memory area 1902 in response to a READ command READ _ CMD transmitted from the host 102. In other words, the main memory system 190 may search for a physical address (not shown) mapped to the logical address READ _ LBA corresponding to the READ command READ _ CMD, and may READ the READ DATA READ _ DATA in the main memory area 1902 by referring to the physical address found in the search.

The main memory system 190 may transmit READ DATA READ _ DATA READ in the main memory area 1902 to the host 102(ACK READ _ DATA).

As a result of checking the value of the logical address READ _ LBA corresponding to the READ command READ _ CMD in the main memory system 190 as described above, the main memory system 190 may process the READ command READ _ CMD by itself, transfer the READ command READ _ CMD to the first memory system 110 to allow the first memory system 110 to process the READ command READ _ CMD, or transfer the READ command READ _ CMD to the second memory system 120 to allow the second memory system 120 to process the READ command READ _ CMD, and then the operation S4 may END (S4 END).

FIG. 12 illustrates operations in data processing system 100 to manage at least a plurality of memory systems based on logical addresses in accordance with an embodiment of the present disclosure.

Referring to fig. 12, the respective components included in the data processing system 100, i.e., the host 102, the main memory system 190, the first memory system 110, and the second memory system 120 manage logical addresses.

In detail, the main memory system 190 may generate and manage an internal mapping table LBAM/PBAM that maps a main physical address PBAM and a main logical address LBAM corresponding to the main memory area 1902 with each other.

The first memory system 110 may generate and manage internal mapping tables LBA1/PBA1 in which the first physical address PBA1 and the first logical address LBA1 corresponding to the first storage region 1102 are mapped to each other.

The second memory system 120 may generate and manage the internal mapping tables LBA2/PBA2 that map the second physical address PBA2 and the second logical address LBA2 corresponding to the second storage region 1202 to each other.

The host 102 may use a sum logical address ALL_LBA obtained by summing the main logical address LBAM, the first logical address LBA1, and the second logical address LBA2.

The size of the storage area in which data can be stored in each of the main memory system 190, the first memory system 110, and the second memory system 120 may be determined in advance in the process of manufacturing each of the main memory system 190, the first memory system 110, and the second memory system 120. For example, it may be determined during manufacturing of the memory systems 190, 110, and 120 that the size of the main storage area 1902 included in the main memory system 190 is 128 gigabytes, the size of the first storage area 1102 included in the first memory system 110 is 512 gigabytes, and the size of the second storage area 1202 included in the second memory system 120 is 1 terabyte. In order for the host 102 to normally read data from or write data to the main storage area 1902, the first storage area 1102, and the second storage area 1202 included in the main memory system 190, the first memory system 110, and the second memory system 120, respectively, the size of each of the main storage area 1902, the first storage area 1102, and the second storage area 1202 should be shared with the host 102. That is, the host 102 needs to know the size of each of the main storage area 1902, the first storage area 1102, and the second storage area 1202 included in the main memory system 190, the first memory system 110, and the second memory system 120, respectively. When the host 102 knows the size of each of the main storage area 1902, the first storage area 1102, and the second storage area 1202 included in the main memory system 190, the first memory system 110, and the second memory system 120, the host 102 also knows the range of the sum logical address ALL_LBA obtained by summing the range of the main logical address LBAM corresponding to the main storage area 1902, the range of the first logical address LBA1 corresponding to the first storage area 1102, and the range of the second logical address LBA2 corresponding to the second storage area 1202.

As described above with reference to fig. 9A, in the initial operation period (INIT), the main memory system 190 may set not only the mapping unit of the internal mapping information of the main memory system 190 as the reference size unit, but also the mapping units of the internal mapping information of the first and second memory systems 110 and 120 as the first and second size units, respectively. By doing so, the main memory system 190 can set not only the range of the main logical address LBAM corresponding to the main storage area 1902 included in the main memory system 190, but also the ranges of the first logical address LBA1 and the second logical address LBA2 corresponding to the first and second storage areas 1102 and 1202 included in the first and second memory systems 110 and 120, respectively. In other words, in the initial operation period (INIT), the main memory system 190 may set the range of the main logical address LBAM corresponding to the main storage area 1902, the range of the first logical address LBA1 corresponding to the first storage area 1102, and the range of the second logical address LBA2 corresponding to the second storage area 1202 differently from each other. When the range of the master logical address LBAM, the range of the first logical address LBA1, and the range of the second logical address LBA2 are different from each other, the range of the master logical address LBAM, the range of the first logical address LBA1, and the range of the second logical address LBA2 do not overlap and are continuous with each other. After setting the range of first logical addresses LBA1 corresponding to the first storage region 1102 in the initial operation period (INIT), the main memory system 190 may share the range of first logical addresses LBA1 with the first memory system 110. After setting the range of the second logical address LBA2 corresponding to the second storage region 1202, the main memory system 190 may share the range of the second logical address LBA2 with the second memory system 120. After setting the range of the main logical address LBAM corresponding to the main storage area 1902, the range of the first logical address LBA1 corresponding to the first storage area 1102, and the range of the second logical address LBA2 corresponding to the second storage area 1202 differently from each other in the initial operation period (INIT), the main memory system 190 may share the sum logical address ALL _ LBA, which is obtained by summing the range of the main logical address LBAM, the range of the first logical address LBA1, and the range of the second logical address LBA2, with the host 102.
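One way to realize the non-overlapping, contiguous layout described above is to place the three logical-address ranges back to back. The sketch below is an assumption-laden illustration: the 4-Kbyte logical sector size is chosen only to turn the example capacities into address counts, and the allocation order (LBAM, then LBA1, then LBA2) is one possible choice rather than a requirement of the disclosure.

```python
# Hedged sketch of partitioning the sum logical address ALL_LBA into the
# contiguous, non-overlapping ranges LBAM, LBA1, and LBA2.
# The 4-KB logical sector size and the allocation order are assumptions.

SECTOR = 4 * 1024
GB = 1024 ** 3

def allocate_ranges(main_cap, first_cap, second_cap, sector=SECTOR):
    m, f, s = main_cap // sector, first_cap // sector, second_cap // sector
    lbam = range(0, m)              # managed by the main memory system 190
    lba1 = range(m, m + f)          # shared with the first memory system 110
    lba2 = range(m + f, m + f + s)  # shared with the second memory system 120
    all_lba = range(0, m + f + s)   # shared with the host 102
    return lbam, lba1, lba2, all_lba

# Example capacities from the text: 128 GB, 512 GB, and 1 TB.
lbam, lba1, lba2, all_lba = allocate_ranges(128 * GB, 512 * GB, 1024 * GB)
```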

For reference, as shown in the drawing, after a Flash Translation Layer (FTL) 403, which may be logically included in a main controller 1303 included in the main memory system 190, sets the range of the main logical address LBAM, the range of the first logical address LBA1, and the range of the second logical address LBA2, the main memory system 190 may generate and manage an internal mapping table LBAM/PBAM in which the main physical address PBAM and the main logical address LBAM corresponding to the main storage area 1902 are mapped to each other. Likewise, as shown in the drawing, in the first memory system 110, a Flash Translation Layer (FTL) 401, which may be logically included in a first controller 1301 included in the first memory system 110, may generate and manage an internal mapping table LBA1/PBA1 in which the first logical address LBA1, whose range is set by the main memory system 190, and the first physical address PBA1 corresponding to the first storage region 1102 are mapped to each other. Further, as shown in the drawing, in the second memory system 120, a Flash Translation Layer (FTL) 402, which may be logically included in a second controller 1302 included in the second memory system 120, may generate and manage an internal mapping table LBA2/PBA2 in which the second logical address LBA2, whose range is set by the main memory system 190, and the second physical address PBA2 corresponding to the second storage region 1202 are mapped to each other.
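Within each system, the internal mapping table can then be kept by the respective FTL over its own range. The sketch below is a deliberately simplified, dictionary-based stand-in for such a table; a real flash translation layer would also handle block allocation, garbage collection, and persistence, none of which are shown, and all names are illustrative assumptions.

```python
# Hedged sketch of a per-system internal mapping table (e.g. LBA1/PBA1).
# The dictionary-based table and the sequential physical allocator are
# simplifying assumptions for illustration only.

class SimpleFtl:
    def __init__(self, lba_range, map_unit_bytes):
        self.lba_range = lba_range      # logical-address range set by the main memory system
        self.map_unit = map_unit_bytes  # first or second size unit
        self.l2p = {}                   # logical address -> physical address
        self._next_pba = 0

    def write(self, lba):
        assert lba in self.lba_range, "LBA not managed by this memory system"
        pba = self._next_pba            # pick a free physical area (simplified)
        self._next_pba += 1
        self.l2p[lba] = pba             # update the internal mapping table
        return pba                      # physical area where the data would be programmed

    def read(self, lba):
        return self.l2p[lba]            # physical address found in the search
```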

Fig. 13A and 13B illustrate examples of logical address based command processing operations in data processing system 100, according to embodiments of the present disclosure.

Referring to fig. 13A and 13B, a write command processing operation of the command processing operation based on the logical address in the data processing system 100 is described in detail. That is, in addition to the operation of processing the WRITE command WRITE _ CMD described above with reference to fig. 9B and 10, the operation of processing the WRITE command WRITE _ CMD based on the logical address is described in detail.

Referring to fig. 13A, the second storage area 1202 included in the second memory system 120 is larger than the first storage area 1102 included in the first memory system 110. In other words, the operations to be described with reference to fig. 13A are described in the following context: the second size unit, which is the mapping unit of the internal mapping information indicating the mapping relationship between the physical address and the logical address of the second storage area 1202, is larger than the first size unit, which is the mapping unit of the internal mapping information indicating the mapping relationship between the physical address and the logical address of the first storage area 1102 (yes in SD 1). Accordingly, in fig. 13A, in order to store the WRITE DATA WRITE _ DATA identified as the second sequential pattern having a size larger than the second reference size into the second storage area 1202, the main memory system 190 may transfer the WRITE command WRITE _ CMD corresponding to the WRITE DATA WRITE _ DATA identified as the second sequential pattern to the second memory system 120 to allow the second memory system 120 to process the WRITE command WRITE _ CMD. Further, in fig. 13A, in order to store the WRITE DATA WRITE _ DATA identified as the first sequential pattern having a size smaller than the second reference size and larger than the first reference size into the first storage area 1102, the main memory system 190 may transfer a WRITE command WRITE _ CMD corresponding to the WRITE DATA WRITE _ DATA identified as the first sequential pattern to the first memory system 110 to allow the first memory system 110 to process the WRITE command WRITE _ CMD. Further, in fig. 13A, in order to store the WRITE DATA WRITE _ DATA identified as a random pattern having a size smaller than the first reference size into the main storage area 1902, the main memory system 190 may process a WRITE command WRITE _ CMD corresponding to the WRITE DATA WRITE _ DATA identified as a random pattern by itself.

In detail, before comparing the second size unit with the first size unit (SD1), the main memory system 190 may check whether WRITE DATA WRITE _ DATA input from the host 102 together with the WRITE command WRITE _ CMD is a random pattern (SC 9). As a result of the operation of SC9, when WRITE DATA WRITE _ DATA input from the host 102 together with the WRITE command WRITE _ CMD is in a random mode ("random" in SC9), the main memory system 190 may store the WRITE DATA WRITE _ DATA in the main memory area 1902 in response to the WRITE command WRITE _ CMD input from the host 102 and the first input logical address (SC 8). In response to the WRITE command WRITE _ CMD, the main memory system 190 may map a specific physical address indicating a specific physical area capable of storing DATA in the main storage area 1902 to the first input logical address, and then may store the WRITE DATA WRITE _ DATA transferred from the host 102 into the specific physical area.

In a state where the second size unit is set to be larger than the first size unit (yes in SD1), the main memory system 190 may check what pattern the WRITE DATA WRITE _ DATA input from the host 102 together with the WRITE command WRITE _ CMD has (SD 2).

As a result of the operation of the SD2, when the WRITE DATA WRITE _ DATA input from the host 102 together with the WRITE command WRITE _ CMD is in the second sequential mode ("second order" in the SD2), the main memory system 190 may check the value of the first input logical address input together with the WRITE command WRITE _ CMD (SD 3).

As a result of the operation of the SD3, when the value of the first input logical address input together with the WRITE command WRITE _ CMD is included within the range of the second logical address LBA2 managed in the second memory system 120 ("included within LBA 2" in SD3), the main memory system 190 may transfer the WRITE command WRITE _ CMD, the first input logical address, and the WRITE DATA WRITE _ DATA input from the host 102 to the second memory system 120 so that the WRITE DATA WRITE _ DATA may be stored into the second storage area 1202 included in the second memory system 120 (SD 5). In response to the WRITE command WRITE _ CMD, the second memory system 120 may map a specific physical address indicating a specific physical area capable of storing DATA in the second storage area 1202 to the first input logical address, and then may store the WRITE DATA WRITE _ DATA transferred from the main memory system 190 into the specific physical area.

As a result of the operation of the SD3, when the value of the first input logical address input together with the WRITE command WRITE _ CMD is included in the range of the first logical address LBA1 managed in the first memory system 110 ("included in LBA 1" in SD3), the main memory system 190 may manage the first intermediate logical address included in the range of the second logical address LBA2 managed in the second memory system 120 as intermediate mapping information by mapping the first intermediate logical address to the first input logical address input from the host 102 (SD 6). The main memory system 190 may transfer the first intermediate logical address, the value of which is determined by the operation SD6, to the second memory system 120 together with the WRITE command WRITE _ CMD and the WRITE DATA WRITE _ DATA, so that the WRITE DATA WRITE _ DATA may be stored into the second storage area 1202 included in the second memory system 120 (SD 8). In response to the WRITE command WRITE _ CMD, the second memory system 120 may map a specific physical address indicating a specific physical area capable of storing DATA in the second storage area 1202 to the first intermediate logical address, and then may store the WRITE DATA WRITE _ DATA transferred from the main memory system 190 into the specific physical area.

As a result of the operation of the SD2, when the WRITE DATA WRITE _ DATA input from the host 102 together with the WRITE command WRITE _ CMD is in the first order mode ("first order" in the SD2), the main memory system 190 may check the value of the first input logical address input together with the WRITE command WRITE _ CMD (SD 4).

As a result of the operation of the SD4, when the value of the first input logical address input together with the WRITE command WRITE _ CMD is included in the range of the second logical address LBA2 managed in the second memory system 120 ("included in LBA 2" in SD4), the main memory system 190 may manage the second intermediate logical address included in the range of the first logical address LBA1 managed in the first memory system 110 as intermediate mapping information by mapping the second intermediate logical address to the first input logical address input from the host 102 (SD 7). The main memory system 190 may transfer the second intermediate logical address, the value of which is determined by the operation SD7, to the first memory system 110 together with the WRITE command WRITE _ CMD and the WRITE DATA WRITE _ DATA, so that the WRITE DATA WRITE _ DATA may be stored into the first storage area 1102 included in the first memory system 110 (SD 9). In response to the WRITE command WRITE _ CMD, the first memory system 110 may map a specific physical address indicating a specific physical area capable of storing DATA in the first storage area 1102 to the second intermediate logical address, and may then store the WRITE DATA WRITE _ DATA transferred from the main memory system 190 into the specific physical area.

As a result of the operation of the SD4, when the value of the first input logical address input together with the WRITE command WRITE _ CMD is included within the range of the first logical address LBA1 managed in the first memory system 110 ("included within LBA 1" in SD4), the main memory system 190 may transfer the WRITE command WRITE _ CMD, the first input logical address, and the WRITE DATA WRITE _ DATA input from the host 102 to the first memory system 110 so that the WRITE DATA WRITE _ DATA may be stored into the first storage region 1102 included in the first memory system 110 (SD 0). In response to the WRITE command WRITE _ CMD, the first memory system 110 may map a specific physical address indicating a specific physical area capable of storing DATA in the first storage area 1102 to the first input logical address, and then may store the WRITE DATA WRITE _ DATA transferred from the main memory system 190 into the specific physical area.

Briefly, even if the first input logical address input together with the WRITE command WRITE _ CMD is within the range of the second logical address LBA2 managed in the second memory system 120, when the WRITE DATA WRITE _ DATA is stored in the first storage region 1102 according to the pattern of the WRITE DATA WRITE _ DATA and the comparison of the first size unit and the second size unit, the main memory system 190 may generate and manage intermediate mapping information to map the first input logical address within the range of the second logical address LBA2 to the second intermediate logical address within the range of the first logical address LBA1 managed in the first memory system 110.

Also, even if the first input logical address input together with the WRITE command WRITE _ CMD is within the range of the first logical address LBA1 managed in the first memory system 110, when the WRITE DATA WRITE _ DATA is stored in the second storage area 1202 in the second memory system 120 according to the pattern of the WRITE DATA WRITE _ DATA and the comparison of the first size unit and the second size unit, the main memory system 190 may generate and manage intermediate mapping information to map the first input logical address within the range of the first logical address LBA1 to the first intermediate logical address within the range of the second logical address LBA2 managed in the second memory system 120.
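The write-side handling of fig. 13A can be condensed into one routine: the pattern decides the target storage region, and an intermediate logical address is allocated only when the first input logical address lies outside the target system's range. The sketch below assumes a caller-supplied allocator for intermediate addresses and dictionary-based intermediate mapping information; both are illustrative assumptions.

```python
# Hedged sketch of the fig. 13A write path (second size unit larger than the
# first size unit). The allocator for intermediate logical addresses and the
# dictionary used for intermediate mapping information are assumptions.

def route_sequential_write(first_input_lba, pattern,
                           lba1_range, lba2_range,
                           intermediate_map, alloc_in_range):
    """Return (target_system, logical_address_to_use)."""
    if pattern == "second_sequential":          # belongs in the second storage area 1202
        if first_input_lba in lba2_range:
            return "second", first_input_lba    # SD5: address passes through unchanged
        inter = alloc_in_range(lba2_range)      # SD6: first intermediate logical address
        intermediate_map[first_input_lba] = inter
        return "second", inter                  # SD8
    else:                                       # first sequential pattern
        if first_input_lba in lba1_range:
            return "first", first_input_lba     # SD0: address passes through unchanged
        inter = alloc_in_range(lba1_range)      # SD7: second intermediate logical address
        intermediate_map[first_input_lba] = inter
        return "first", inter                   # SD9
```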

Referring to fig. 13B, the second storage area 1202 included in the second memory system 120 is smaller than the first storage area 1102 included in the first memory system 110. In other words, the operations to be described with reference to fig. 13B are described in the following context: the second size unit, which is the mapping unit of the internal mapping information indicating the mapping relationship between the physical address and the logical address of the second storage area 1202, is smaller than the first size unit, which is the mapping unit of the internal mapping information indicating the mapping relationship between the physical address and the logical address of the first storage area 1102 (no in SD 1). Accordingly, in fig. 13B, in order to store the WRITE DATA WRITE _ DATA identified as the second sequential pattern having a size larger than the second reference size into the first storage area 1102, the main memory system 190 may transfer the WRITE command WRITE _ CMD corresponding to the WRITE DATA WRITE _ DATA identified as the second sequential pattern to the first memory system 110 to allow the first memory system 110 to process the WRITE command WRITE _ CMD. Further, in fig. 13B, in order to store the WRITE DATA WRITE _ DATA identified as the first sequential pattern having a size smaller than the second reference size and larger than the first reference size into the second storage area 1202, the main memory system 190 may transfer a WRITE command WRITE _ CMD corresponding to the WRITE DATA WRITE _ DATA identified as the first sequential pattern to the second memory system 120 to allow the second memory system 120 to process the WRITE command WRITE _ CMD. Further, in fig. 13B, in order to store the WRITE DATA WRITE _ DATA identified as a random pattern having a size smaller than the first reference size into the main storage area 1902, the main memory system 190 may process a WRITE command WRITE _ CMD corresponding to the WRITE DATA WRITE _ DATA identified as a random pattern by itself.

In detail, before comparing the second size unit with the first size unit (SD1), the main memory system 190 may check whether WRITE DATA WRITE _ DATA input from the host 102 together with the WRITE command WRITE _ CMD is a random pattern (SC 9). As a result of the operation of SC9, when WRITE DATA WRITE _ DATA input from the host 102 together with the WRITE command WRITE _ CMD is in a random mode ("random" in SC9), the main memory system 190 may store the WRITE DATA WRITE _ DATA in the main memory area 1902 in response to the WRITE command WRITE _ CMD input from the host 102 and the first input logical address (SC 8). In response to the WRITE command WRITE _ CMD, the main memory system 190 may map a specific physical address indicating a specific physical area capable of storing DATA in the main storage area 1902 to the first input logical address, and then may store the WRITE DATA WRITE _ DATA transferred from the host 102 into the specific physical area.

In a state where the second size unit is set smaller than the first size unit (no in SD1), the main memory system 190 may check which pattern the WRITE DATA WRITE _ DATA input from the host 102 together with the WRITE command WRITE _ CMD has (SE 2).

As a result of the operation of SE2, when the WRITE DATA WRITE _ DATA input from the host 102 together with the WRITE command WRITE _ CMD is in the first order mode ("first order" in SE2), the main memory system 190 may check the value of the first input logical address input together with the WRITE command WRITE _ CMD (SE 3).

As a result of the operation of SE3, when the value of the first input logical address input together with the WRITE command WRITE_CMD is included within the range of the second logical address LBA2 managed in the second memory system 120 ("included within LBA 2" in SE3), the main memory system 190 may transfer the WRITE command WRITE_CMD, the first input logical address, and the WRITE DATA WRITE_DATA input from the host 102 to the second memory system 120 so that the WRITE DATA WRITE_DATA may be stored into the second storage area 1202 included in the second memory system 120 (SE5). In response to the WRITE command WRITE_CMD, the second memory system 120 may map a specific physical address indicating a specific physical area capable of storing data in the second storage area 1202 to the first input logical address, and then may store the WRITE DATA WRITE_DATA transferred from the main memory system 190 into the specific physical area.

As a result of the operation of SE3, when the value of the first input logical address input together with the WRITE command WRITE _ CMD is included in the range of the first logical address LBA1 managed in the first memory system 110 ("included in LBA 1" in SE3), the main memory system 190 may manage a third intermediate logical address included in the range of the second logical address LBA2 managed in the second memory system 120 as intermediate mapping information by mapping the third intermediate logical address to the first input logical address input from the host 102 (SE 6). The main memory system 190 may transfer the third intermediate logical address, the value of which is determined by the operation SE6, to the second memory system 120 together with the WRITE command WRITE _ CMD and the WRITE DATA WRITE _ DATA, so that the WRITE DATA WRITE _ DATA may be stored into the second storage area 1202 included in the second memory system 120 (SE 8). In response to the WRITE command WRITE _ CMD, the second memory system 120 may map a specific physical address indicating a specific physical area capable of storing DATA in the second storage area 1202 to the third intermediate logical address, and then may store the WRITE DATA WRITE _ DATA transferred from the main memory system 190 into the specific physical area.

As a result of the operation of SE2, when the WRITE DATA WRITE _ DATA input together with the WRITE command WRITE _ CMD from the host 102 is in the second sequential mode ("second order" in SE2), the main memory system 190 may check the value of the first input logical address input together with the WRITE command WRITE _ CMD (SE 4).

As a result of the operation of SE4, when the value of the first input logical address input together with the WRITE command WRITE_CMD is included in the range of the second logical address LBA2 managed in the second memory system 120 ("included in LBA 2" in SE4), the main memory system 190 may manage a fourth intermediate logical address included in the range of the first logical address LBA1 managed in the first memory system 110 as intermediate mapping information by mapping the fourth intermediate logical address to the first input logical address input from the host 102 (SE7). The main memory system 190 may transfer the fourth intermediate logical address, the value of which is determined by the operation SE7, to the first memory system 110 together with the WRITE command WRITE_CMD and the WRITE DATA WRITE_DATA, so that the WRITE DATA WRITE_DATA may be stored into the first storage area 1102 included in the first memory system 110 (SE9). In response to the WRITE command WRITE_CMD, the first memory system 110 may map a specific physical address indicating a specific physical area capable of storing data in the first storage area 1102 to the fourth intermediate logical address, and may then store the WRITE DATA WRITE_DATA transferred from the main memory system 190 into the specific physical area.

As a result of the operation of SE4, when the value of the first input logical address input together with the WRITE command WRITE _ CMD is included within the range of the first logical address LBA1 managed in the first memory system 110 ("included within LBA 1" in SE4), the main memory system 190 may transfer the WRITE command WRITE _ CMD, the first input logical address, and the WRITE DATA WRITE _ DATA input from the host 102 to the first memory system 110 so that the WRITE DATA WRITE _ DATA may be stored into the first storage region 1102 included in the first memory system 110 (SE 0). In response to the WRITE command WRITE _ CMD, the first memory system 110 may map a specific physical address indicating a specific physical area capable of storing DATA in the first storage area 1102 to the first input logical address, and then may store the WRITE DATA WRITE _ DATA transferred from the main memory system 190 into the specific physical area.

Briefly, even if the first input logical address input together with the WRITE command WRITE _ CMD is within the range of the second logical address LBA2 managed in the second memory system 120, when the WRITE DATA WRITE _ DATA is stored in the first storage region 1102 according to the pattern of the WRITE DATA WRITE _ DATA and the comparison of the first size unit and the second size unit, the main memory system 190 may generate and manage intermediate mapping information to map the first input logical address within the range of the second logical address LBA2 to the third intermediate logical address within the range of the first logical address LBA1 managed in the first memory system 110.

Also, even if the first input logical address input together with the WRITE command WRITE _ CMD is within the range of the first logical address LBA1 managed in the first memory system 110, when the WRITE DATA WRITE _ DATA is stored in the second storage area 1202 included in the second memory system 120 according to the pattern of the WRITE DATA WRITE _ DATA and the comparison of the first size unit and the second size unit, the main memory system 190 may generate and manage intermediate mapping information to map the first input logical address within the range of the first logical address LBA1 to the fourth intermediate logical address within the range of the second logical address LBA2 managed in the second memory system 120.

FIG. 14 illustrates another example of a logical address based command processing operation in data processing system 100, according to an embodiment of the present disclosure.

Referring to FIG. 14, a read command processing operation of the logical address based command processing operation in data processing system 100 is described in detail. That is, in addition to the operation of processing the READ command READ _ CMD described above with reference to fig. 9B and 11, the operation of processing the READ command READ _ CMD based on the logical address is described in detail.

In detail, the main memory system 190 may check whether a second input logical address input from the host 102 together with the READ command READ _ CMD is detected in the intermediate mapping information (SF 0).

As described above with reference to fig. 13A and 13B, even if the first input logical address input together with the WRITE command WRITE _ CMD is included in the range of the second logical address LBA2 managed in the second memory system 120, when the WRITE DATA WRITE _ DATA is stored in the first storage region 1102 according to the pattern of the WRITE DATA WRITE _ DATA and the comparison result of the first size unit and the second size unit, the main memory system 190 may generate and manage intermediate mapping information to map the first input logical address included in the range of the second logical address LBA2 to an intermediate logical address included in the range of the first logical address LBA1 managed in the first memory system 110.

Also, even if the first input logical address input together with the WRITE command WRITE _ CMD is included in the range of the first logical address LBA1 managed in the first memory system 110, when the WRITE DATA WRITE _ DATA is stored in the second storage region 1202 included in the second memory system 120 according to the pattern of the WRITE DATA WRITE _ DATA and the comparison result of the first size unit and the second size unit, the main memory system 190 may generate and manage intermediate mapping information to map the first input logical address included in the range of the first logical address LBA1 to an intermediate logical address included in the range of the second logical address LBA2 managed in the second memory system 120.

When a logical address mapped to a second input logical address input from the host 102 together with the READ command READ _ CMD is not detected in the intermediate mapping information (no in SF0), a READ operation may be performed based on the second input logical address.

In contrast, when a fifth intermediate logical address mapped to the second input logical address input from the host 102 together with the READ command READ _ CMD is detected in the intermediate mapping information (in SF0, "fifth intermediate logical address detected"), the READ operation may be performed based on the fifth intermediate logical address detected in the intermediate mapping information.

As a result of the operation of SF0, when a logical address mapped to a second input logical address input from the host 102 together with the READ command READ _ CMD is not detected in the intermediate mapping information (no in SF0), the main memory system 190 may check the value of the second input logical address (SF 1).

As a result of the operation of SF1, when the value of the second input logical address input together with the READ command READ _ CMD is included in the range of the main logical address LBAM managed in the main memory system 190 ("included in LBAM" in SF1), the main memory system 190 may READ DATA READ _ DATA in the main memory area 1902 in response to the READ command READ _ CMD and the second input logical address (SF 2). The main memory system 190 may search the internal mapping information for a specific physical address mapped to the second input logical address in response to the READ command READ _ CMD, and may READ DATA READ _ DATA in a specific physical area indicated by the specific physical address in the main memory area 1902.

As a result of the operation of SF1, when the value of the second input logical address input together with the READ command READ _ CMD is included in the range of the second logical address LBA2 managed in the second memory system 120 ("included in LBA 2" in SF1), the main memory system 190 may transfer the READ command READ _ CMD and the second input logical address to the second memory system 120, thereby reading the READ DATA READ _ DATA in the second storage region 1202 (SF 5). The second memory system 120 may search for a specific physical address mapped to the second input logical address in the internal mapping information in response to the READ command READ _ CMD, and may READ the READ DATA READ _ DATA in a specific physical area indicated by the specific physical address in the second storage area 1202.

As a result of the operation of SF1, when the value of the second input logical address input together with the READ command READ _ CMD is included in the range of the first logical address LBA1 managed in the first memory system 110 ("included in LBA 1" in SF1), the main memory system 190 may transfer the READ command READ _ CMD and the second input logical address to the first memory system 110, thereby reading the READ DATA READ _ DATA in the first storage region 1102 (SF 3). The first memory system 110 may search for a specific physical address mapped to the second input logical address in the internal mapping information in response to the READ command READ _ CMD, and may READ DATA READ _ DATA in a specific physical area indicated by the specific physical address in the first storage area 1102.

As a result of the operation of the SF0, when a fifth intermediate logical address mapped to the second input logical address input from the host 102 together with the READ command READ _ CMD is detected in the intermediate mapping information ("fifth intermediate logical address detected" in SF0), the main memory system 190 may check the value of the fifth intermediate logical address (SF 4).

As a result of the operation of the SF4, when the value of the fifth intermediate logical address detected in the intermediate mapping information is included in the range of the second logical address LBA2 managed in the second memory system 120 ("included in LBA 2" in SF4), the main memory system 190 may transmit the READ command READ _ CMD and the fifth intermediate logical address to the second memory system 120, thereby reading the READ DATA READ _ DATA in the second storage area 1202 (SF 6). The second memory system 120 may search for a specific physical address mapped to the fifth intermediate logical address in the internal mapping information in response to the READ command READ _ CMD, and may READ the READ DATA READ _ DATA in a specific physical area indicated by the specific physical address in the second storage area 1202.

As a result of the operation of the SF4, when the value of the fifth intermediate logical address detected in the intermediate mapping information is included in the range of the first logical address LBA1 managed in the first memory system 110 ("included in LBA 1" in SF4), the main memory system 190 may transmit the READ command READ _ CMD and the fifth intermediate logical address to the first memory system 110, thereby reading the READ DATA READ _ DATA in the first storage area 1102 (SF 7). The first memory system 110 may search for a specific physical address mapped to the fifth intermediate logical address in the internal mapping information in response to the READ command READ _ CMD, and may READ DATA READ _ DATA in a specific physical area indicated by the specific physical address in the first storage area 1102.
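The read path of fig. 14 first consults the intermediate mapping information and then routes by whichever range contains the effective logical address. The sketch below merges operations SF0 through SF7 into one routine; the dictionary holding the intermediate mapping information and the range objects are the same illustrative assumptions used in the earlier sketches.

```python
# Hedged sketch of the fig. 14 read path. intermediate_map holds the
# intermediate mapping information built during writes; all names are
# illustrative assumptions.

def route_lba_read(second_input_lba, intermediate_map,
                   lbam_range, lba1_range, lba2_range):
    """Return (target_system, logical_address_to_use) for READ_CMD."""
    # SF0: substitute the intermediate logical address if one was recorded.
    effective = intermediate_map.get(second_input_lba, second_input_lba)
    if effective in lbam_range:
        return "main", effective     # SF2: read from the main storage area 1902
    if effective in lba1_range:
        return "first", effective    # SF3 / SF7: read from the first storage area 1102
    if effective in lba2_range:
        return "second", effective   # SF5 / SF6: read from the second storage area 1202
    raise ValueError("logical address outside the sum logical address ALL_LBA")
```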

As is apparent from the above description, according to the second embodiment of the present disclosure, when the main memory system 190, the first memory system 110, and the second memory system 120, which are physically separated from each other, are coupled to the host 102, the host 102 may use the physically separated main memory system 190, the first memory system 110, and the second memory system 120 logically similar to one memory system because a sum logical address ALL _ LBA obtained by summing up a main logical address LBAM corresponding to the main memory system 190, a first logical address LBA1 corresponding to the first memory system 110, and a second logical address LBA2 corresponding to the second memory system 120 is shared with the host 102.

In addition, when the main memory system 190, the first memory system 110, and the second memory system 120, which are physically separated from each other, are coupled to the host 102, the roles of the respective main memory system 190, first memory system 110, and second memory system 120 may be determined according to their coupling relationships with the host 102, and according to the determined roles, the size units and patterns of the data stored in the respective main memory system 190, first memory system 110, and second memory system 120 may be determined differently from each other. For example, random-pattern data having a relatively smaller (or smallest) size may be stored in the main memory system 190 coupled directly to the host 102, first sequential data of an intermediate size may be stored in the first memory system 110 coupled to the host 102 through the main memory system 190, and second sequential data of a relatively larger (or largest) size may be stored in the second memory system 120 coupled to the host 102 through the main memory system 190. By doing so, not only can the physically separate main memory system 190, first memory system 110, and second memory system 120 be used logically like one memory system, but also the data stored in the respective main memory system 190, first memory system 110, and second memory system 120 can be processed efficiently.

While various embodiments have been shown and described, it will be apparent to those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the following claims.
