Memory system and operating method thereof

Document No.: 1378463 · Publication date: 2020-08-14

Note: This technology, "Memory system and operating method thereof," was created by 吴用锡 on 2019-12-13. The present disclosure provides a memory system coupled to a plurality of hosts, each host including an FTL. The memory system may include: a controller adapted to allow a write request to be received only from any one of the plurality of hosts when a write lock for the write request from that host is set; and a memory device controlled by the controller and adapted to perform a write operation according to the write request from that host, wherein the controller includes: a lock manager adapted to set the write lock according to whether a lock is set in the memory device and to release the write lock when the write operation is completed; and a synchronization manager adapted to control synchronization of FTL metadata of the FTLs of the hosts other than that host according to whether the write operation is successfully performed.

1. A memory system, comprising:

a plurality of storage regions, each storage region accessible by a plurality of hosts;

a memory storing executable instructions for adapting a file system to constraints imposed by the plurality of storage regions; and

a processor in communication with the memory, wherein the executable instructions, when executed by the processor, cause the processor to:

receive a request for an operation from one of the plurality of hosts that is to access at least one of the plurality of storage regions;

upon determining that the at least one storage region to be accessed is not currently locked, set a lock on the at least one storage region to be accessed;

update file system metadata to match the file system metadata used by the one of the plurality of hosts accessing the at least one storage region; and

update a version value of the file system metadata associated with the at least one storage region.

2. The memory system of claim 1, wherein the executable instructions further comprise instructions that, when executed by the processor, cause the processor to:

queue another one of the plurality of hosts that requests an operation on the at least one storage region that the one of the plurality of hosts is accessing.

3. The memory system of claim 2, wherein the executable instructions further comprise instructions that, when executed by the processor, cause the processor to:

when the one of the plurality of hosts completes the operation, send a lock release signal to the another one of the plurality of hosts.

4. The memory system of claim 1, wherein the executable instructions further comprise instructions that, when executed by the processor, cause the processor to:

notify another one of the plurality of hosts that requests an operation on the at least one storage region that the one of the plurality of hosts is accessing of the updated file system metadata and the updated version value.

5. The memory system of claim 1, wherein the executable instructions further comprise instructions that, when executed by the processor, cause the processor to:

upon receiving the request for the operation from the one of the plurality of hosts, send a current version value of the file system metadata.

6. The memory system of claim 1, wherein the lock comprises at least one of a write lock and a read lock, the write lock and the read lock respectively permitting a write operation and a read operation on the at least one storage region.

7. The memory system of claim 1, wherein the file system metadata comprises metadata used by a Flash Translation Layer (FTL).

8. The memory system of claim 1, wherein the file system metadata comprises: an address mapping table storing a mapping between physical addresses and logical addresses associated with the requested operation.

9. The memory system of claim 8, wherein the file system metadata further comprises: block status information indicating whether the at least one storage region is available for a write operation.

10. The memory system of claim 1, wherein the file system metadata used by one of the plurality of hosts accessing the at least one storage region comprises an address mapping table for performing the requested operation.

11. A memory system, comprising:

a plurality of storage regions, each storage region accessible by a plurality of hosts;

a memory storing executable instructions for adapting a file system to constraints limited by the plurality of storage regions; and

a processor in communication with the memory, wherein the executable instructions, when executed by the processor, cause the processor to:

receive a request from one of the plurality of hosts to access at least one of the plurality of storage regions, the request comprising a lock request for the at least one storage region;

determine whether there is a conflict between the lock request and another lock currently set by another one of the plurality of hosts; and

upon determining that there is no conflict, set a lock on the at least one storage region.

12. The memory system of claim 11, wherein determining whether a conflict exists comprises:

determining whether the lock request by the one of the plurality of hosts and the another lock currently set by the another one of the plurality of hosts are both for a write operation on the at least one of the plurality of storage regions.

13. The memory system of claim 12, wherein the executable instructions further comprise instructions that, when executed by the processor, cause the processor to:

upon determining that the lock request by the one of the plurality of hosts and the another lock currently set by the another one of the plurality of hosts are both for a write operation, queue the write operation requested by the one of the plurality of hosts to prevent the one of the plurality of hosts from accessing the at least one storage region.

14. The memory system of claim 13, wherein the executable instructions further comprise instructions that, when executed by the processor, cause the processor to:

update file system metadata to match the file system metadata used by the another one of the plurality of hosts upon completion of the operation associated with the another lock set by the another one of the plurality of hosts; and

update a version value of the file system metadata associated with the another lock set by the another one of the plurality of hosts.

15. The memory system of claim 14, wherein the executable instructions further comprise instructions that, when executed by the processor, cause the processor to:

provide the updated file system metadata and the updated version value to the one of the plurality of hosts requesting an operation on the at least one storage region that the another one of the plurality of hosts is accessing.

16. The memory system of claim 12, wherein the executable instructions further comprise instructions that, when executed by the processor, cause the processor to:

upon completion of the operation associated with the another lock set by the another one of the plurality of hosts, send a lock release signal to the one of the plurality of hosts.

17. The memory system of claim 11, wherein determining whether a conflict exists comprises:

determining whether the lock request by the one of the plurality of hosts and the another lock currently set by the another one of the plurality of hosts are both for a read operation on the at least one of the plurality of storage regions.

18. The memory system of claim 17, wherein the executable instructions further comprise instructions that, when executed by the processor, cause the processor to:

cause the one of the plurality of hosts to access the at least one storage region.

19. The memory system of claim 11, wherein the file system metadata comprises metadata used by a Flash Translation Layer (FTL).

20. The memory system of claim 11, wherein the file system metadata comprises: an address mapping table storing a mapping between physical addresses and logical addresses associated with the requested operation.

21. A memory system coupled to a plurality of hosts, each host including a Flash Translation Layer (FTL), the memory system comprising:

a controller that, when a write lock is set for a write request from any one of the plurality of hosts, permits reception of the write request only from the any one host; and

a memory device controlled by the controller, the memory device performing a write operation according to the write request from the any one host,

wherein the controller comprises:

a lock manager that, when a write lock request is received, sets a write lock according to whether a lock is set in the memory device, and releases the write lock when the write operation is completed; and

a synchronization manager that controls synchronization of FTL metadata of the FTLs of the hosts other than the any one host according to whether the write operation is successfully performed.

22. The memory system according to claim 21, wherein the lock manager sets the write lock in a write area of the memory device corresponding to the write lock request when no write lock or read lock of another host, other than the any one host, is set in the corresponding write area.

23. The memory system according to claim 22, wherein the lock manager queues the write lock request or transmits a failure signal for the write lock request to the any one host when a write lock or a read lock of the another host is set in the corresponding write area.

24. The memory system of claim 23, wherein the lock manager queues the write lock request when an estimated time required until the write lock of the another host is released is less than a threshold.

25. The memory system of claim 21, wherein the controller only allows read requests to be received from one or more of the plurality of hosts when a read lock request for the read request is received from the one or more hosts.

26. The memory system according to claim 25, wherein when the read lock request is received, the lock manager sets a read lock according to whether a lock is set in the memory device, and when the memory device completes a read operation corresponding to the read request, the lock manager releases the read lock.

27. The memory system according to claim 26, wherein the lock manager sets the read lock in a read area of the memory device corresponding to the read lock request when no write lock of another host, other than the any one host, is set in the corresponding read area.

28. The memory system of claim 25, wherein when the write lock request is received, the controller receives, included in the write request, a write command, write data, and FTL metadata updated by the any one host.

29. The memory system of claim 28, wherein the synchronization manager reflects the updated FTL metadata into the memory system according to whether the write operation was successfully performed, and controls synchronization of FTL metadata of the other hosts by transmitting the updated FTL metadata to the other hosts.

30. The memory system of claim 29, wherein the synchronization manager controls synchronization of the FTL metadata by transmitting the updated FTL metadata according to a request of the other host.

31. The memory system of claim 21, wherein the controller transmits a write lock release signal to the other host depending on whether the write operation is complete.

32. The memory system of claim 21, wherein the controller allows write requests to be received only from the any one host and then receives from the any one host a write command, write data, and FTL metadata updated by the any one host, included in the write request.

33. A method of operation of a memory system coupled to a plurality of hosts, each host including an FTL, the method of operation comprising:

setting a write lock according to whether a lock is set in a memory device when a write lock request is received from any one of the plurality of hosts;

when the write lock is set, receiving a write request only from the any one host, and executing a write operation according to the write request;

releasing the write lock when the write operation is completed; and

controlling synchronization of FTL metadata of the FTLs of the hosts other than the any one host according to whether the write operation is successfully performed.

34. The operating method of claim 33, wherein setting the write lock according to whether a lock is set in the memory device when the write lock request is received from the any one of the plurality of hosts comprises:

when no write lock or read lock of another host, other than the any one host, is set in a write area of the memory device corresponding to the write lock request, setting the write lock in the corresponding write area.

35. The operating method of claim 34, wherein setting the write lock according to whether a lock is set in the memory device when the write lock request is received from the any one of the plurality of hosts further comprises:

queuing the write lock request or transmitting a failure signal of the write lock request to the any one host when a write lock or a read lock of the another host is set in the corresponding write area.

36. The operating method of claim 35, wherein queuing the write lock request or transmitting a failure signal of the write lock request to the any one host when a write lock or a read lock of the another host is set in the corresponding write area comprises:

queuing the write lock request when an estimated time required until a write lock of the other host is released is less than a threshold.

37. The method of operation of claim 33, further comprising:

setting a read lock according to whether a lock is set in the memory device when a read lock request is received from the any one host; and

releasing the read lock when the memory device completes a read operation corresponding to the read request.

38. The operating method of claim 37, wherein setting the read lock according to whether a lock is set in the memory device when the read lock request is received from the any one host comprises:

setting the read lock in a read area of the memory device corresponding to the read lock request when no write lock of another host, other than the any one host, is set in the corresponding read area.

39. The method of operation of claim 33, further comprising:

upon receiving the write lock request, receiving, included in the write request, a write command, write data, and FTL metadata updated by the any one host.

40. The method of operation of claim 39, wherein controlling synchronization of FTL metadata for FTLs of the other hosts other than the any one host, depending on whether the write operation was successfully performed, comprises:

reflecting the updated FTL metadata into the memory system according to whether the write operation was successfully performed; and

controlling synchronization of the FTL metadata of the other hosts by transmitting the updated FTL metadata to the other hosts.

41. The method of operation of claim 40, wherein controlling synchronization of FTL metadata for the other host by transmitting the updated FTL metadata to the other host comprises:

controlling synchronization of the FTL metadata by transmitting the updated FTL metadata according to requests of the other hosts.

42. The method of operation of claim 33, wherein releasing the write lock when the write operation is complete comprises:

transmitting a write lock release signal to the other hosts according to whether the write operation is completed.

43. The method of operation of claim 33, further comprising:

setting the write lock in response to the write lock request from the any one host, and then receiving from the any one host a write command, write data, and FTL metadata updated by the any one host, included in the write request.

Technical Field

Example embodiments relate to a memory system and a method of operating the same.

Background

Computer environment paradigms have turned to cloud computing and are evolving towards pervasive computing that enables computing systems to be used anytime and anywhere. The use of portable electronic devices such as mobile phones, digital cameras and laptop computers has increased rapidly. These portable electronic devices typically use a memory system having one or more memory devices to store data. Such a memory system may be used as a primary memory device or a secondary memory device of a portable electronic device.

A memory system using semiconductor memory devices provides advantages such as excellent stability, durability, high information access speed, and low power consumption, since it has no moving parts. Examples of memory systems having these advantages include Universal Serial Bus (USB) memory devices, memory cards having various interfaces, and Solid State Drives (SSDs).

Disclosure of Invention

Embodiments of the present document relate to a computing environment with improved performance characteristics. In some embodiments, the systems and methods of operation of the present disclosure are implemented to support synchronization of host metadata, such as Flash Translation Layer (FTL) metadata, to keep multiple hosts sharing a memory system updated for changes in the FTL associated with the memory system.

In an embodiment, a memory system is provided. The memory system may include: a plurality of storage regions, each storage region accessible by a plurality of hosts; a memory storing executable instructions for adapting a file system to a constraint limited by a plurality of storage regions; and a processor in communication with the memory, the executable instructions when executed by the processor causing the processor to: receiving a request for an operation from a host that is to access at least one of the plurality of storage areas; when determining that at least one storage area to be accessed is not locked currently, setting the locking of the at least one storage area to be accessed; updating file system metadata to match file system metadata used by one of the hosts accessing the at least one storage region; and updating a version value of file system metadata associated with the at least one storage region.

In an embodiment, a memory system is provided. The memory system may include: a plurality of storage regions, each storage region accessible by a plurality of hosts; a memory storing executable instructions for adapting a file system to a constraint limited by a plurality of storage regions; and a processor in communication with the memory, the executable instructions when executed by the processor causing the processor to: receiving a request from one of the hosts to access at least one of the plurality of storage areas, the request including a lock request for the at least one of the plurality of storage areas; determining whether a conflict exists between the lock request and another lock currently set by another one of the hosts; and upon determining that there is no conflict, setting a lock for at least one storage region.

In an embodiment, a memory system is provided that is coupled to a plurality of hosts, each host including a Flash Translation Layer (FTL). The memory system may include: a controller adapted to allow only a write request to be received from any one of the plurality of hosts when a write lock for the write request from the any one host is set; and a memory device controlled by the controller and adapted to perform a write operation according to a write request from the arbitrary one of the hosts, wherein the controller includes: a lock manager adapted to set a write lock according to whether a lock is set in the memory device when a write lock request is received, and to release the write lock when the write operation is completed; and a synchronization manager adapted to control synchronization of FTL metadata of FTLs of hosts other than the arbitrary one host according to whether the write operation is successfully performed.

In an embodiment, a method of operating a memory system coupled to a plurality of hosts, each host including an FTL, is provided. The operation method may include: setting a write lock according to whether a lock is set in the memory device when a write lock request is received from any one of the plurality of hosts; when the write lock is set, receiving a write request only from the arbitrary one of the hosts, and performing a write operation according to the write request; releasing the write lock when the write operation is complete; and controlling synchronization of FTL metadata of FTLs of hosts other than the arbitrary one host according to whether the write operation is successfully performed.
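As an illustration only, the operating method above can be modeled in a few lines of Python. The class and method names (`MemorySystem`, `acquire_write_lock`, `write`) are hypothetical and not part of the disclosure, and the write itself is stubbed:

```python
class MemorySystem:
    """Illustrative model of the claimed write-lock flow (hypothetical API)."""

    def __init__(self):
        self.lock_holder = None      # host currently holding the write lock
        self.metadata_version = 0    # version value of the FTL metadata

    def acquire_write_lock(self, host):
        # Set the write lock only if no lock is currently set.
        if self.lock_holder is None:
            self.lock_holder = host
            return True
        return False                 # conflicting lock: request fails or is queued

    def write(self, host, data):
        # Accept write requests only from the lock-holding host.
        if self.lock_holder != host:
            raise PermissionError("write lock not held by this host")
        success = True               # perform the write operation (stubbed)
        self.lock_holder = None      # release the write lock on completion
        if success:
            self.metadata_version += 1   # other hosts sync against this version
        return success
```

A second host calling `acquire_write_lock` while the lock is held simply gets `False`, mirroring the claim that only the lock-holding host may issue write requests.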

Drawings

FIG. 1 schematically illustrates an example of a computing environment 100 that provides multiple users access to a shared memory system.

FIG. 2 schematically illustrates an example of a computing environment 100 that includes an open channel memory system to support multiple ports.

FIG. 3 schematically illustrates the structure of a computing environment 100 in accordance with an embodiment.

Fig. 4A and 4B illustrate boot operations of a memory device and a plurality of hosts in communication with the memory device according to some embodiments.

Fig. 5A-5D illustrate how write operations for multiple hosts may be performed, according to some embodiments.

Fig. 6A and 6B illustrate read operations for multiple hosts in accordance with some embodiments.

FIG. 7 schematically illustrates data that may be included in a memory device and multiple hosts in accordance with some embodiments.

FIG. 8 schematically illustrates an example of a computing environment 100 in communication with a host for an open channel memory system that supports single root I/O virtualization (SR-IOV).

Fig. 9 schematically illustrates a user system implemented based on some embodiments of the disclosed technology.

Detailed Description

The techniques disclosed in this application document may be implemented in embodiments to provide an electronic system and method that, among other features and advantages, supports synchronization of host metadata for multiple hosts of a shared memory system.

FIG. 1 schematically illustrates an example of a computing environment 100 that provides multiple users access to a shared memory system.

In the computing environment 100, a host 110 communicates with a memory system 130. In some embodiments, the host 110 may use the memory system 130 as a data storage device.

Examples of the host 110 may include wireless electronic devices (e.g., portable electronic devices) such as mobile phones, MP3 players, and laptop computers, or wired electronic devices such as desktop computers, game machines, TVs, and projectors.

Memory system 130 may store data at the request of host 110. The memory system 130 may be used as a primary or secondary memory device for the host 110. Memory system 130 may include one or more types of storage devices. The computing environment 100 may also provide a host interface protocol for the host 110 to interface with other devices including storage devices.

Memory system 130 may include a memory device 170 and a controller 150 for controlling memory device 170. Memory device 170 may represent a data storage area that a host may access so that the host may store data temporarily or persistently in memory device 170.

Memory device 170 may include a non-volatile memory device such as a flash memory device. The flash memory may store data in memory cell transistors constituting a memory cell array. In some embodiments, the flash memory may be a NAND flash memory device. Flash memory may be organized into a hierarchy of dies, planes, blocks, and pages. In an embodiment, the flash memory may include multiple dies, and each die may receive one command at a time. Each die may include multiple planes, and the multiple planes may process commands received by the die in parallel. Each plane may include a plurality of blocks. In some embodiments, erase operations are performed on a block basis, and program (write) and read operations are performed on a page basis.
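For illustration, the die/plane/block/page hierarchy described above can be expressed as a flat physical page number decomposed into coordinates. The geometry below (2 dies, 2 planes, 256 blocks, 64 pages) is an assumed example, not a value from the disclosure:

```python
from collections import namedtuple

# Hypothetical geometry: 2 dies x 2 planes x 256 blocks x 64 pages per block.
GEOMETRY = dict(dies=2, planes=2, blocks=256, pages=64)

FlashAddr = namedtuple("FlashAddr", "die plane block page")

def split_physical_address(ppa: int) -> FlashAddr:
    """Decompose a flat physical page number into (die, plane, block, page)."""
    ppa, page = divmod(ppa, GEOMETRY["pages"])
    ppa, block = divmod(ppa, GEOMETRY["blocks"])
    die, plane = divmod(ppa, GEOMETRY["planes"])
    return FlashAddr(die, plane, block, page)
```

Under this layout, consecutive physical page numbers fill a block page by page, then advance to the next block, plane, and die, which is one common way such a hierarchy is linearized.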

Flash memory can provide high read speeds. However, because flash memory does not support in-place overwrites, an erase operation is required before a program operation in order to write data to the flash memory. When the host 110 uses the memory system 130 including flash memory, a file system controls how data is stored to and retrieved from the memory system 130. Hard disk drives have long been the dominant storage device, and thus file systems designed for hard disk drives are used as general-purpose file systems. A memory system having flash memory devices may utilize such a general-purpose file system, but that file system is not optimal for a number of reasons, such as erase-before-write behavior and wear leveling. For example, as described above, a flash memory block needs to be erased before it can be written to again, so a memory system with a flash memory device needs to maintain information about erased blocks, which a hard disk drive does not need. Thus, a Flash Translation Layer (FTL) may be used between the general-purpose file system and the flash memory. In some embodiments, when data stored in a block of a flash memory device is to be updated, the FTL writes a new copy of the changed data to a new block (e.g., an erased block) and remaps the address associated with the write operation. To write data to the flash memory, the FTL also maps logical addresses of the file system to physical addresses of the flash memory.
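The out-of-place update behavior described above can be sketched with a minimal logical-to-physical mapping table. `TinyFTL` is a hypothetical illustration; a real FTL tracks far more state (per-block validity counts, open blocks, garbage-collection candidates):

```python
class TinyFTL:
    """Minimal sketch of an FTL address map with out-of-place updates."""

    def __init__(self, num_pages):
        self.l2p = {}                         # logical page -> physical page
        self.free = list(range(num_pages))    # erased (writable) physical pages
        self.invalid = set()                  # stale pages awaiting erase (GC)

    def write(self, lpn):
        # Flash pages cannot be rewritten in place: take a fresh erased page,
        # remap the logical address, and mark the old copy invalid.
        new_ppn = self.free.pop(0)
        old_ppn = self.l2p.get(lpn)
        if old_ppn is not None:
            self.invalid.add(old_ppn)
        self.l2p[lpn] = new_ppn
        return new_ppn
```

Rewriting the same logical page lands on a new physical page, and the `invalid` set is exactly the erased-block bookkeeping that a hard disk drive does not need.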

Controller 150 may include a host interface 152, a processor 154, a memory 156, and a memory interface 158.

Host interface 152 may support interfacing between host 110 and memory system 130. For example, host 110 and memory system 130 may be electrically coupled through a port. For example, the host interface 152 may receive commands from the host 110 and transmit commands to the host 110 using an interface protocol such as PCI-E (peripheral component interconnect express), SAS (serial SCSI), or SATA (serial advanced technology attachment). The host interface 152 may support data input/output between the host 110 and the memory system 130.

The memory 156 may store data associated with the operation of the memory system 130. In some embodiments, the memory 156 may store executable instructions. The memory 156 may include a buffer or cache to store such data.

Memory interface 158 may support an interface between controller 150 and memory devices 170. When memory device 170 is a flash memory, memory interface 158 may generate control signals for controlling memory device 170 and transmit the generated control signals to memory device 170, and processor 154 serving as a flash controller may manage the flow of the control signals. The memory interface 158 may support data input/output between the controller 150 and the memory device 170.

The memory interface 158 may include an ECC encoder and an ECC decoder (not shown). The ECC encoder may add parity bits to the data to be programmed to the memory device 170, and the ECC decoder may use the parity bits to detect and correct one or more erroneous data bits read from the memory device 170.
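As a concrete, much simplified example of the parity-based correction an ECC encoder/decoder performs, a Hamming(7,4) code adds three parity bits to four data bits and can correct any single flipped bit. Production SSD controllers use far stronger codes (e.g., BCH or LDPC); this sketch only illustrates the principle:

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming codeword [p1 p2 d1 p3 d2 d3 d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1          # correct the single-bit error
    return [c[2], c[4], c[5], c[6]]
```

The syndrome computed from the parity bits points directly at the erroneous bit position, which is what "detect and correct one or more erroneous data bits" means in its simplest form.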

The processor 154 may control the overall operation of the memory system 130.

In an embodiment, the host interface 152 and the memory interface 158 may be implemented as software loaded into the memory 156 and executed by the processor 154. As another example, the host interface 152 and the memory interface 158 may be implemented as hardware devices such as Field Programmable Gate Arrays (FPGAs).

In some embodiments of the disclosed technology, the memory system 130 may be implemented in various platforms such as host-based Solid State Drives (SSDs) and open channel SSDs. It should be noted that in the context of this document, the term "open channel SSD" may refer to any data storage device shared by multiple hosts. With the open channel SSD, internal information such as information on a channel and a storage space is disclosed to the host 110, so that the host 110 can efficiently use resources of the open channel SSD. For example, the internal information may include information about a hierarchical structure such as a die, a plane, a block, and a page.

The host 110 using the open channel SSD as a storage device may include an FTL to access the memory device 170 by directly converting a logical address of a file system into a physical address of the memory device 170 based on the internal information. In this document, the metadata used by the FTL may be referred to as FTL metadata. The FTL metadata may include an address mapping table storing mappings between physical addresses and logical addresses, and block state information indicating whether each block is an open block.

In some embodiments, prior to writing data to a physical block address, the host 110 may control the memory device 170 of the open channel SSD 130 to temporarily store data in internal memory, based on the internal information and the FTL metadata, until the size of the write data reaches one page. Subsequently, the host 110 maps the logical address to a corresponding physical address and sends a write request to the open channel SSD.
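One way the page-sized buffering described above might look is sketched below; `PAGE_SIZE` and the `WriteBuffer` class are assumptions for illustration, not the patent's mechanism:

```python
PAGE_SIZE = 4096   # assumed flash page size in bytes

class WriteBuffer:
    """Accumulate writes until a full page can be issued (illustrative)."""

    def __init__(self):
        self.buf = bytearray()
        self.flushed = []            # full pages handed off as write requests

    def write(self, data: bytes):
        self.buf += data
        # Once a full page has accumulated, it can be mapped to a physical
        # address and sent to the device as a write request.
        while len(self.buf) >= PAGE_SIZE:
            page, self.buf = bytes(self.buf[:PAGE_SIZE]), self.buf[PAGE_SIZE:]
            self.flushed.append(page)
```

Small writes stay buffered; only page-aligned chunks ever become write requests, which avoids partial-page programs on the flash.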

As described above, the term "open channel SSD" may refer to any data storage device shared by multiple hosts. An example of an open channel SSD can include any memory system that exposes its FTL metadata to any host 110 in communication therewith.

The memory system 130 according to some embodiments may be an open channel memory system. A host 110 using the memory system 130 as a storage device may include a file system 112, an FTL 114, a memory 116, and a device driver 118. It should be noted that in the context of this document, the term "open channel memory system" may refer to any data storage device shared by multiple hosts.

As described above, the file system controls how data is stored to and retrieved from the memory system. As an example, file system 112 may manage data structures of an Operating System (OS). The file system 112 may specify the physical location of data to be stored in the memory system 130 based on the logical address.

As described above, the FTL 114 can generate address mapping information associated with the mapping between logical addresses and physical addresses based on the FTL metadata. The FTL 114 can translate logical addresses of the file system 112 to physical addresses of the memory system 130 based on the FTL metadata.

The FTL 114 can generate read commands and write commands to control foreground operations of the memory device 170.

The FTL 114 can perform background operations with respect to the memory device 170. The FTL 114 can perform garbage collection operations by copying data in valid pages of memory blocks into free blocks and erasing those memory blocks.

Semiconductor memory devices such as NAND flash memories wear out if data is written to the same address too frequently. The FTL 114 can implement wear leveling to ensure that data erasure and writing are evenly distributed across the storage medium. Further, the FTL 114 can perform address mapping with bad block management so that the host 110 does not access bad blocks.
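A minimal sketch of the wear-leveling and bad-block policy described above is to pick the least-erased block that is not marked bad. The function name and data shapes here are hypothetical:

```python
def pick_block_for_write(erase_counts, bad_blocks=()):
    """Choose the least-worn block that is not marked bad (illustrative).

    erase_counts: dict mapping block id -> number of erases so far.
    bad_blocks:   ids the FTL must never map (bad block management).
    """
    candidates = {b: n for b, n in erase_counts.items() if b not in bad_blocks}
    return min(candidates, key=candidates.get)
```

Always steering writes toward the least-erased healthy block evens out wear across the medium, and filtering `bad_blocks` keeps the host's address mapping away from unusable blocks.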

The memory 116 may store operational data of the host 110. For example, the memory 116 may store FTL metadata for the memory device 170 used for operation of the FTL 114.

The device driver 118 may control a memory system 130 coupled to the host 110. For example, the device driver 118 may communicate commands generated by the FTL114 to the memory system 130 to control read and write operations of the memory device 170. The device driver 118 may support data input/output between the host 110 and the memory system 130 using an interface protocol such as the host interface 152.

In an embodiment, the file system 112, the FTL114, and the device driver 118 can be loaded into and run by the memory 116 of the host 110 and a processor (not shown) of the host 110. As another example, the file system 112, the FTL114, and the device driver 118 may be implemented as hardware devices such as FPGAs.

FIG. 2 schematically illustrates an example of a computing environment 100 that includes an open channel memory system to support multiple ports.

The computing environment 100 may include a memory system 130 and multiple hosts. Memory system 130 may include a controller 150 and a memory device 170.

In an embodiment, memory system 130 may be an open channel memory system that supports a multi-port interface.

Since the memory system 130 supports multiple ports, each of multiple hosts can independently communicate through a respective port and share the resources of the memory system 130.

For example, the memory system 130 may support a dual port interface. As shown in fig. 2, the first host 110a and the second host 110b may communicate through one of the dual ports, respectively, and share resources of the memory system 130.

Since the memory system 130 is an open channel memory system, each of the first host 110a and the second host 110b can access a physical address of the memory device 170. In order for each of the first host 110a and the second host 110b to access the memory device 170 through address translation, the first host 110a may include a first FTL114a and the second host 110b may include a second FTL114b.

When the FTL of the first host 110a maps logical addresses to physical addresses to write data to the memory system 130, a portion of the FTL metadata may be updated to reflect the mapping. In this document, updating FTL metadata may include updating an address mapping table. In the event that the mapping of the first host 110a results in FTL metadata being updated, the second host 110b will end up with the wrong address mapping unless the updated FTL metadata is reflected in the internal FTL metadata of the second host 110 b.

In embodiments of the disclosed technology, the memory system 130 may control multiple hosts such that read requests or write requests from different hosts may be performed at different timings. In some embodiments of the disclosed technology, memory system 130 may maintain read locks and/or write locks to prevent multiple hosts from accessing memory system 130 simultaneously. For example, when a write lock request is received from a first host 110a of the plurality of hosts, the memory system 130 may set a write lock based on the lock state of the memory device 170, receive the write request from the first host 110a, and perform the write operation.

When the write lock is set, even if a write lock request or a read lock request is received from the second host 110b, the memory system 130 may block the write lock request or the read lock request until the write lock is released. When the write operation is complete, the memory system 130 may release the write lock and synchronize multiple hosts with the internal FTL metadata of the memory system 130. For example, when the write operation is complete and the memory system 130 releases the write lock associated with the first host 110a, the memory system 130 synchronizes the FTL metadata (e.g., address mapping table) of all other hosts with the updated metadata of the first host 110a (the updated metadata of the memory system 130).

In this way, the memory system 130 can prevent multiple hosts from performing write operations or both write and read operations simultaneously. In addition, the memory system 130 may prevent multiple hosts from having different metadata (e.g., different mapping tables) by synchronizing FTL metadata for the respective hosts. For example, the memory system 130 may prevent a host from reading undesired data from an erroneous address or performing a write operation to an erroneous storage area where data has been written, thereby improving the reliability of the computing environment 100.

FIG. 3 schematically illustrates the structure of a computing environment 100 in accordance with an embodiment.

The computing environment 100 may include a memory system 130 and a plurality of hosts coupled to the memory system 130. For convenience of description, fig. 3 shows that the memory system 130 communicates only with the first host 110a and the second host 110 b.

The first host 110a may include a first file system 112a, a first FTL114a, a first device driver 118a, and a first memory 116 a. The elements shown in fig. 3 may be the same as or similar to the elements shown in fig. 1. In this sense, the first file system 112a, the first FTL114a, the first memory 116a, and the first device driver 118a may correspond to the file system 112, the FTL114, the memory 116, and the device driver 118 shown in fig. 1, respectively.

The first memory 116a may store operation data of the first host 110 a. In particular, the first memory 116a may store FTL metadata for address translation of the first FTL114 a. The first file system 112a, the first FTL114a, and the first device driver 118a can be loaded to a first memory 116a and/or a processor (not shown) in the first host 110 a.

The second host 110b may include a second file system 112b, a second FTL114b, a second device driver 118b, and a second memory 116 b. As mentioned above, the elements shown in FIG. 3 may be the same as or similar to the elements shown in FIG. 1. In this sense, the second file system 112b, the second FTL114b, the second memory 116b, and the second device driver 118b may correspond to the file system 112, the FTL114, the memory 116, and the device driver 118 shown in fig. 1, respectively.

Memory system 130 may be an open channel memory system capable of supporting multiple ports. Memory system 130 may include a memory device 170 and a controller 150 for controlling memory device 170.

In an embodiment, controller 150 may include a host interface 152, a processor 154, a memory 156, a memory interface 158, a lock manager 160, and a synchronization manager 162. Host interface 152, processor 154, memory 156, and memory interface 158 may correspond to host interface 152, processor 154, memory 156, and memory interface 158 shown in fig. 1.

In an embodiment, memory device 170 may correspond to memory device 170 shown in fig. 1. Memory device 170 may store FTL metadata. The memory device 170 may store FTL metadata reflecting current address mapping information and block state information of the memory device 170. Memory device 170 may be a non-volatile memory device that can retain FTL metadata even when power is interrupted.

In an embodiment, lock manager 160 may set a write lock or a read lock in memory device 170 based on each host's lock request. When a write lock is set by a host, the controller 150 may allow only write requests of the corresponding host. When a certain host sets a read lock, the controller 150 may allow only a read request of the corresponding host.

In an embodiment, lock manager 160 may set only one write lock in the same storage area of memory device 170. That is, the lock manager 160 may control multiple write operations to be performed on the same storage area at different times.

In an embodiment, lock manager 160 may not set a read lock and a write lock in the same storage area at the same time. That is, the lock manager 160 may control that the write operation and the read operation are not performed on the same memory area at the same time.

In another embodiment, lock manager 160 may set two or more read locks in the same region at the same time. That is, lock manager 160 may allow multiple read operations to be performed on the same memory region at the same time.

In an embodiment, the storage area may correspond to the entire memory device 170. That is, lock manager 160 may control memory system 130 such that another write operation and another read operation are not performed in memory device 170 while a certain write operation is performed in memory device 170. As will be discussed below with reference to fig. 4A-6B, lock manager 160 may set a write lock or a read lock on the entire memory device 170 based on a lock request by a host.

In an embodiment, memory device 170 may include multiple storage areas. For example, when a storage area corresponds to one storage block of memory device 170 and a write operation is being performed on a certain storage block of memory device 170, lock manager 160 may control memory device 170 such that another write operation and another read operation cannot be performed on the same storage block of memory device 170, but another write operation and another read operation may be performed on another storage block of memory device 170. As will be discussed below with reference to FIG. 7, lock manager 160 may set a write lock or a read lock for each region of memory device 170.
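The per-region lock policy described above (at most one write lock per storage area, no read lock together with a write lock, multiple concurrent read locks allowed) can be sketched as a small reader/writer lock table. The class and method names are illustrative assumptions, not the structure of lock manager 160 itself.

```python
# Sketch of the per-region lock policy: one write lock per region,
# read locks and write locks mutually exclusive in a region, but
# multiple concurrent read locks allowed.

class RegionLockManager:
    def __init__(self):
        self.write_locks = {}   # region -> host holding the write lock
        self.read_locks = {}    # region -> set of hosts holding read locks

    def try_write_lock(self, region, host):
        if region in self.write_locks or self.read_locks.get(region):
            return False        # some lock is already set in this region
        self.write_locks[region] = host
        return True

    def try_read_lock(self, region, host):
        if region in self.write_locks:
            return False        # a write lock blocks new read locks
        self.read_locks.setdefault(region, set()).add(host)
        return True

    def release(self, region, host):
        if self.write_locks.get(region) == host:
            del self.write_locks[region]
        self.read_locks.get(region, set()).discard(host)

mgr = RegionLockManager()
assert mgr.try_write_lock(0, "host_a")        # write lock on region 0
assert not mgr.try_read_lock(0, "host_b")     # blocked by the write lock
assert mgr.try_write_lock(1, "host_b")        # a different region is free
mgr.release(0, "host_a")
assert mgr.try_read_lock(0, "host_a")
assert mgr.try_read_lock(0, "host_b")         # concurrent readers allowed
```

When the storage area corresponds to the entire memory device 170, the same logic applies with a single region.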

The synchronization manager 162 may control synchronization of FTL metadata of multiple hosts and the memory system 130. For example, when the write lock is released after the write operation of the memory system 130 in response to the write request of the first host 110a has been completed, the synchronization manager 162 may transmit a write lock release signal to the plurality of hosts. In some embodiments, the sync manager 162 may communicate the updated FTL metadata to the second host 110 b.

Fig. 4A and 4B illustrate boot operations of the memory system 130, the first host 110a, and the second host 110B, in accordance with some embodiments of the disclosed technology.

Fig. 4A is a flowchart illustrating a booting operation of the memory system 130, the first host 110a, and the second host 110b illustrated in fig. 3.

Fig. 4B schematically illustrates initial data that may be stored in the memory system 130, the first host 110a, and the second host 110B illustrated in fig. 3. Specifically, fig. 4B illustrates data that may be stored in the first memory 116a included in the first host 110a, the second memory 116B included in the second host 110B, the memory 156 included in the controller 150 of the memory system 130, and the memory device 170. Other components that may be included in the computing environment 100 are omitted from fig. 4B.

Referring to fig. 4A, when the first host 110a is powered on, in step S402, the internal system of the first host 110a is reset based on a command included in a boot loader stored in an internal Read Only Memory (ROM) (not shown). The first host 110a may check whether the communication between the first host 110a and the memory system 130 is successfully established.

In step S404, the first host 110a may receive a boot image from, for example, a non-volatile storage device of the memory system 130. The boot image may indicate a computer file that allows the associated hardware to boot. For example, the boot image may include commands and data for booting the first file system 112a, the first FTL114a, and the first device driver 118 a.

In step S406, a processor (not shown) of the first host 110a may run an Operating System (OS) based on a command included in the boot image. The OS may include the first file system 112a, the first FTL114a, and the first device driver 118a shown in fig. 3.

Similar to the operation of the first host 110a in steps S402 to S406, the second host 110b may reset the internal system, receive the boot image from the memory system 130, and run the OS including the second file system 112b, the second FTL114b, and the second device driver 118b based on the received boot image in steps S408 to S412.

In step S414, the memory system 130 may store host information associated with the host currently using the memory system 130 in the memory 156 based on the host information received from the first host 110a and the second host 110 b.

In step S416, the memory system 130 may operate internal components such as the lock manager 160 and the synchronization manager 162. For example, host interface 152, memory interface 158, lock manager 160, and synchronization manager 162 may be implemented as firmware, and firmware operations may be performed by processor 154.

In step S418, the memory system 130 may reset the version value of the FTL metadata stored in the memory device 170.

The version value of FTL metadata can be used to synchronize FTL metadata updated by respective hosts. Each host may compare the version value of FTL metadata stored in the internal memory with the version value stored in the memory 156 to check whether the FTL metadata of the host is the latest version. When the FTL metadata of the host is not the latest version, synchronization is required to update the FTL metadata to the latest version.
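The version comparison described above can be sketched as follows; the dictionary fields are illustrative assumptions standing in for the FTL metadata and its version value.

```python
# Sketch of version-based synchronization: a host updates its local FTL
# metadata only when its version value lags the one kept by the memory
# system. Field names are illustrative assumptions.

def synchronize(host_meta, system_meta):
    """Bring `host_meta` up to date with `system_meta` if its version lags.

    Each argument is a dict with "version" and "mapping" keys.
    Returns True when an update was applied.
    """
    if host_meta["version"] == system_meta["version"]:
        return False                      # already the latest version
    host_meta["mapping"] = dict(system_meta["mapping"])
    host_meta["version"] = system_meta["version"]
    return True

system = {"version": 2, "mapping": {10: 1}}
host = {"version": 1, "mapping": {}}
assert synchronize(host, system)          # host lagged, so it was updated
assert not synchronize(host, system)      # second call is a no-op
```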

As shown in fig. 4B, based on the host state information (e.g., "active" and "version 1") in the memory 156, the memory system 130 can track which of the first host 110a and the second host 110b are active, as well as the version value of the FTL metadata after the reset. The lock status may indicate whether a read lock or a write lock is set in the memory system 130.

As shown in fig. 4A, the memory system 130 may transmit FTL metadata and a version value of the FTL metadata stored in the memory device 170 to the first host 110a and the second host 110b in steps S420 and S422.

As shown in fig. 4B, the first memory 116a and the second memory 116B may store FTL metadata received from the memory system 130 and a version value of the FTL metadata.

As described below with reference to fig. 5A-7, FTL metadata of respective hosts in communication with the memory system 130 may be synchronized after performing the boot operation described with reference to fig. 4A and 4B.

Fig. 5A-5D illustrate how write operations of the first host 110a and the second host 110b are performed based on some embodiments of the disclosed technology.

As shown in fig. 5A, in step S502, when write data is generated in the first host 110a, the write buffer of the first memory 116a holds the write data before starting a write operation to the memory system 130. For convenience of description, the write data generated by the first host 110a will be referred to as first write data hereinafter.

In an embodiment, the host may perform a write operation based on a certain data size, so the host's write buffer holds the write data until the write data reaches that size. In another embodiment, the memory system 130 may have a write data buffer to temporarily store write data prior to a write operation to the main storage area. In step S504, when the size of the first write data in the write buffer reaches a size sufficient to perform a write operation, the first FTL114a may generate first address mapping information by mapping a logical address of the first write data to a physical address to which the first write data is to be written. In some embodiments, the data size for a write operation to the memory system 130 may be one page of the memory device 170. In other embodiments, the data size may be selected based on the number of blocks included in a super block, the number of pages that can be programmed by one-shot programming, and the size of one page.
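The write-buffer behavior described above can be sketched as follows. The page size and class names are illustrative assumptions; the sketch only shows that data is held back until a full write unit has accumulated.

```python
# Sketch of the host-side write buffer: data accumulates until it reaches
# the write-unit size, then a full unit is flushed as one write operation.

PAGE_SIZE = 8  # bytes per write unit; a toy value for illustration

class WriteBuffer:
    def __init__(self):
        self.buffer = b""
        self.flushed = []       # write units handed to the memory system

    def append(self, data):
        self.buffer += data
        while len(self.buffer) >= PAGE_SIZE:
            # enough data for one write operation: flush a full page
            self.flushed.append(self.buffer[:PAGE_SIZE])
            self.buffer = self.buffer[PAGE_SIZE:]

buf = WriteBuffer()
buf.append(b"abc")
assert buf.flushed == []                  # below the page size, held back
buf.append(b"defgh")
assert buf.flushed == [b"abcdefgh"]       # one full page written out
```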

In some embodiments, the first FTL114a may update the block status information when the open block is closed and the first write data is to be written to the new open block.

In step S506, the first FTL114a can transmit a first write lock request to the memory system 130 through the first device driver 118 a.

In step S508, the lock manager 160 of the memory system 130 may determine the lock status of the memory device 170 in response to the first write lock request. The lock manager 160 may access the memory 156 to determine the lock status.

When the determination result of step S508 indicates that either a read lock or a write lock has already been set, the memory system 130 may not set the write lock in response to the first write lock request until the existing lock is released. The operation performed when the memory system 130 receives a write lock request while a lock is already set will be described in detail with reference to steps S534 to S566.

When the determination result of step S508 indicates that no read lock or write lock is currently set, in step S510, lock manager 160 may set a write lock to perform the first write operation. Once the write lock is set in response to a write lock request from the first host 110a, the controller 150 may only allow the write request to be received from the first host 110 a.

In step S512, the lock manager 160 may transmit a write lock setting signal indicating that the write lock has been set to the first host 110a through the host interface 152.

In step S514, the first FTL114a can transmit a first write request to the memory system 130 through the first device driver 118a requesting the memory system 130 to write first write data to the memory device 170.

In an embodiment, the first write request may include a first write command, first write data, and first update FTL metadata. The first updated FTL metadata may indicate a portion of FTL metadata of the first host 110a that has been updated to write the first write data. The first update FTL metadata may include first address mapping information. For example, the first address mapping information may include a mapping table that has been updated by the first host 110a performing the first write request. When the block state information is updated, the first update FTL metadata can further include an updated portion of the block state information.

In step S516, the processor 154 may transmit the first write command, the physical address included in the first update FTL metadata, and the first write data to the memory device 170 through the memory interface 158.

As shown in fig. 5B, after the operations of steps S502 to S516, the write lock is set for the first host 110a, and the version value of the FTL metadata remains 1. Since the first write operation has not yet been performed, the FTL metadata of the memory system 130 has not been updated even though the first updated FTL metadata has been received. The internal FTL metadata of the first and second memories 116a and 116b also has a version value of 1.

In step S518, the memory device 170 may write the first write data to the physical address in response to the first write command.

When the first write operation is completed, the memory device 170 may transmit a write operation completion signal to the controller 150 through the memory interface 158 in step S520. Examples of the write operation completion signal may include a write operation success signal and a write operation failure signal.

When the controller 150 receives the write operation success signal, the synchronization manager 162 may reflect the first updated FTL metadata to the FTL metadata stored in the memory device 170 and update the version value of the FTL metadata stored in the memory 156.

As shown in fig. 5C, when step S518 is performed, the FTL metadata stored in the memory device 170 is updated and the version value of the FTL metadata is updated. In some embodiments, the version value of the FTL metadata stored in the first memory 116a remains unchanged at this stage.

In the event that the first write operation fails, the FTL metadata stored in the memory device 170 is not updated. Thus, when the first write operation fails, the version value of the FTL metadata remains unchanged.

In step S522, the lock manager 160 may release the write lock set by the first host 110 a.

In steps S524 and S526, the lock manager 160 may transmit a write lock release signal indicating that the write lock has been released to the first host 110a and the second host 110b.

In step S528, upon receiving the write lock release signal, the first host 110a may determine whether the version value of the FTL metadata stored in the first memory 116a matches the version value of the FTL metadata stored in the memory 156. When the version values do not match, the first host 110a may update the FTL metadata stored in the first memory 116a with the first updated FTL metadata and update its version value to match the version value stored in the memory 156.

In step S530, upon receiving the write lock release signal, the second host 110b may determine whether the version value of the FTL metadata stored in the second memory 116b matches the version value of the FTL metadata stored in the memory 156. When the version values do not match, in step S532 the second host 110b may receive the first updated FTL metadata from the memory system 130, update the FTL metadata stored in the second memory 116b by reflecting the first updated FTL metadata, and update its version value to match the version value stored in the memory 156.

Address mapping information in FTL metadata is updated frequently, and block state information is updated infrequently. In an embodiment, a version value of the address mapping information (e.g., a version of the address mapping table) and a version value of the block state information are maintained separately. In this case, the synchronization manager 162 may update the version value of the address mapping information and the version value of the block state information, respectively. The first host 110a and the second host 110b may compare version values of their address mapping information and block state information with the version values stored in the memory 156 and independently synchronize their address mapping information and block state information.

As shown in fig. 5D, after step S532 is completed, the write lock is released, the version value of the FTL metadata in the memory system 130 is updated, and the first updated FTL metadata causes the FTL metadata of the first and second memories 116a and 116b to be updated.

The second host 110b may generate second write data. In step S534, the write buffer of the second memory 116b holds the second write data before starting the write operation to the memory system 130. In step S536, the second FTL114b may perform mapping between the logical address of the second write data and the physical address to which the second write data is to be written, and generate second updated FTL metadata.

In step S538, the second FTL114b can transmit a second write lock request to the memory system 130 through the second device driver 118b.

In step S540, the lock manager 160 may determine the lock status of the memory device 170 stored in the memory 156 in response to the second write lock request. When the operation of step S538 is performed while the first host 110a and the memory system 130 are performing the operations of steps S510 to S522, the lock manager 160 may notify the second host 110b that a lock is set in the memory device 170. Thus, the second host 110b is not allowed to access the memory device 170.

In an embodiment, in step S542, the lock manager 160 may queue the second write lock request in the memory 156, and the second host 110b waits until the write lock is released.

After releasing the write lock in step S522 and transmitting signals to the respective hosts in steps S524, S526, and S532, the lock manager 160 may set the write lock in response to the queued second write lock request in step S544. Since the write lock is set in response to the write lock request from the second host 110b, the controller 150 may allow only the write request to be received from the second host 110 b.

In step S546, the lock manager 160 may transmit a write lock setup signal to the second host 110b through the host interface 152.

In step S548, the second host 110b may transmit the second write request to the memory system 130. In some embodiments, the second write request may include a second write command, second write data, and second update FTL metadata. The second updated FTL metadata may indicate the FTL metadata updated in step S536. Since the FTL metadata of the second host 110b is updated in step S530, the second FTL114b can regenerate the second updated FTL metadata by performing address mapping again, if necessary.

Steps S550 to S566 may correspond to steps S516 to S532. In short, in steps S550 to S566, the memory system 130 may write the second write data to the physical address included in the second update FTL metadata in response to the second write command, and may release the write lock when the write operation is completed. When the memory system 130 transmits a write lock release signal to each host, the hosts may synchronize the FTL metadata by comparing the version values of the FTL metadata.

In an embodiment, when the determination result of step S540 indicates that a lock is set in the memory system 130, the lock manager 160 may transmit a write lock failure signal to the second host 110b in step S542, instead of queuing the write lock request. The second host 110b may then retransmit the second write lock request.

In an embodiment, when the determination result of step S540 indicates that a lock is set in the memory system 130, the lock manager 160 may perform the operation of step S542 or may transmit a write lock failure signal according to a predetermined criterion.

For example, the lock manager 160 may perform the operation of step S542 when the estimated time required until the write lock is released is less than a threshold value, or transmit a write lock failure signal to the second host 110b when the estimated time is equal to or greater than the threshold value.

The lock manager 160 may estimate the write time based on the size of the write data and the time required for garbage collection. The lock manager 160 may then calculate the estimated remaining time based on the write time, the time at which the write operation of the memory system 130 began, and the current time.
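The queue-or-fail criterion described above can be sketched as follows. The function name, time units, and threshold are illustrative assumptions; the sketch only shows the comparison of the estimated remaining time against a threshold.

```python
# Sketch of the lock manager's criterion: queue a lock request when the
# estimated time until the current write lock is released is below a
# threshold, otherwise return a lock failure signal.

def handle_lock_request(write_time, write_start, now, threshold):
    """Return "queue" or "fail" for a lock request arriving while a write
    lock is held. `write_time` is the expected total duration of the
    in-flight write; `write_start` is when it began."""
    estimated_remaining = max(0, write_time - (now - write_start))
    return "queue" if estimated_remaining < threshold else "fail"

# Write started 8 ms ago and should take 10 ms: about 2 ms remain.
assert handle_lock_request(write_time=10, write_start=0, now=8, threshold=5) == "queue"
# Write just started: about 10 ms remain, above the 5 ms threshold.
assert handle_lock_request(write_time=10, write_start=0, now=0, threshold=5) == "fail"
```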

Fig. 6A and 6B illustrate read operations for multiple hosts in accordance with some embodiments.

As shown in fig. 6A, when the first file system 112a intends to read data stored in the memory system 130, the first FTL114a can convert a logical address of the data to be read into a physical address by referring to FTL metadata in step S602. For convenience of description, the physical address is hereinafter referred to as a first physical address.

In step S604, the first FTL114a can transmit a first read lock request to the memory system 130 through the first device driver 118 a.

In step S606, lock manager 160 may determine whether a lock is currently set in memory system 130 based on the lock state of memory 156.

When the determination result of step S606 indicates that the write lock is set in the memory system 130, the memory system 130 may not allow the read operation of the first host 110a until the write lock is released.

In an embodiment, lock manager 160 may transmit a read lock failure signal to the host when a write lock is set in memory system 130. In another embodiment, lock manager 160 may queue the first read lock request when a write lock is set in memory system 130. In yet another embodiment, the lock manager 160 may selectively perform the operation of transmitting the read lock failure signal and the operation of queuing the first read lock request according to a predetermined criterion. For example, the predetermined criteria may include whether an estimated time required to release the write lock is less than a threshold.

On the other hand, when the determination result of step S606 indicates that the read lock is set or the lock is not set, the memory system 130 may set the read lock of the first host 110a in response to the first read lock request in step S608. When the read lock is set, the controller 150 may allow only the reception of the read request from the first host 110 a.

In step S610, the lock manager 160 may transmit a read lock setting signal indicating that the read lock is set to the first host 110a through the host interface 152.

In step S612, the first FTL114a can transmit a first read request requesting the memory system 130 to read data stored in a first physical address to the memory system 130 through the first device driver 118 a. The first read request may include a first read command and a first physical address.

In step S614, the processor 154 may transmit the first read command and the first physical address to the memory device 170 through the memory interface 158.

In step S616, the memory device 170 may read first read data stored in the first physical address in response to the first read command.

When the memory device 170 successfully reads the first read data in step S616, the memory device 170 may transfer the first read data to the controller 150 in step S618.

In step S620, the memory interface 158 may buffer or temporarily store the first read data in the memory 156.

In step S622, the processor 154 may transmit the first read data to the first host 110 a.

In step S624, the lock manager 160 may release the read lock set by the first host 110 a.

When the memory device 170 fails to read the first read data in step S616, the processor 154 may transmit a fail signal to the first host 110a, and the lock manager 160 may release the read lock set by the first host 110a in step S624.

Since multiple hosts communicate with the same memory system, each host may generate its own read request while the memory system is executing another read request from another host. For example, the second host 110b may generate the second read request while the first read request of the first host 110a is being executed. In step S626, the second FTL114b may translate the logical address into the physical address by referring to the FTL metadata. For convenience of description, the physical address will be referred to as a second physical address hereinafter.

In step S628, the second device driver 118b may transmit a second read lock request to the memory system 130.

In step S630, lock manager 160 may determine whether a lock is currently set in memory system 130 based on the lock state of memory 156.

Suppose the second read lock request is received while the read lock of the first host 110a is set, that is, while the operations of steps S608 to S624 are being performed. Since the FTL metadata is not changed by a read operation, the second read command operation may be performed while the first read command operation is performed.

Thus, in step S632, the lock manager 160 may set a read lock of the second host 110 b. In step S632, since the read locks of the first host 110a and the second host 110b are set, the controller 150 may allow only the read requests to be received from the first host 110a and the second host 110 b.

As shown in fig. 6B, after the operation of step S632 is performed, the read locks of the first host 110a and the second host 110B are set.

In step S634, the lock manager 160 may transmit a read lock setting signal to the second host 110 b.

In step S636, the second FTL114b can transmit the second read request to the memory system 130.

In some embodiments of the disclosed technology, steps S638-S648 may correspond to steps S614-S624. In an embodiment, in steps S638 to S648, the processor 154 may transmit the second read command and the second physical address to the memory device 170, and the memory device 170 may read the second read data stored in the second physical address in response to the second read command and transmit the second read data to the controller 150 when the read operation is successfully completed. Memory interface 158 may buffer or temporarily store the second read data in memory 156. When the processor 154 transfers the second read data to the second host 110b, the lock manager 160 may release the read lock set by the second host 110 b.

For convenience of description, the memory device 170 is illustrated as performing the second read operation of step S640 after the first read operation of step S616 is completed; however, the memory device 170 may perform the first and second read operations simultaneously. For example, when the first physical address and the second physical address belong to different dies, the memory device 170 may perform the first read operation and the second read operation in parallel.

Some embodiments of the disclosed technology may provide optimized performance even in cases where more than one host attempts to access the same physical address. In the example discussed above, when the first read command operation is completed, if the second physical address to be accessed by the queued second read command is the same as the first physical address, the first read data read and buffered through the memory interface 158 based on the first read command may be transferred to the second host 110b through the host interface 152.
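The buffered-read optimization above can be sketched in a few lines. This is an illustrative model under stated assumptions (the class and the device callable are hypothetical, not part of the disclosure): data already read and buffered for one host is served to a later request for the same physical address without accessing the memory device again.

```python
class ReadBuffer:
    """Caches the most recent reads so a queued request for the same
    physical address can be served without touching the memory device."""

    def __init__(self, device):
        self.device = device   # callable: physical_address -> data
        self.cache = {}        # physical_address -> buffered data

    def read(self, physical_address):
        if physical_address in self.cache:
            return self.cache[physical_address]   # serve from the buffer
        data = self.device(physical_address)      # fall through to the device
        self.cache[physical_address] = data
        return data
```

In the example above, the second read command for the first physical address would be served from the buffer filled by the first read command, so the device is accessed only once.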

By way of example and not limitation, fig. 5A and 6A illustrate that each host transmits a read lock request or a write lock request to memory system 130, and then transmits a read command or a write command to memory system 130 after receiving a lock set signal from memory system 130 indicating that a lock has been set. In other embodiments, the host may transmit a read command or a write command to the memory system 130 regardless of the lock set signal, so long as the priority queues are managed in an orderly manner.

In an embodiment, each host may transmit a write command including a write lock request to memory system 130. Similarly, each host may communicate a read command including a read lock request to the memory system 130.

For example, when the host transmits a write command to the memory system 130 along with FTL metadata and write data, the memory system 130 may check the lock status. When the lock is not set at that time, the memory system 130 may set a write lock and perform a write operation. The memory system 130 may check the lock status in response to the write command and transmit a write lock failure signal to the host when there is a lock set by another host.
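The one-round-trip variant described in this paragraph, where the write command carries its own write-lock request, can be sketched as follows. This is a hypothetical illustration (function and return values are not from the disclosure): the lock status is checked in response to the write command itself, and a failure signal is returned when another host already holds a lock.

```python
def handle_write_command(lock_state, host_id):
    """Write command with an embedded write-lock request.
    `lock_state` is a dict with a single 'holder' entry."""
    if lock_state["holder"] is not None:
        return "WRITE_LOCK_FAIL"    # a lock set by another host exists
    lock_state["holder"] = host_id  # set the write lock
    # ... the write operation would be performed here ...
    lock_state["holder"] = None     # release the lock once the write completes
    return "WRITE_DONE"
```

This trades one round trip for the possibility of a late failure signal, which is why the hosts must still manage their priority queues in an orderly manner.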

FIG. 7 schematically illustrates data that may be included in a memory device and multiple hosts in accordance with some embodiments.

In the computing environment 100, the first host 110a and the second host 110b may communicate with the memory system 130 and use the memory system 130. The first host 110a and the second host 110b may include file systems 112a and 112b, FTLs 114a and 114b, memories 116a and 116b, and device drivers 118a and 118b, respectively. Memory system 130 may include a controller 150 and a memory device 170. Controller 150 may include a host interface 152, a processor 154, a memory 156, a memory interface 158, a lock manager 160, and a synchronization manager 162.

Memory device 170 may include a plurality of memory regions. For example, a memory region may include one or more blocks or one or more pages.

In an embodiment, lock manager 160 may set or release locks for each storage region of memory system 130. The memory 156 may store read lock states and write lock states for respective memory regions. FIG. 7 shows an example of a memory 156 for storing a table including an index field indicating a storage region and a lock status field indicating a read lock status and a write lock status of the respective storage region.
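The per-region lock table of FIG. 7 can be modeled as follows. This sketch is illustrative only (region indices and field names are hypothetical): each entry keeps a region's read-lock and write-lock state, and a write lock is granted only when the region holds no lock of either kind.

```python
# Per-region lock table mirroring the table of FIG. 7: an index field
# identifying the storage region, and the region's lock states.
lock_table = {
    0: {"readers": set(), "writer": None},
    1: {"readers": set(), "writer": None},
}

def set_write_lock(region, host_id):
    """Grant a write lock on `region` only if it is entirely unlocked."""
    entry = lock_table[region]
    if entry["writer"] is None and not entry["readers"]:
        entry["writer"] = host_id
        return True
    return False   # the region is already locked
```

Because the table is keyed per region, a lock on one block does not prevent another host from locking a different block.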

The synchronization manager 162 may synchronize FTL metadata of the memory system 130. In an embodiment, the synchronization manager 162 may correspond to the synchronization manager 162 shown in fig. 3. For example, when one storage area corresponds to one block, the first host 110a may generate first updated FTL metadata in order to write first write data to the memory system 130. When the physical address included in the first updated FTL metadata is an address within the first block, the first host 110a may transmit a first write lock request for accessing the first block of the memory device 170.

When neither a read lock nor a write lock is set on the first block, the lock manager 160 may set a write lock in response to the first write lock request.

In some embodiments, the FTL metadata of the first block is not updated even when the second host sets a write lock on the second block. Thus, when the lock manager 160 determines that no lock is set on the first block of the memory device 170, the lock manager 160 may set a write lock of the first host on the first block based on the first write lock request.

After the write operation to the second block is successfully completed, the write lock is released and the synchronization manager 162 updates the version value of the FTL metadata and transmits a write lock release signal to the plurality of hosts. In response to the write lock release signal, the first and second hosts 110a and 110b may determine whether the version values of the FTL metadata stored in the first and second memories 116a and 116b are consistent with the version values of the FTL metadata stored in the memory 156 and synchronize the FTL metadata.
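The version-based synchronization step can be sketched as below. This is a minimal illustration (the function name and dict-shaped metadata are assumptions, not the disclosed format): on receiving a write lock release signal, a host compares its stored version value with the memory system's and copies the system's FTL metadata only when its own copy is stale.

```python
def sync_ftl_metadata(host_meta, host_version, system_meta, system_version):
    """Return the (metadata, version) a host should keep after a
    write lock release signal: its own copy if consistent, otherwise
    a copy of the memory system's newer metadata."""
    if host_version == system_version:
        return host_meta, host_version        # already consistent
    return dict(system_meta), system_version  # pull the system's copy
```

Comparing a single version value first avoids transferring the full metadata when the host is already in sync.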

FIG. 8 schematically illustrates an example of the computing environment 100 in which a host communicates with an open channel memory system that supports single root I/O virtualization (SR-IOV).

In an embodiment, memory system 130 may be an open channel memory system that supports SR-IOV.

Memory system 130 may include a controller 150 and a memory device 170. Controller 150 may include a host interface 152, a processor 154, a memory 156, a memory interface 158, a lock manager 160, and a synchronization manager 162.

One or more hosts 110 may communicate with memory system 130 and use memory system 130. Host 110 may use multiple virtual machines. Because memory system 130 supports SR-IOV, multiple virtual machines within host 110 can independently communicate commands to memory system 130 and communicate data with memory system 130, even though host 110 communicates through a single physical port.

Since the memory system 130 is an open channel memory system, multiple virtual machines can independently perform FTL operations to access the memory system 130 using physical addresses. In some embodiments, each of the multiple virtual machines may use its own file system, FTL, and device driver.

In some embodiments, host 110 may operate a first virtual machine 810a and a second virtual machine 810b. The first virtual machine 810a and the second virtual machine 810b may be loaded into a memory (not shown) of the host 110 and executed by a processor (not shown) within the host 110.

A first virtual machine 810a may perform operations associated with a first file system 812a, a first FTL 814a, and a first device driver 818a, and a second virtual machine 810b may perform operations associated with a second file system 812b, a second FTL 814b, and a second device driver 818b.

First virtual machine 810a and second virtual machine 810b may perform operations provided by the first FTL 814a and the second FTL 814b, respectively, to directly translate logical addresses of a file system into physical addresses for accessing the memory system 130. A memory (not shown) within host 110 may store operational data for the first FTL 814a and the second FTL 814b. For ease of description, fig. 8 illustrates that a first memory 816a stores FTL metadata of the first FTL 814a and a second memory 816b stores FTL metadata of the second FTL 814b. In other embodiments, the first memory 816a and the second memory 816b may be the same memory device.

Since multiple virtual machines independently write data to the memory system 130 and update internal FTL metadata, the virtual machines may cause errors when the internal FTL metadata of the respective virtual machines are not synchronized with each other. For example, the virtual machine may cause an error by reading undesired data from the wrong address or by performing a write operation to a storage area to which data has been written.

In an embodiment, when a write lock request is received from a first virtual machine 810a among the plurality of virtual machines, the memory system 130 may set a write lock based on the lock state of the memory device 170, receive a write command, write data, and updated FTL metadata from the first virtual machine 810a, and perform a write operation.

When the write operation is complete, the memory system 130 may release the write lock and synchronize the FTL metadata of the multiple virtual machines and the memory system 130. In this way, the memory system 130 can prevent the virtual machine from reading undesired data from an erroneous address or performing a write operation on a storage area to which data has been written. Thus, the reliability of the computing environment 100 may be improved.

Fig. 9 schematically illustrates a user system implemented based on some embodiments of the disclosed technology.

Referring to fig. 9, the user system 6900 may include user interfaces 6910a and 6910b, memory modules 6920a and 6920b, application processors 6930a and 6930b, network modules 6940a and 6940b, and a storage module 6950.

The storage module 6950 may store data, such as data received from the application processors 6930a and 6930b, and may then transfer the stored data to the application processors 6930a and 6930b. The storage module 6950 may be implemented by a nonvolatile semiconductor memory device such as a phase-change RAM (PRAM), a magnetic RAM (MRAM), a resistive RAM (ReRAM), a NAND flash memory, a NOR flash memory, or a 3D NAND flash memory, and may be provided as a removable storage medium such as a memory card or an external drive of the user system 6900.

The storage module 6950 may correspond to the memory system 130 described with reference to figs. 1 to 8. The storage module 6950 implemented in accordance with some embodiments may set a write lock based on whether a lock is set in the storage module 6950; when a first write lock request is received from the application processor 6930a, perform a write operation in the write lock state using a write command, write data, and updated FTL metadata received from the application processor 6930a; release the write lock when the write operation is completed; and control the plurality of application processors 6930a and 6930b to synchronize with the internal FTL metadata of the storage module 6950.

The plurality of application processors 6930a and 6930b may correspond to the first host 110a and the second host 110b described with reference to figs. 1 to 8. More specifically, the application processors 6930a and 6930b may execute components included in the user system 6900, such as an operating system (OS), and may include controllers, interfaces, and graphics engines that control the components included in the user system 6900. The application processors 6930a and 6930b may be configured as a system on chip (SoC).

Memory modules 6920a and 6920b may serve as a main memory, working memory, buffer memory, or cache memory of the user system 6900. The memory modules 6920a and 6920b may include a volatile random access memory (RAM) such as a dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate (DDR) SDRAM, DDR2 SDRAM, DDR3 SDRAM, LPDDR SDRAM, LPDDR2 SDRAM, or LPDDR3 SDRAM, or a nonvolatile RAM such as a phase-change RAM (PRAM), resistive RAM (ReRAM), magnetoresistive RAM (MRAM), or ferroelectric RAM (FRAM). For example, the application processor 6930a and the memory module 6920a, and the application processor 6930b and the memory module 6920b, may be packaged and mounted based on package-on-package (PoP).

Network modules 6940a and 6940b may communicate with external devices. For example, the network modules 6940a and 6940b may support wired or wireless communication. Examples of wireless communication schemes include code division multiple access (CDMA), global system for mobile communications (GSM), wideband CDMA (WCDMA), CDMA-2000, time division multiple access (TDMA), long term evolution (LTE), worldwide interoperability for microwave access (WiMAX), wireless local area network (WLAN), ultra-wideband (UWB), Bluetooth, and wireless display (WiDi), which allow communication with wired/wireless electronic devices, in particular mobile electronic devices. Accordingly, the memory system, the host system, and the computing environment based on the embodiments of the disclosed technology may be applied to wired/wireless electronic devices. The network module 6940a may be included in the application processor 6930a, and the network module 6940b may be included in the application processor 6930b.

The user interface 6910a may include an interface for communicating data and/or commands between the application processor 6930a and external devices, and the user interface 6910b may include an interface for communicating data and/or commands between the application processor 6930b and external devices. For example, each of the user interfaces 6910a and 6910b may include user input interfaces such as a keyboard, keypad, buttons, touch panel, touch screen, touch pad, touch ball, camera, microphone, gyro sensor, vibration sensor, and piezoelectric element, and user output interfaces such as a liquid crystal display (LCD), organic light emitting diode (OLED) display device, active matrix OLED (AMOLED) display device, LED, speaker, and monitor.

In some embodiments of the disclosed technology, a memory system may include: a plurality of storage regions, each storage region accessible by a plurality of hosts; a memory storing executable instructions for adapting a file system to constraints limited by one or more storage regions; and a processor in communication with the memory and configured to read the executable instructions from the memory to: receive a request for an operation from one of the hosts that is to access at least one of the plurality of storage areas; set a lock on the at least one storage area to be accessed upon determining that the at least one storage area is not currently locked; update file system metadata to match file system metadata used by the one of the hosts accessing the at least one storage region; and update a version value of file system metadata associated with the at least one storage region.

The executable instructions further include instructions that cause the processor to queue another one of the hosts that requests an operation on the at least one storage region being accessed by the one of the hosts. The executable instructions further include instructions that cause the processor to send a lock release signal to the queued other one of the hosts when the one of the hosts completes the operation.

The executable instructions further include instructions that cause the processor to notify the other one of the hosts of the updated file system metadata and the updated version value, the other one of the hosts having requested an operation on the at least one storage region being accessed by the one of the hosts. The executable instructions further include instructions that cause the processor to send a current version value of the file system metadata to another one of the hosts upon receiving a request for an operation from the one of the hosts.

The lock includes at least one of a write lock and a read lock, the write lock and the read lock indicating that a write operation and a read operation, respectively, are to be performed on the at least one storage region. Here, the file system metadata includes metadata used by a Flash Translation Layer (FTL). The file system metadata includes an address mapping table storing a mapping between physical addresses and logical addresses associated with the requested operation. The file system metadata further includes block status information indicating whether the at least one storage area is available for a write operation. The file system metadata used by the one of the hosts accessing the at least one storage region includes an address mapping table for performing the requested operation.
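The file system metadata described above can be sketched as a simple data structure. This is an illustrative shape only (field names and values are hypothetical, not the disclosed on-device format): an address mapping table from logical to physical addresses, plus block status information marking which blocks are available for a write operation.

```python
# Sketch of FTL metadata: an address mapping table plus block status
# information, as named in the paragraph above.
ftl_metadata = {
    "mapping": {0: 1000, 1: 1001},   # logical address -> physical address
    "block_status": {1000: "written", 1001: "written", 1002: "free"},
}

def translate(logical_address):
    """Resolve a logical address to its physical address, as an FTL would."""
    return ftl_metadata["mapping"][logical_address]
```

A host performing a write would pick a "free" block from the block status information, update the mapping, and submit the updated metadata with its write command.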

In some embodiments of the disclosed technology, a memory system may include: a plurality of storage regions, each storage region accessible by a plurality of hosts; a memory storing executable instructions for adapting a file system to constraints limited by one or more storage regions; and a processor in communication with the memory and configured to read the executable instructions from the memory to: receive a request from one of the hosts to access at least one of the plurality of storage areas, the request including a lock request for the at least one of the plurality of storage areas; determine whether a conflict exists between the lock request and another lock currently set by another one of the hosts; and upon determining that there is no conflict, set a lock for the at least one storage region.

Here, determining whether there is a conflict includes determining whether a lock request by the one of the hosts and another lock currently set by another one of the hosts are both for a write operation on at least one of the plurality of storage regions. In this case, the executable instructions further include instructions that cause the processor to, upon determining that both the lock request by the one of the hosts and the other lock currently set by the other one of the hosts are for a write operation, queue the write operation requested by the one of the hosts to prevent the one of the hosts from accessing the at least one storage region. The executable instructions further include instructions that cause the processor to, upon completion of an operation associated with the other lock set by the other one of the hosts, update the file system metadata to match the file system metadata used by the other one of the hosts and update a version value of the file system metadata associated with the other lock.
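The conflict test described in this paragraph can be sketched as follows. Only the cases named in this summary are modeled (write/write conflicts and the read/read case that is allowed to proceed); function names and return strings are hypothetical illustrations, not part of the disclosure.

```python
def has_conflict(requested, current):
    """Per the scheme above, a conflict exists when the incoming lock
    request and the lock already set on the region are both for writes."""
    return requested == "write" and current == "write"

def handle_lock_request(requested, current, queue):
    if has_conflict(requested, current):
        queue.append(requested)  # queue the write until the lock clears
        return "QUEUED"
    return "GRANTED"             # e.g. two reads may proceed concurrently
```

A queued write is replayed once the conflicting lock is released and the FTL metadata has been synchronized, so the later writer never operates on a stale mapping.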

The executable instructions further include instructions that cause the processor to notify the one of the hosts that requests an operation on the at least one storage region being accessed by the other one of the hosts of the updated file system metadata and the updated version value. The executable instructions further include instructions that cause the processor to transmit a lock release signal to the one of the hosts upon completion of an operation associated with the other lock set by the other one of the hosts.

Further, determining whether a conflict exists includes determining whether a lock request by the one of the hosts and another lock currently set by another one of the hosts are both for a read operation on at least one of the plurality of storage regions. In this case, the executable instructions further include instructions that cause the processor to allow the one of the hosts to access the at least one storage region.

The file system metadata includes metadata used by a Flash Translation Layer (FTL). The file system metadata includes: an address mapping table storing a mapping between physical addresses and logical addresses associated with the requested operation.

In embodiments of the disclosed technology, a computing environment may be provided that is capable of supporting synchronization of FTL metadata included in a plurality of hosts in communication with a memory system.

Although various embodiments have been described for illustrative purposes, it will be apparent to those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the following claims.
