Dual-port NVMe controller and control method

Document No.: 68088    Publication date: 2021-10-01

Reading note: this technology, "Dual-port NVMe controller and control method", was designed and created by 张泽 and 王祎磊 on 2021-07-02. The application provides a dual-port NVMe controller and a control method. The dual-port NVMe controller comprises a first host interface and a second host interface, connected to a first host and a second host respectively and used respectively to receive a first NVMe command sent by the first host and a second NVMe command sent by the second host; a host command processing unit comprising a first host command processing branch and a second host command processing branch, where the first host command processing branch processes first NVMe commands received from the first host interface and the second host command processing branch processes second NVMe commands received from the second host interface; and at least one shared memory for storing the first NVMe commands and the second NVMe commands. The technical solution of the application supports dual-port mode while avoiding conflicts.

1. A dual-port NVMe controller, comprising:

a first host interface and a second host interface, connected to a first host and a second host respectively, and configured to receive, respectively, a first NVMe command sent by the first host and a second NVMe command sent by the second host;

a host command processing unit comprising a first host command processing branch and a second host command processing branch, wherein the first host command processing branch is configured to process the first NVMe command received from the first host interface, and the second host command processing branch is configured to process the second NVMe command received from the second host interface; and

at least one shared memory configured to store the first NVMe command and the second NVMe command.

2. The NVMe controller of claim 1,

wherein the shared memory comprises a first shared memory:

the first shared memory is connected to the first host command processing branch and the second host command processing branch and is configured to store the first NVMe command and the second NVMe command.

3. The NVMe controller according to claim 1 or 2, further comprising a shutdown control unit configured to:

in response to a notification that the first host interface is closed, close the first host interface and the first host command processing branch; and

in response to a notification that the second host interface is closed, close the second host interface and the second host command processing branch.

4. The NVMe controller of claim 3, wherein the shutdown control unit is further configured to:

in response to the notification that the first host interface is closed, release the storage space occupied by the first NVMe command in the shared memory; and

in response to the notification that the second host interface is closed, release the storage space occupied by the second NVMe command in the shared memory.

5. The NVMe controller of any one of claims 1-4,

the first host command processing branch includes a first SGL and/or PRP unit, a first write initiation circuit, and a first DMA transfer circuit:

the first SGL and/or PRP unit is configured to, in response to receiving the first NVMe command, acquire an SGL and/or PRP corresponding to the first NVMe command, generate one or more first DMA commands according to the SGL and/or PRP, and store the one or more first DMA commands in a shared memory;

the first write initiation circuit is configured to, in response to completion of storage of the one or more first DMA commands corresponding to one first NVMe command, send a first DMA command index to the first DMA transfer circuit; and

the first DMA transfer circuit is configured to acquire the one or more first DMA commands from the shared memory according to the first DMA command index, and to move data from the first host according to the acquired one or more first DMA commands.

6. The NVMe controller of any one of claims 1-5,

the second host command processing branch includes a second SGL and/or PRP unit, a second write initiation circuit, and a second DMA transfer circuit:

the second SGL and/or PRP unit is configured to, in response to receiving the second NVMe command, acquire an SGL and/or PRP corresponding to the second NVMe command, generate one or more second DMA commands according to the SGL and/or PRP, and store the one or more second DMA commands in the shared memory;

the second write initiation circuit is configured to, in response to completion of storage of the one or more second DMA commands corresponding to one second NVMe command, send a second DMA command index to the second DMA transfer circuit; and

the second DMA transfer circuit is configured to acquire the one or more second DMA commands from the shared memory according to the second DMA command index, and to move data from the second host according to the acquired one or more second DMA commands.

7. The NVMe controller of any one of claims 1-6,

the first host command processing branch comprises a first SGL and/or PRP unit and a first DMA transfer circuit; the second host command processing branch comprises a second SGL and/or PRP unit and a second DMA transfer circuit; and the controller further comprises at least one read initiation circuit;

the first SGL and/or PRP unit is configured to acquire and parse the first NVMe command to obtain a corresponding SGL and/or PRP, generate one or more first DMA commands according to the SGL and/or PRP, and store the one or more first DMA commands in a shared memory;

the second SGL and/or PRP unit is configured to acquire and parse the second NVMe command to obtain a corresponding SGL and/or PRP, generate one or more second DMA commands according to the SGL and/or PRP, and store the one or more second DMA commands in a shared memory;

the read initiation circuit is configured to request the back-end module to move data indicated by one or more first DMA commands or one or more second DMA commands from the NVM to a memory of the storage device, and, in response to data of at least one first DMA command or at least one second DMA command being moved to the memory of the storage device, to provide the first DMA command index to the first DMA transfer circuit or the second DMA command index to the second DMA transfer circuit;

the first DMA transfer circuit is configured to acquire the corresponding at least one first DMA command from the shared memory according to the first DMA command index received from the read initiation circuit, and to move data to the first host according to the acquired at least one first DMA command; and

the second DMA transfer circuit is configured to acquire the corresponding at least one second DMA command from the shared memory according to the second DMA command index received from the read initiation circuit, and to move data to the second host according to the acquired at least one second DMA command.

8. The NVMe controller of claim 7,

the read initiation circuit comprises a first read initiation circuit and a second read initiation circuit:

the first read initiation circuit is configured to request the back-end module to move data indicated by one or more first DMA commands corresponding to the first NVMe command from the NVM to the memory of the storage device, and to provide the first DMA command index to the first DMA transfer circuit in response to data of at least one first DMA command being moved to the memory of the storage device; and

the second read initiation circuit is configured to request the back-end module to move data indicated by one or more second DMA commands corresponding to the second NVMe command from the NVM to the memory of the storage device, and to provide the second DMA command index to the second DMA transfer circuit in response to data of at least one second DMA command being moved to the memory of the storage device.

9. The NVMe controller of claim 7 or 8,

when only one of the first host interface and the second host interface is connected to a host, the first read initiation circuit and the second read initiation circuit jointly control the first DMA transfer circuit or the second DMA transfer circuit.

10. A control method of a dual-port NVMe controller, the dual-port NVMe controller being configured to connect to two hosts, the method comprising the following steps:

processing a first NVMe command from a first host through a first host interface and a first host command processing branch;

processing a second NVMe command from a second host through a second host interface and a second host command processing branch; and

storing the first NVMe command and the second NVMe command via at least one shared memory.

Technical Field

The present application relates generally to the field of data processing. More particularly, the application relates to a dual-port NVMe controller and a control method.

Background

FIG. 1A illustrates a block diagram of a solid-state storage device. The solid-state storage device 102 is coupled to a host to provide storage capability to the host. The host and the solid-state storage device 102 may be coupled in various ways, including but not limited to SATA (Serial Advanced Technology Attachment), SCSI (Small Computer System Interface), SAS (Serial Attached SCSI), IDE (Integrated Drive Electronics), USB (Universal Serial Bus), PCIe (Peripheral Component Interconnect Express), NVMe (NVM Express), Ethernet, Fibre Channel, or a wireless communication network. The host may be an information processing device capable of communicating with the storage device in the manner described above, such as a personal computer, tablet, server, portable computer, network switch, router, cellular telephone, or personal digital assistant. The storage device 102 (hereinafter, the solid-state storage device is simply referred to as the storage device) includes an interface 103, a control component 104, one or more NVM chips 105, and a DRAM (Dynamic Random Access Memory) 110.

The NVM chip 105 uses a common storage medium such as NAND flash memory, phase-change memory, FeRAM (Ferroelectric RAM), MRAM (Magnetoresistive RAM), or RRAM (Resistive Random Access Memory).

The interface 103 may be adapted to exchange data with the host by means of, for example, SATA, IDE, USB, PCIe, NVMe, SAS, Ethernet, or Fibre Channel.

The control component 104 controls data transfer among the interface 103, the NVM chip 105, and the DRAM 110, and also handles memory management, host logical address to flash physical address mapping, wear leveling, bad block management, and the like. The control component 104 can be implemented in software, hardware, firmware, or a combination thereof; for example, it can take the form of an FPGA (Field-Programmable Gate Array), an ASIC (Application-Specific Integrated Circuit), or a combination thereof. The control component 104 may also include a processor or controller that executes software to manipulate the hardware of the control component 104 to process IO (Input/Output) commands. The control component 104 may also be coupled to the DRAM 110 to access its data. The DRAM may store FTL tables and/or cached data of IO commands.

The control component 104 issues commands to the NVM chip 105 in a manner conforming to the interface protocol of the NVM chip 105 to operate the NVM chip 105, and receives command execution results output from the NVM chip 105. Known NVM chip interface protocols include "Toggle", "ONFI", etc.

A memory target (Target) is one or more logical units (LUNs) sharing a CE (Chip Enable) signal within a NAND flash package. One or more dies (Die) may be included within the NAND flash package. Typically, a logical unit corresponds to a single die. A logical unit may include a plurality of planes (Planes). Multiple planes within a logical unit may be accessed in parallel, while multiple logical units within a NAND flash chip may execute commands and report status independently of each other.

Data is typically stored and read on the storage medium page by page, while data is erased in blocks. A block (also referred to as a physical block) contains a plurality of pages. Pages on the storage medium (referred to as physical pages) have a fixed size, e.g., 17664 bytes; physical pages may also have other sizes.

In the storage device 102, the mapping from logical addresses (LBAs) to physical addresses is maintained by an FTL (Flash Translation Layer). The logical addresses constitute the storage space of the solid-state storage device as perceived by upper-level software such as an operating system. A physical address is an address used to access a physical storage location of the solid-state storage device. Address mapping may also be implemented through an intermediate address form: for example, a logical address is mapped to an intermediate address, which in turn is further mapped to a physical address. The table structure storing the mapping from logical addresses to physical addresses is called an FTL table. FTL tables are important metadata in a storage device; each entry of an FTL table records an address mapping relationship at the granularity of a data unit in the storage device.
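For illustration only, the FTL mapping described above can be modeled as a flat array indexed by logical page number. This is a minimal C sketch under assumed names and a 4 KB mapping granularity; a real FTL additionally handles caching, journaling, garbage collection, and power-fail recovery.

```c
#include <stdint.h>
#include <stddef.h>

#define FTL_INVALID_PPA 0xFFFFFFFFu  /* marks an unmapped logical page */

/* Hypothetical flat FTL table: one 32-bit physical page address (PPA)
 * per logical page number (LPN). Mapping unit: one 4 KB data unit. */
typedef struct {
    uint32_t *lpn_to_ppa;  /* entry i holds the PPA for LPN i */
    size_t    num_lpns;
} ftl_table_t;

/* Read path: translate the LBA-derived LPN to a physical address. */
static uint32_t ftl_lookup(const ftl_table_t *ftl, size_t lpn)
{
    if (lpn >= ftl->num_lpns)
        return FTL_INVALID_PPA;
    return ftl->lpn_to_ppa[lpn];
}

/* Write path: record the new mapping once a physical page is allocated. */
static void ftl_update(ftl_table_t *ftl, size_t lpn, uint32_t new_ppa)
{
    if (lpn < ftl->num_lpns)
        ftl->lpn_to_ppa[lpn] = new_ppa;  /* old PPA becomes garbage */
}
```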

Hosts access storage devices with IO commands that follow a storage protocol. The control component generates one or more media interface commands from an IO command received from the host and provides them to the media interface controller. The media interface controller in turn generates storage media access commands (e.g., program commands, read commands, erase commands) that conform to the interface protocol of the NVM chip. The control component also tracks the execution of all media interface commands generated from one IO command and indicates the processing result of the IO command to the host.

Referring to FIG. 1B, the control component includes a host interface 1041, a host command processing unit 1042, a storage command processing unit 1043, a media interface controller 1044, and a storage media management unit 1045. The host interface 1041 acquires IO commands provided by the host. The host command processing unit 1042 generates storage commands from the IO commands and provides them to the storage command processing unit 1043. Each storage command accesses a memory space of the same size, e.g., 4 KB. The data unit recorded in the NVM chip corresponding to the data accessed by one storage command is referred to as a data frame. A physical page records one or more data frames. For example, with a physical page of 17664 bytes and a data frame of 4 KB, one physical page can store 4 data frames.
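The frames-per-page figure in the example above follows from simple division; below is a trivial C check using the numbers given in the text (17664-byte physical page, 4 KB data frame).

```c
#include <stdio.h>

int main(void)
{
    unsigned page_bytes  = 17664;     /* physical page size from the example */
    unsigned frame_bytes = 4 * 1024;  /* one storage command = one 4 KB frame */

    /* Whole data frames that fit in one physical page; the remainder
     * is typically used for metadata and ECC rather than user data. */
    unsigned frames = page_bytes / frame_bytes;           /* 4 */
    unsigned spare  = page_bytes - frames * frame_bytes;  /* 1280 bytes */

    printf("%u frames per page, %u spare bytes\n", frames, spare);
    return 0;
}
```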

The storage media management unit 1045 maintains a logical-to-physical address translation for each storage command. For example, the storage media management unit 1045 includes the FTL table described above. For a read command, the storage media management unit 1045 outputs the physical address corresponding to the logical address (LBA) accessed by the storage command. For a write command, the storage media management unit 1045 allocates an available physical address and records the mapping between the logical address (LBA) accessed by the command and the allocated physical address. The storage media management unit 1045 also performs functions required to manage the NVM chips, such as garbage collection and wear leveling.

The storage command processing unit 1043 operates the media interface controller 1044 to issue a storage media access command to the NVM chip 105 according to the physical address provided by the storage media management unit 1045.

For clarity, commands sent by the host to the storage device 102 are referred to as IO commands, commands sent by the host command processing unit 1042 to the storage command processing unit 1043 are referred to as storage commands, commands sent by the storage command processing unit 1043 to the media interface controller 1044 are referred to as media interface commands, and commands sent by the media interface controller 1044 to the NVM chip 105 are referred to as storage media access commands. The storage medium access commands follow the interface protocol of the NVM chip.

In the NVMe protocol, after receiving a write command, the solid-state storage device 102 obtains data from the memory of the host through the host interface 1041, and then writes the data into the flash memory. For a read command, the solid state storage device 102 moves data to the host memory through the host interface 1041 after the data is read from the flash memory.

Data transferred between a host and a storage device is described in one of two ways: PRP (Physical Region Page) and SGL (Scatter/Gather List). A PRP is a list of linked PRP entries, each of which is a 64-bit physical memory address describing one physical page of host memory. An SGL is a linked list consisting of one or more SGL segments, each of which consists of one or more SGL descriptors. Each SGL descriptor describes the address and length of a data buffer, i.e., each SGL descriptor corresponds to a host memory address space, and each SGL descriptor has a fixed size (e.g., 16 bytes).
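For illustration, the two description formats can be sketched as C types: a PRP entry is just a 64-bit physical address, while a 16-byte SGL data-block descriptor pairs an address with a length. The field layout shown follows the common NVMe descriptor format but should be read as a simplified sketch, not a normative definition.

```c
#include <stdint.h>

/* A PRP entry is a 64-bit host physical address pointing into one
 * memory page; a PRP list is simply an array of such entries. */
typedef uint64_t prp_entry_t;

/* Simplified 16-byte SGL data block descriptor: a host buffer
 * address plus its length. The last byte identifies the descriptor
 * type (data block, segment, last segment, ...). */
typedef struct {
    uint64_t address;      /* host memory address of the buffer */
    uint32_t length;       /* buffer length in bytes            */
    uint8_t  reserved[3];
    uint8_t  sgl_id;       /* descriptor type / sub-type        */
} __attribute__((packed)) sgl_descriptor_t;

_Static_assert(sizeof(sgl_descriptor_t) == 16,
               "each SGL descriptor has a fixed 16-byte size");
```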

Whether PRP or SGL, each essentially describes one or more address spaces in host memory, and these address spaces may be located anywhere in host memory. The host carries PRP- or SGL-related information in NVMe commands, telling the storage device where in host memory the data source lies, or where in host memory the data read from the flash memory should be placed.

FIG. 1C shows the basic structure of a prior-art host command processing unit 1042. In the prior art, when the host command processing unit 1042 processes an IO command, it needs to obtain the corresponding SGL or PRP from the host according to the IO command and parse the SGL or PRP to determine the corresponding host memory addresses. As shown in FIG. 1C, the host command processing unit 1042 mainly includes a shared memory, a DMA module, and a sub-CPU system. The sub-CPU system comprises a plurality of CPUs that run programs to process SGLs or PRPs and to configure the DMA module. The DMA module processes DMA commands and implements data transfer between the host and the storage device. The shared memory is used to store data, NVMe commands, and the like.

FIG. 1D shows the basic structure of a storage device from another perspective. The storage device includes an interface (corresponding to the interface 103 in FIG. 1B), a host command processing unit (corresponding to the host command processing unit 1042 in FIG. 1B), a DRAM (corresponding to the DRAM 110 in FIG. 1B), a bus, and a back-end module. The host command processing unit, the back-end module, and the DRAM interact through the bus. The back-end module corresponds to the storage command processing unit 1043 and the media interface controller 1044 in FIG. 1B. FIG. 1D primarily shows that, from the perspective of the host command processing unit, the parts of the storage device other than the host command processing unit can be collectively referred to as the back-end module.

With the development of SSD technology, dual-port technology has begun to emerge. FIG. 2A illustrates a dual-port application scenario in which two host systems (a first host and a second host) use different ports to access the same storage device; for example, the first host accesses the storage device through a PCIe interface and port 0, and the second host accesses the storage device through a PCIe interface and port 1. In dual-port mode, the storage device can interact with two hosts at the same time.

Disclosure of Invention

This application aims to realize data transfer between the hosts and the storage device in the dual-port mode of FIG. 2A. How the storage device processes the NVMe commands received from the two hosts is a key link in that data transfer.

By way of example, FIG. 2B shows a schematic structural diagram of a dual-port NVMe controller. As shown in FIG. 2B, the storage device includes a first host interface, a second host interface, a host command processing unit, a storage command processing unit, and a media interface controller. The single host command processing unit processes both first host commands from the first host interface and second host commands from the second host interface. Although this scheme can realize the dual-port function, because the two host interfaces simultaneously use the same host command processing unit, IO from the two host interfaces preempts each other's resources, causing conflicts.

To realize dual-port mode while avoiding such conflicts, this application provides additional hardware for processing the NVMe commands of the second host: a first host interface and a first host command processing branch are provided for the first host, a second host interface and a second host command processing branch are provided for the second host, and the hardware used by the two branches is mutually independent, such that the first host command processing branch is dedicated to processing first NVMe commands of the first host and the second host command processing branch is dedicated to processing second NVMe commands of the second host. The technical solution of this application can therefore realize dual-port mode while avoiding conflicts.

Furthermore, the two host command processing branches can share the shared memory, and if only one host is connected to the storage device, the set of dedicated hardware not connected to a host can remain idle, saving hardware resources.

Further, for NVMe write commands, the first host command processing branch and the second host command processing branch may share a CPU and a back-end module. After a DMA transfer circuit moves the data of a write command from the host to the memory of the storage device, it notifies the CPU, and the CPU operates the back-end module to write the data into the flash memory (NVM). The CPU can, for example, schedule and process at a data granularity of 4 KB, so it does not need to care which host's NVMe command the processed data belongs to; even one larger NVMe command therefore does not block other NVMe commands for a long time, which reduces preemption between the two hosts (a sketch of this idea follows).
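As referenced above, here is a minimal sketch of this host-agnostic scheduling idea, assuming a single shared FIFO of 4 KB chunks; all structure and function names are hypothetical, and the push side of the queue is omitted.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* One 4 KB unit of write data staged in storage-device memory.
 * host_id and cmd_id only identify where completion is reported;
 * they do not affect the scheduling order. */
typedef struct {
    uint8_t  host_id;   /* 0 = first host, 1 = second host */
    uint16_t cmd_id;    /* NVMe command the chunk belongs to */
    uint64_t buf_addr;  /* chunk location in device memory */
} chunk_t;

/* Toy shared FIFO holding 4 KB chunks from both branches. */
static chunk_t  fifo[8];
static unsigned head, tail;

static bool queue_pop(chunk_t *out)
{
    if (head == tail) return false;
    *out = fifo[head++ % 8];
    return true;
}

/* Stand-in for the back-end module programming one chunk to NVM. */
static void backend_program_nvm(const chunk_t *c)
{
    printf("program 4KB @%llx (host %u, cmd %u)\n",
           (unsigned long long)c->buf_addr, c->host_id, c->cmd_id);
}

/* Shared CPU loop: chunks from both hosts interleave at 4 KB
 * granularity, so a large command cannot monopolize the back end. */
void cpu_write_scheduler(void)
{
    chunk_t c;
    while (queue_pop(&c))
        backend_program_nvm(&c);
}
```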

According to a first aspect of the present application, there is provided a first dual-port NVMe controller, comprising: a first host interface and a second host interface, connected to a first host and a second host respectively and configured to receive, respectively, a first NVMe command sent by the first host and a second NVMe command sent by the second host; a host command processing unit comprising a first host command processing branch and a second host command processing branch, wherein the first host command processing branch is configured to process the first NVMe command received from the first host interface and the second host command processing branch is configured to process the second NVMe command received from the second host interface; and at least one shared memory for storing the first NVMe command and the second NVMe command.

According to the first dual-port NVMe controller of the first aspect of the present application, there is provided a second dual-port NVMe controller of the first aspect of the present application, wherein the shared memory comprises a first shared memory: the first shared memory is connected to the first host command processing branch and the second host command processing branch and is configured to store the first NVMe command and the second NVMe command.

According to the first dual-port NVMe controller of the first aspect of the present application, there is provided a third dual-port NVMe controller of the first aspect of the present application, wherein the shared memory comprises a first shared memory and a second shared memory: the first shared memory is connected to the first host command processing branch, and the second shared memory is connected to the second host command processing branch; the first shared memory is configured to store the first NVMe command, and the second shared memory is configured to store the second NVMe command.

According to the first dual-port NVMe controller of the first aspect of the present application, there is provided a fourth dual-port NVMe controller of the first aspect of the present application, further comprising a shutdown control unit configured to: in response to a notification that the first host interface is closed, close the first host interface and the first host command processing branch; and in response to a notification that the second host interface is closed, close the second host interface and the second host command processing branch.

According to the fourth dual-port NVMe controller of the first aspect of the present application, there is provided a fifth dual-port NVMe controller of the first aspect of the present application, wherein the shutdown control unit is further configured to: in response to the notification that the first host interface is closed, release the storage space occupied by the first NVMe command in the shared memory; and in response to the notification that the second host interface is closed, release the storage space occupied by the second NVMe command in the shared memory.

According to any one of the first to fifth dual-port NVMe controllers of the first aspect of the present application, there is provided a sixth dual-port NVMe controller of the first aspect of the present application, wherein the first host command processing branch includes a first SGL and/or PRP unit, a first write initiation circuit, and a first DMA transfer circuit: the first SGL and/or PRP unit is configured to, in response to receiving the first NVMe command, acquire an SGL and/or PRP corresponding to the first NVMe command, generate one or more first DMA commands according to the SGL and/or PRP, and store the one or more first DMA commands in a shared memory; the first write initiation circuit is configured to, in response to completion of storage of the one or more first DMA commands corresponding to one first NVMe command, send a first DMA command index to the first DMA transfer circuit; and the first DMA transfer circuit is configured to acquire the one or more first DMA commands from the shared memory according to the first DMA command index, and to move data from the first host according to the acquired one or more first DMA commands.

According to the sixth dual-port NVMe controller of the first aspect of the present application, there is provided a seventh dual-port NVMe controller of the first aspect of the present application, wherein the second host command processing branch comprises a second SGL and/or PRP unit, a second write initiation circuit, and a second DMA transfer circuit: the second SGL and/or PRP unit is configured to, in response to receiving the second NVMe command, acquire an SGL and/or PRP corresponding to the second NVMe command, generate one or more second DMA commands according to the SGL and/or PRP, and store the one or more second DMA commands in the shared memory; the second write initiation circuit is configured to, in response to completion of storage of the one or more second DMA commands corresponding to one second NVMe command, send a second DMA command index to the second DMA transfer circuit; and the second DMA transfer circuit is configured to acquire the one or more second DMA commands from the shared memory according to the second DMA command index, and to move data from the second host according to the acquired one or more second DMA commands.

According to the seventh dual-port NVMe controller of the first aspect of the present application, there is provided an eighth dual-port NVMe controller of the first aspect of the present application, further comprising at least one processor module connected to the first DMA transfer circuit and the second DMA transfer circuit and configured to: after the first DMA transfer circuit moves the data indicated by the one or more first DMA commands to the memory of the storage device, and/or after the second DMA transfer circuit moves the data indicated by the one or more second DMA commands to the memory of the storage device, control the back-end module to write the corresponding data into the NVM.

According to the seventh dual-port NVMe controller of the first aspect of the present application, there is provided a ninth dual-port NVMe controller of the first aspect of the present application, wherein the first DMA transfer circuit and the second DMA transfer circuit are coupled to the memory of the storage device through a bus, and the shutdown control unit is further configured to: in response to the notification that the first host interface is closed, determine whether the first DMA transfer circuit is performing a data transfer, and if so, control the bus to restart or abandon the current data transfer; and in response to the notification that the second host interface is closed, determine whether the second DMA transfer circuit is performing a data transfer, and if so, control the bus to restart or abandon the current data transfer.

According to any one of the first to fifth dual-port NVMe controllers of the first aspect of the present application, there is provided a tenth dual-port NVMe controller of the first aspect of the present application, wherein the first host command processing branch comprises a first SGL and/or PRP unit and a first DMA transfer circuit; the second host command processing branch comprises a second SGL and/or PRP unit and a second DMA transfer circuit; and the controller further comprises at least one read initiation circuit; the first SGL and/or PRP unit is configured to acquire and parse the first NVMe command to obtain a corresponding SGL and/or PRP, generate one or more first DMA commands according to the SGL and/or PRP, and store the one or more first DMA commands in a shared memory; the second SGL and/or PRP unit is configured to acquire and parse the second NVMe command to obtain a corresponding SGL and/or PRP, generate one or more second DMA commands according to the SGL and/or PRP, and store the one or more second DMA commands in a shared memory; the read initiation circuit is configured to request the back-end module to move data indicated by one or more first DMA commands or one or more second DMA commands from the NVM to the memory of the storage device, and, in response to data of at least one first DMA command or at least one second DMA command being moved to the memory of the storage device, to provide the first DMA command index to the first DMA transfer circuit or the second DMA command index to the second DMA transfer circuit; the first DMA transfer circuit is configured to acquire the corresponding at least one first DMA command from the shared memory according to the first DMA command index received from the read initiation circuit, and to move data to the first host according to the acquired at least one first DMA command; and the second DMA transfer circuit is configured to acquire the corresponding at least one second DMA command from the shared memory according to the second DMA command index received from the read initiation circuit, and to move data to the second host according to the acquired at least one second DMA command.
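A hedged C sketch of the read path just described: the read initiation circuit waits for the back-end module to stage data into device memory, then routes the DMA command index to the DMA transfer circuit of the originating host. The names and the routing-by-host-ID detail are assumptions for illustration.

```c
#include <stdint.h>

typedef struct {
    uint32_t index;    /* DMA command index into the shared memory */
    uint8_t  host_id;  /* 0 = first host, 1 = second host */
} dma_cmd_ref_t;

/* Stand-ins for the per-branch DMA transfer circuits. */
static void dma1_devmem_to_host(uint32_t index) { (void)index; }
static void dma2_devmem_to_host(uint32_t index) { (void)index; }

/* Read initiation circuit: invoked once the back-end module reports
 * that the data of one DMA command has reached device memory. It
 * forwards the DMA command index to the branch of the owning host. */
void read_initiation_on_data_ready(dma_cmd_ref_t ref)
{
    if (ref.host_id == 0)
        dma1_devmem_to_host(ref.index);  /* move data to the first host */
    else
        dma2_devmem_to_host(ref.index);  /* move data to the second host */
}
```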

According to the tenth dual-port NVMe controller of the first aspect of the present application, there is provided an eleventh dual-port NVMe controller of the first aspect of the present application, wherein the read initiation circuit comprises a first read initiation circuit and a second read initiation circuit: the first read initiation circuit is configured to request the back-end module to move data indicated by one or more first DMA commands corresponding to the first NVMe command from the NVM to the memory of the storage device, and to provide the first DMA command index to the first DMA transfer circuit in response to data of at least one first DMA command being moved to the memory of the storage device; and the second read initiation circuit is configured to request the back-end module to move data indicated by one or more second DMA commands corresponding to the second NVMe command from the NVM to the memory of the storage device, and to provide the second DMA command index to the second DMA transfer circuit in response to data of at least one second DMA command being moved to the memory of the storage device.

According to the eleventh dual-port NVMe controller of the first aspect of the present application, there is provided a twelfth dual-port NVMe controller of the first aspect of the present application, wherein when only one of the first host interface and the second host interface is connected to a host, the first read initiation circuit and the second read initiation circuit jointly control the first DMA transfer circuit or the second DMA transfer circuit.

According to any one of the tenth to twelfth dual-port NVMe controllers of the first aspect of the present application, there is provided a thirteenth dual-port NVMe controller of the first aspect of the present application, wherein the first read initiation circuit and the second read initiation circuit each comprise a CPU.

According to the tenth dual-port NVMe controller of the first aspect of the present application, there is provided a fourteenth dual-port NVMe controller of the first aspect of the present application, wherein the first DMA transfer circuit and the second DMA transfer circuit are coupled to the memory of the storage device through a bus, and the shutdown control unit is further configured to: in response to the notification that the first host interface is closed, determine whether the first DMA transfer circuit is performing a data transfer, and if so, control the bus to restart or abandon the current data transfer; and in response to the notification that the second host interface is closed, determine whether the second DMA transfer circuit is performing a data transfer, and if so, control the bus to restart or abandon the current data transfer.

According to a second aspect of the present application, there is provided a control method of a first dual-port NVMe controller, the dual-port NVMe controller being configured to connect to two hosts, the method comprising: processing a first NVMe command from a first host through a first host interface and a first host command processing branch; processing a second NVMe command from a second host through a second host interface and a second host command processing branch; and storing the first NVMe command and the second NVMe command via at least one shared memory.

According to the control method of the first dual-port NVMe controller of the second aspect of the present application, there is provided a control method of a second dual-port NVMe controller of the second aspect of the present application, wherein the first NVMe command and the second NVMe command are stored in one shared memory, or the first NVMe command and the second NVMe command are respectively stored in two corresponding shared memories.

According to the control method of the first dual-port NVMe controller of the second aspect of the present application, there is provided a control method of a third dual-port NVMe controller of the second aspect of the present application, wherein the first host interface and the first host command processing branch are closed upon receiving a notification that the first host interface is closed, and the second host interface and the second host command processing branch are closed upon receiving a notification that the second host interface is closed.

According to the control method of the third dual-port NVMe controller of the second aspect of the present application, there is provided a control method of a fourth dual-port NVMe controller of the second aspect of the present application, wherein upon receiving the notification that the first host interface is closed, the storage space occupied by the NVMe command of the first host in the shared memory is released; and upon receiving the notification that the second host interface is closed, the storage space occupied by the NVMe command of the second host in the shared memory is released.

According to the second aspect of the present application, there is provided a control method of a fifth dual-port NVMe controller, wherein processing the first NVMe command from the first host through the first host interface and the first host command processing branch comprises: in response to receiving the first NVMe command, acquiring an SGL and/or PRP corresponding to the first NVMe command, generating one or more first DMA commands according to the SGL and/or PRP, and storing the one or more first DMA commands in a shared memory; in response to completion of storage of the one or more first DMA commands corresponding to one first NVMe command, sending a first DMA command index; and acquiring the one or more first DMA commands from the shared memory according to the first DMA command index, and moving data from the first host according to the acquired one or more first DMA commands.

According to the control method of the fifth dual-port NVMe controller of the second aspect of the present application, there is provided a control method of a sixth dual-port NVMe controller of the second aspect of the present application, wherein processing the second NVMe command from the second host through the second host interface and the second host command processing branch comprises: in response to receiving the second NVMe command, acquiring an SGL and/or PRP corresponding to the second NVMe command, generating one or more second DMA commands according to the SGL and/or PRP, and storing the one or more second DMA commands in a shared memory; in response to completion of storage of the one or more second DMA commands corresponding to one second NVMe command, sending a second DMA command index; and acquiring the one or more second DMA commands from the shared memory according to the second DMA command index, and moving data from the second host according to the acquired one or more second DMA commands.

According to the control method of the sixth dual-port NVMe controller of the second aspect of the present application, there is provided a control method of a seventh dual-port NVMe controller of the second aspect of the present application, further comprising: after the data indicated by the one or more first DMA commands is moved to the memory of the storage device, controlling the back-end module to write the data into the NVM; and after the data indicated by the one or more second DMA commands is moved to the memory of the storage device, controlling the back-end module to write the data into the NVM.

According to the control method of the sixth dual-port NVMe controller of the second aspect of the present application, there is provided a control method of an eighth dual-port NVMe controller of the second aspect of the present application, wherein, in response to a notification that the first host interface or the second host interface is closed, it is determined whether the corresponding DMA transfer circuit is performing a data transfer, and if so, the bus is controlled to restart or abandon the current data transfer.

According to the control method of any one of the first to fourth dual-port NVMe controllers of the second aspect of the present application, there is provided a control method of a ninth dual-port NVMe controller of the second aspect of the present application, wherein processing the first NVMe command by the first host command processing branch comprises: acquiring and parsing the first NVMe command to obtain a corresponding SGL and/or PRP, generating one or more first DMA commands according to the SGL and/or PRP, and storing the one or more first DMA commands in a shared memory; and acquiring the corresponding at least one first DMA command from the shared memory according to the first DMA command index, and moving data to the first host according to the acquired at least one first DMA command; and processing the second NVMe command by the second host command processing branch comprises: acquiring and parsing the second NVMe command to obtain a corresponding SGL and/or PRP, generating one or more second DMA commands according to the SGL and/or PRP, and storing the one or more second DMA commands in a shared memory; and acquiring the corresponding at least one second DMA command from the shared memory according to the second DMA command index, and moving data to the second host according to the acquired at least one second DMA command.

According to the control method of the ninth dual-port NVMe controller of the second aspect of the present application, there is provided a control method of a tenth dual-port NVMe controller of the second aspect of the present application, further comprising: requesting, through a first read initiation circuit in the first host command processing branch, the back-end module to move data indicated by one or more first DMA commands corresponding to the first NVMe command from the NVM to the memory of the storage device, and sending the first DMA command index in response to data of at least one first DMA command being moved to the memory of the storage device; and requesting, through a second read initiation circuit in the second host command processing branch, the back-end module to move data indicated by one or more second DMA commands corresponding to the second NVMe command from the NVM to the memory of the storage device, and sending the second DMA command index in response to data of at least one second DMA command being moved to the memory of the storage device.

According to the control method of the tenth dual-port NVMe controller of the second aspect of the present application, there is provided a control method of an eleventh dual-port NVMe controller of the second aspect of the present application, wherein the first read initiation circuit and the second read initiation circuit are the same circuit or two different circuits.

According to the control method of the eleventh dual-port NVMe controller of the second aspect of the present application, there is provided a control method of a twelfth dual-port NVMe controller of the second aspect of the present application, wherein when only one of the first host interface and the second host interface is connected to a host, the first DMA transfer circuit or the second DMA transfer circuit is jointly controlled by the first read initiation circuit and the second read initiation circuit.

According to the control method of the tenth dual-port NVMe controller of the second aspect of the present application, there is provided a control method of a thirteenth dual-port NVMe controller of the second aspect of the present application, wherein the first read initiation circuit and the second read initiation circuit each employ a CPU.

According to the control method of the ninth dual-port NVMe controller of the second aspect of the present application, there is provided a control method of a fourteenth dual-port NVMe controller of the second aspect of the present application, wherein, in response to a notification that the first host interface or the second host interface is closed, it is determined whether the corresponding DMA transfer circuit is performing a data transfer, and if so, the bus is controlled to restart or abandon the current data transfer.

According to a third aspect of the present application, there is provided a storage device comprising any one of the first to fourteenth NVMe controllers of the first aspect of the present application.

According to a fourth aspect of the present application, there is provided an electronic device comprising any one of the first to fourteenth NVMe controllers of the first aspect of the present application.

Drawings

In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly described below. It is apparent that the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings.

FIG. 1A is a block diagram of a solid-state storage device of the prior art;

FIG. 1B is a schematic diagram of a control unit in the prior art;

FIG. 1C is a schematic diagram of a host command processing unit in the prior art;

FIG. 1D is a block diagram of another configuration of a memory device;

FIG. 2A is a schematic diagram of a dual-port mode application scenario;

FIG. 2B is a block diagram of a dual port NVMe controller;

FIG. 3 is a block diagram of a dual-port NVMe controller according to an embodiment of the present application;

FIG. 4 is a schematic diagram illustrating a dual-port NVMe controller processing a write command according to an embodiment of the present application;

FIG. 5 is a schematic diagram illustrating a dual-port NVMe controller processing a read command according to an embodiment of the present application;

FIG. 6 is a schematic diagram illustrating the operation of another dual-port NVMe controller processing a read command according to an embodiment of the present application;

FIG. 7 is a schematic diagram illustrating the operation of yet another dual-port NVMe controller processing a read command according to an embodiment of the present application;

FIG. 8 is a schematic diagram illustrating still another dual-port NVMe controller processing a read command according to an embodiment of the present application;

FIG. 9 is a flowchart of a control method of a dual-port NVMe controller according to an embodiment of the present application;

FIG. 10 is a detailed flowchart of step S102 in FIG. 9 (for a write command);

FIG. 11 is another detailed flowchart of step S102 in FIG. 9 (for a read command);

FIG. 12 is yet another detailed flowchart of step S102 in FIG. 9 (for a read command).

Detailed Description

A dual-port NVMe controller is illustrated in FIG. 3. When two hosts are connected to the storage device, and both hosts send NVMe commands to the storage device or only one of them does, the NVMe controller shown in FIG. 3 can be applied to process the NVMe commands sent by the host or hosts.

As shown in FIG. 3, the NVMe controller includes a first host interface, a second host interface, a first host command processing branch, a second host command processing branch, and a shared memory. In addition, the NVMe controller includes a storage command processing unit and a media interface controller. The first host interface is connected to the first host and receives first NVMe commands sent by the first host; the second host interface is connected to the second host and receives second NVMe commands sent by the second host. The first host command processing branch processes first NVMe commands received from the first host interface; the second host command processing branch processes second NVMe commands received from the second host interface; and the shared memory stores the first NVMe commands and the second NVMe commands. For clarity, the first host command processing branch and the second host command processing branch are collectively referred to as the host command processing unit.
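For illustration, the composition described above can be sketched as C types; the separation of the two branches and the shared reference to the memory are the point, and every name here is hypothetical.

```c
#include <stdint.h>
#include <stddef.h>

/* Storage for NVMe commands and DMA commands, shared by both branches. */
typedef struct {
    uint8_t *buf;
    size_t   size;
} shared_memory_t;

/* One independent per-host command processing branch (FIG. 4 adds its
 * internal SGL/PRP unit, write initiation circuit, and DMA circuit). */
typedef struct {
    void (*process_nvme_cmd)(const void *cmd);
    shared_memory_t *shmem;   /* where this branch parks its commands */
} host_branch_t;

/* Dual-port controller: two host interfaces, two dedicated branches,
 * at least one shared memory. No resource is shared between the
 * branches except the shared memory, which avoids IO conflicts. */
typedef struct {
    host_branch_t   branch0;  /* serves the first host interface  */
    host_branch_t   branch1;  /* serves the second host interface */
    shared_memory_t shmem;    /* single-shared-memory variant     */
} dual_port_nvme_ctrl_t;
```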

In one embodiment, the first host interface and the first host command processing branch are hardware dedicated to the first host, and the second host interface and the second host command processing branch are hardware dedicated to the second host. The shared memory is a device for storing NVMe commands, or information or commands related to NVMe commands. As an example, to save hardware resources, a single shared memory may be provided in the storage device, and the first host command processing branch and the second host command processing branch share this memory.

In another embodiment, to prevent the first host command processing branch and the second host command processing branch from preempting each other's shared-memory resources, two shared memories may be used in the NVMe controller provided in the present application: for example, a first shared memory is allocated to the first host interface and the first host command processing branch, and a second shared memory is allocated to the second host interface and the second host command processing branch, so that the first shared memory is used exclusively to store information related to first NVMe commands sent by the first host, and the second shared memory is used exclusively to store information related to second NVMe commands sent by the second host.

In a typical application scenario, the dual-port NVMe controller serves two hosts, the first host and the second host shown in FIG. 3. In other application scenarios, the dual-port NVMe controller may serve a single host; for example, one of the two hosts is disconnected from the storage device or sends no NVMe commands to it, i.e., there is no information interaction between that host and the storage device. For example, when the dual-port NVMe controller shown in FIG. 3 serves only the first host, the dual-port NVMe controller generates a notification that the second host is shut down.

For example, the notification of the shutdown of the second host may be generated by detecting the PCIe interface signal corresponding to the second host to recognize that there is no information interaction between the second host and the storage device. A shutdown control unit inside the dual-port NVMe controller (not shown in the figure; it may be implemented by a CPU or by hardware independent of the CPU), in response to the notification of the shutdown of the second host, shuts down the hardware dedicated to the second host, which includes the second host interface and the second host command processing branch, or the second host interface, the second host command processing branch, and the second shared memory. In one embodiment, the shutdown control unit may further release the storage space occupied by first-NVMe-command-related information in the first shared memory in response to a notification that the first host interface is closed, or release the storage space occupied by second-NVMe-command-related information in the second shared memory in response to a notification that the second host interface is closed.

In yet another embodiment, with a single shared memory, the shutdown control unit may also release the storage space occupied by the first NVMe command in the shared memory in response to the notification that the first host interface is closed, or release the storage space occupied by the second NVMe command in the shared memory in response to the notification that the second host interface is closed. A sketch of this shutdown behavior follows.
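A minimal sketch of the shutdown control unit's behavior across the embodiments above, assuming the single-shared-memory variant and including the in-flight DMA check described for the ninth controller; all function names are illustrative stand-ins.

```c
#include <stdbool.h>

/* Illustrative stubs for the hardware actions named in the text. */
static void power_gate_interface(int port)         { (void)port; }
static void power_gate_branch(int port)            { (void)port; }
static void shmem_free_cmds_of_host(int port)      { (void)port; }
static bool dma_busy(int port)                     { (void)port; return false; }
static void bus_restart_or_abort_transfer(int port){ (void)port; }

/* Shutdown control unit: on notification that a host interface is
 * closed, quiesce any in-flight DMA on the bus, close that host's
 * dedicated hardware, and release its commands' shared-memory space. */
void on_host_interface_closed(int port /* 0 or 1 */)
{
    if (dma_busy(port))                  /* DMA mid-transfer on the bus? */
        bus_restart_or_abort_transfer(port);
    power_gate_interface(port);          /* close the host interface    */
    power_gate_branch(port);             /* close its processing branch */
    shmem_free_cmds_of_host(port);       /* free its NVMe command space */
}
```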

A host sends IO commands to the storage device based on the NVMe protocol, and IO commands sent based on the NVMe protocol are generally referred to as NVMe commands. An NVMe command is, for example, a read command or a write command. A write command instructs data to be moved from the host to the storage device, which involves two steps: the data is moved from host memory to the DRAM by the DMA transfer circuit, and then from the DRAM to the NVM by the back-end module (the media interface controller 1044 shown in FIG. 1B).

By way of example, FIG. 4 illustrates the operation of a dual-port NVMe controller processing a write command.

As shown in FIG. 4, the first host command processing branch includes a first SGL/PRP unit, a first write initiation circuit, and a first DMA transfer circuit, and the second host command processing branch includes a second SGL/PRP unit, a second write initiation circuit, and a second DMA transfer circuit. The structure of the first host command processing branch is similar to that of the second host command processing branch, and the procedure by which the first branch processes a first NVMe command is similar to that by which the second branch processes a second NVMe command. The following therefore mainly describes the structure of the first host command processing branch, the functions of its components, and its processing of a first NVMe command (a write command), and only briefly addresses the second host command processing branch. It should be noted that the terms first NVMe command/first write command/first read command and second NVMe command/second write command/second read command, as used hereinafter, identify the host to which an NVMe command/write command/read command corresponds; "first" and "second" do not denote the number of NVMe commands. Similarly, "first DMA command" and "second DMA command" identify the host processing branch to which a DMA command corresponds, not the number of DMA commands.

As shown in FIG. 4, the first SGL/PRP unit, in response to receiving the first NVMe command, acquires the SGL or PRP corresponding to the first NVMe command, generates one or more first DMA commands according to the SGL or PRP, and stores the one or more first DMA commands in the shared memory.

For example, the first SGL/PRP unit may be implemented by a CPU or by a hardware circuit independent of the CPU. Extracting the SGL or PRP from an NVMe command and generating DMA commands from the SGL or PRP can be done by means known in the art, so only a brief explanation is given here.

An NVMe command includes a PRP field or an SGL field, which may be the SGL or PRP itself, pointing to the host memory address space to be accessed, or may be a pointer to an SGL or PRP linked list. Based on this, in one embodiment the NVMe command carries an SGL or PRP, and the first SGL and/or PRP unit may acquire the SGL or PRP directly in response to receiving the NVMe command. In another embodiment, the NVMe command carries an SGL or PRP pointer, and the first SGL and/or PRP unit, in response to receiving the NVMe command, accesses the host according to the SGL or PRP pointer and obtains the SGL or PRP from the host.
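The two cases just described (an inline SGL/PRP versus a pointer to a list in host memory) can be sketched as follows. The `dptr_field_t` layout and the `fetch_from_host` helper are hypothetical simplifications of the command format and of a read over the host interface.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

typedef struct {
    int      is_pointer;       /* simplified flag: inline or pointer? */
    uint8_t  inline_desc[16];  /* SGL/PRP carried in the command itself */
    uint64_t list_ptr;         /* host address of the SGL/PRP list */
} dptr_field_t;

/* Stand-in for a DMA read from host memory over the host interface. */
static void fetch_from_host(uint64_t host_addr, void *dst, size_t len)
{
    (void)host_addr; (void)dst; (void)len;  /* hardware access elided */
}

/* First SGL/PRP unit: obtain the SGL/PRP either directly from the
 * command or by following the pointer into host memory. `cache`
 * models the first SGL/PRP cache unit (assumed >= 16 and >= len bytes). */
void acquire_sgl_or_prp(const dptr_field_t *f, uint8_t *cache, size_t len)
{
    if (!f->is_pointer)
        memcpy(cache, f->inline_desc, sizeof f->inline_desc);
    else
        fetch_from_host(f->list_ptr, cache, len);
}
```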

In one application scenario, the first SGL/PRP unit may include both an SGL unit and a PRP unit, with the SGL unit processing SGL-related NVMe commands and the PRP unit processing PRP-related NVMe commands; that is, the first SGL/PRP unit can process both SGL-related and PRP-related NVMe commands. In another application scenario, the first SGL/PRP unit may include only an SGL unit or only a PRP unit, i.e., it processes either SGL-related or PRP-related NVMe commands. The structure of the first SGL/PRP unit is not limited in this application.

As shown in FIG. 4, the first host transfers a first write command to the storage device through the first host interface, which places the first write command in the shared memory for storage; this is represented as process (1). The CPU, or a hardware circuit independent of the CPU, fetches the PRP/SGL field of the first write command from the shared memory and provides the first write command to the first SGL/PRP unit, represented as process (2). If the first write command carries an SGL, the first SGL/PRP unit caches the SGL in the first SGL/PRP cache unit; if the first write command carries an SGL pointer, the first SGL/PRP unit obtains the SGL from the first host through the first host interface and caches it in the first SGL/PRP cache unit; this is represented as process (3). Next, the first SGL/PRP unit generates one or more first DMA commands from the SGL and stores them in the shared memory, represented as process (4).

After generating the first DMA commands, the first SGL/PRP unit notifies the first write initiation circuit, represented as process (5), passing it a first DMA command index (e.g., a DMA command pointer) indicating the location of the first DMA commands in the shared memory. The first write initiation circuit then passes the first DMA command index to the first DMA transfer circuit, represented as process (6). The first DMA transfer circuit receives the first DMA command index and fetches one or more first DMA commands from the shared memory according to it, represented as process (7-1); the first DMA transfer circuit then performs the data transfer operation, moving data from the first host to the memory of the storage device, represented as process (7-2).
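
A minimal sketch of processes (7-1) and (7-2), assuming a flat shared-memory array and a dma_hw_copy() stand-in for the transfer engine; notify_done() models the end-of-transfer feedback of process (8) described next. All names are hypothetical.

#include <stdint.h>

struct dma_cmd { uint64_t host_addr, dram_addr; uint32_t len; uint16_t write_cmd_id; };

extern struct dma_cmd shared_mem[];                 /* hypothetical shared memory pool */
extern void dma_hw_copy(uint64_t src, uint64_t dst, uint32_t len); /* transfer engine stand-in */
extern void notify_done(uint16_t write_cmd_id);     /* process (8): end-of-transfer feedback */

void first_dma_transfer(uint32_t dma_idx)
{
    struct dma_cmd *dc = &shared_mem[dma_idx];           /* process (7-1): fetch by index */
    dma_hw_copy(dc->host_addr, dc->dram_addr, dc->len);  /* process (7-2): first host -> DRAM */
    notify_done(dc->write_cmd_id);                       /* report which write command it belongs to */
}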

When the data transfer indicated by one of the first DMA commands finishes, a notification of the end of the data transfer is generated, represented as process (8). In process (5), the first write initiation circuit obtains, in addition to the first DMA command index, the first write command ID. Therefore, after a first DMA command has been processed, the corresponding information (including the ID of the first write command to which that DMA command belongs) is fed back to the first write initiation circuit, which can thereby identify which first write command the first DMA command corresponds to. For example, suppose a first write command comprises three first DMA commands, denoted 1#, 2#, and 3#. As each of 1#, 2#, and 3# is processed, the first write initiation circuit is notified accordingly. Based on the first write command ID, the first write initiation circuit determines that all three first DMA commands corresponding to the first write command have been processed, and generates a notification that execution of the first write command is complete, represented as process (9). According to the NVMe protocol, this notification may be implemented by operating a CQ queue. While the first host is notified, the space occupied in the shared memory by the first write command and its first DMA commands (e.g., 1#, 2#, and 3#) may be released.
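
The per-command bookkeeping just described can be sketched as a counter of outstanding DMA commands per first write command ID; cq_post() and shared_mem_free() are hypothetical stand-ins for writing the CQ entry and releasing shared-memory space.

#include <stdint.h>

#define MAX_WRITE_CMDS 256

static uint8_t remaining[MAX_WRITE_CMDS];  /* outstanding DMA commands per write command ID */

extern void cq_post(uint16_t write_cmd_id);         /* hypothetical: CQ entry, process (9) */
extern void shared_mem_free(uint16_t write_cmd_id); /* hypothetical: release command space */

void on_first_write_cmd(uint16_t cmd_id, uint8_t n_dma)
{
    remaining[cmd_id] = n_dma;                 /* e.g., 3 for DMA commands 1#, 2#, 3# */
}

void on_first_dma_done(uint16_t cmd_id)        /* called per process (8) feedback */
{
    if (--remaining[cmd_id] == 0) {            /* last DMA command of this write command */
        cq_post(cmd_id);                       /* notify the first host */
        shared_mem_free(cmd_id);               /* free the write command and its DMA commands */
    }
}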

In the embodiment shown in fig. 4, the first write initiation circuit learns that a first DMA command has been stored because the first SGL/PRP unit notifies it whenever a new first DMA command is written into the shared memory. In other embodiments, another circuit may detect whether the shared memory holds a pending first DMA command and notify the first write initiation circuit accordingly. The first SGL/PRP cache unit is used to cache the SGL or PRP; in some embodiments it may be omitted, depending on the processing speed of the first SGL/PRP unit. In addition, since the first SGL/PRP unit, the first write initiation circuit, and the first DMA transfer circuit are all implemented by hardware circuits independent of the CPU, the CPU overhead can be reduced.

As for the second host and the second host command processing branch corresponding to the second host interface, the structures and processing procedures of the second SGL/PRP unit, the second write initiation circuit, and the second DMA transfer circuit are similar to those of the first host command processing branch and are not repeated here. It should be noted that the second host command processing branch is independent of the first: the first branch is dedicated to processing first write commands from the first host, the second branch is dedicated to processing second write commands from the second host, the first branch does not process second write commands, and the second branch does not process first write commands.

In addition, the first host command processing branch and the second host command processing branch share the CPU and the back-end module. The CPU is connected to the first DMA transfer circuit and the second DMA transfer circuit, and controls the back-end module to write data into the NVM after the first DMA transfer circuit has moved the data indicated by the first DMA commands corresponding to a first write command into the DRAM, or after the second DMA transfer circuit has moved the data indicated by the second DMA commands corresponding to a second write command into the DRAM. Sharing the CPU and the back-end module saves hardware resources. Moreover, because the CPU schedules and processes storage commands at a fixed data granularity (e.g., 4 KB), it does not need to care which host's write command the processed data belongs to. Even if a first write command from the first host is large, the CPU does not have to finish all first DMA commands of that write command before processing the second DMA commands of a second write command; for example, it may fetch DMA commands from the shared memory at the fixed data granularity in arbitrary order. A second write command from the second host is therefore not blocked for a long time, which reduces the latency of NVMe command processing.
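
The fairness argument can be sketched as follows: if the CPU drains a single queue of fixed-size (e.g., 4 KB) work units, units from the two hosts interleave and neither host's commands can monopolize the back-end module. The queue API and back-end call below are hypothetical illustrations.

#include <stdint.h>
#include <stdbool.h>

struct work_unit {
    uint8_t  host;       /* recorded for bookkeeping, never consulted by the scheduler */
    uint64_t dram_addr;
    uint32_t len;        /* at most the fixed granularity, e.g., 4 KB */
};

extern bool pop_work(struct work_unit *w);                       /* hypothetical shared-pool pop */
extern void backend_write_nvm(uint64_t dram_addr, uint32_t len); /* DRAM -> NVM via back-end */

void cpu_schedule_loop(void)
{
    struct work_unit w;
    while (pop_work(&w))                      /* pop order is independent of the issuing host */
        backend_write_nvm(w.dram_addr, w.len);
}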

Furthermore, the first DMA transfer circuit and the second DMA transfer circuit are coupled to the DRAM through a bus (e.g., the bus shown in fig. 1D). When the first host interface is closed, the aforementioned shutdown control unit further determines whether the first DMA transfer circuit is performing a data transfer, and if so, controls the bus to restart or abandon the current transfer. Similarly, when the second host interface is closed, the shutdown control unit further determines whether the second DMA transfer circuit is performing a data transfer, and if so, controls the bus to restart or abandon the current transfer.
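
A hedged sketch of this shutdown behavior for the first port; the busy flag and the bus_restart()/bus_abort() hooks are hypothetical, and the second port would be handled symmetrically.

#include <stdbool.h>

extern volatile bool first_dma_busy;  /* hypothetical status of the first DMA transfer circuit */
extern void bus_restart(int port);    /* hypothetical: restart the current bus transfer */
extern void bus_abort(int port);      /* hypothetical: abandon the current bus transfer */

void on_first_host_interface_closed(bool prefer_restart)
{
    if (first_dma_busy) {             /* a transfer is in flight on the bus */
        if (prefer_restart)
            bus_restart(1);
        else
            bus_abort(1);
    }
    /* the second host interface is handled symmetrically */
}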

The circuit and principle by which the dual-port NVMe controller processes write commands have been described above; the circuit and principle for processing read commands are described below. A read command indicates a data transfer from the storage device to the host, which also comprises two steps: reading the data from the NVM into the DRAM through the back-end module, and moving the data from the memory of the storage device to the memory of the host through the DMA transfer circuit.

Fig. 5 shows an operation diagram of the dual-port NVMe controller processing a read command. In fig. 5, the first host command processing branch includes the first host interface, the first SGL/PRP unit, and the first DMA transfer circuit, and the second host command processing branch includes the second host interface, the second SGL/PRP unit, and the second DMA transfer circuit. The structure of the circuit is similar to that shown in fig. 4 and is not described in detail. Given the characteristics of read commands, the first and second host command processing branches may share a read initiation circuit. The following describes the process by which the first host command processing branch processes a read command sent by the first host.

As shown in fig. 5, the first host transmits a first read command to the storage device through the first host interface, and the first host interface transfers the first read command to the shared memory for storage, represented as process (1). The PRP/SGL field of the first read command is extracted and the command is provided to the first SGL/PRP unit, represented as process (2). If the first read command carries an SGL, the SGL is cached in the first SGL/PRP cache unit; if it carries an SGL pointer, the SGL is obtained from the first host through the first host interface and cached in the first SGL/PRP cache unit. This is represented as process (3). Next, one or more first DMA commands are generated from the SGL and stored in the shared memory, represented as process (4). After the first DMA commands have been generated, the first SGL/PRP unit notifies the read initiation circuit, represented as process (5), passing it a first DMA command index (e.g., a DMA command pointer) indicating the location of the first DMA commands in the shared memory.

The read initiation circuit receives the first DMA command index. It then accesses the back-end module and requests it to read the data indicated by the first DMA commands from the NVM into the storage device memory (DRAM), represented as process (6). The read initiation circuit waits for the back-end module to read the data indicated by the first DMA commands into the DRAM; when the data indicated by one or more first DMA commands has been read into the DRAM, the read initiation circuit learns of this, for example because the back-end module notifies it, or because it observes the state of the storage device memory. This is represented as process (7). In response to the data indicated by one or more first DMA commands having been read into the DRAM, the read initiation circuit provides the first DMA command index to the first DMA transfer circuit, represented as process (8). The first DMA transfer circuit fetches the corresponding first DMA command(s) from the shared memory according to the index, represented as process (9), and performs the data transfer operation, moving the data from the DRAM to the first host's memory, represented as process (10).
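
The read initiation circuit's role in processes (5) through (8) can be sketched as two event handlers: one forwards newly stored DMA commands to the back-end module, the other releases the DMA command index to the first DMA transfer circuit once the data is resident in DRAM. All callees are hypothetical stand-ins.

#include <stdint.h>

extern void backend_read_nvm(uint32_t dma_idx);  /* process (6): stage NVM data in DRAM */
extern void first_dma_start(uint32_t dma_idx);   /* process (8): hand index to transfer circuit */

void read_init_on_dma_stored(uint32_t dma_idx)   /* triggered by process (5) */
{
    backend_read_nvm(dma_idx);
}

void read_init_on_data_ready(uint32_t dma_idx)   /* triggered by process (7) */
{
    first_dma_start(dma_idx);  /* processes (9)/(10): DRAM -> first host memory follow */
}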

When the data transfer indicated by one first DMA command finishes, a notification of the end of the data transfer is generated, represented as process (11). In process (5), the read initiation circuit obtains, in addition to the first DMA command index, the first read command ID, which identifies the read command. In one embodiment, after a first DMA command has been processed, the corresponding information (for example, the ID of the read command to which that DMA command belongs) is fed back to the read initiation circuit, which can thereby identify which first read command the first DMA command corresponds to. When it determines that all first DMA commands corresponding to a first read command have been processed, it generates a notification that execution of the first read command is complete and notifies the first host, represented as process (12). While the first host is notified, the space occupied in the shared memory by the first read command and its first DMA commands may be released.

In the embodiment shown in fig. 5, the read initiation circuit learns that a DMA command has been stored because the first SGL/PRP unit notifies it whenever a new DMA command is written into the shared memory. In other embodiments, another circuit may detect the storage state of the shared memory and notify the read initiation circuit accordingly.

Similarly, after the second SGL/PRP unit generates second DMA commands from a second read command received through the second host interface and stores them in the shared memory, it also notifies the read initiation circuit and provides it with a second DMA command index, represented as process (5'). The read initiation circuit receives the second DMA command index, accesses the back-end module, and requests it to read the data indicated by the second DMA commands from the NVM into the storage device memory (DRAM), represented as process (6'). The read initiation circuit waits for the back-end module to read the data indicated by the second DMA commands into the DRAM; when the data indicated by one or more second DMA commands has been read into the DRAM, the read initiation circuit learns of this, represented as process (7'). In response, the read initiation circuit provides the second DMA command index to the second DMA transfer circuit, represented as process (8'). The second DMA transfer circuit fetches the corresponding second DMA command(s) from the shared memory according to the index, represented as process (9'), and performs the data transfer operation, moving the data from the storage device memory to the second host's memory, represented as process (10').

When the data transfer indicated by one second DMA command finishes, a notification of the end of the data transfer is generated, represented as process (11'). When the read initiation circuit determines that all second DMA commands corresponding to the second read command have been processed, it generates a notification that execution of the second read command is complete and notifies the second host, represented as process (12'). While the second host is notified, the space occupied in the shared memory by the second read command and its second DMA commands may be released.

In the embodiment of fig. 5, the first and second host command processing branches share a single read initiation circuit. The read initiation circuit may be implemented by hardware independent of the CPU, or by the CPU; it is preferably implemented with a CPU.

Alternatively, the first and second host command processing branches may each have their own read initiation circuit: the first host command processing branch includes a first read initiation circuit, and the second host command processing branch includes a second read initiation circuit.

Fig. 6 shows a circuit schematic in which the first and second host command processing branches use a first read initiation circuit and a second read initiation circuit, respectively. As shown in fig. 6, the first read initiation circuit interacts with the back-end module: it accesses the back-end module and requests it to process the data indicated by the first DMA commands corresponding to a first read command, represented as process (6), and the back-end module notifies the first read initiation circuit after reading the data into the DRAM, represented as process (7). The second read initiation circuit likewise interacts with the back-end module: it requests the back-end module to process the data indicated by the second DMA commands corresponding to a second read command, represented as process (6'), and the back-end module notifies the second read initiation circuit after reading the data into the DRAM, represented as process (7'). Note, first, that the back-end module is selective in its notifications: it notifies the first read initiation circuit when the data indicated by a first DMA command has been read out, and the second read initiation circuit when the data indicated by a second DMA command has been read out. Second, the first and second host command processing branches are independent of each other; when a host interface is closed, the hardware corresponding to that interface is shut down and the corresponding storage space is released. The embodiment shown in fig. 6 handles data processing well when both the first and second hosts are online. In practical application scenarios, however, it often happens that one host is offline or has no interaction with the storage device, i.e., one host interface is closed and the dual-port NVMe controller operates in single-port mode. In single-port mode, the corresponding hardware may be shut down in the manner shown in fig. 6.

In single-port mode, however, to improve the processing efficiency of the NVMe controller and avoid wasting hardware resources, part of the hardware of the corresponding host command processing branch may be shut down while another part is combined with the other branch to process NVMe commands jointly. For example, when the first host goes offline, the first host interface in the first host command processing branch is closed, while the first read initiation circuit of the first branch and the second read initiation circuit of the second branch together service the DMA commands corresponding to second NVMe commands.

Fig. 7 shows an operation diagram of another NVMe controller provided in an embodiment of the present application. The NVMe controller of fig. 7 differs from that of fig. 6 in that the second read initiation circuit of the second host command processing branch is coupled not only to the second branch but also to the first SGL/PRP unit of the first branch, denoted (a), and to the first DMA transfer circuit, denoted (b). When both the first and second hosts are online, the NVMe controller of fig. 7 operates as in fig. 6, and the details are not repeated. When only the first host is online, however, the NVMe controller of fig. 7 operates differently from fig. 6, as follows:

When the second host interface is closed, the second host interface and the second SGL/PRP unit can be shut down, but the second read initiation circuit keeps running. Because of the coupling relationships (a) and (b), the second read initiation circuit and the first read initiation circuit can both service the first DMA commands corresponding to first NVMe commands: each responds to the notification that the first SGL/PRP unit has finished storing a first DMA command, acquires the index of that first DMA command, and, after the back-end module has read the data indicated by the first DMA command into the DRAM, triggers the sending of the first DMA command index to the first DMA transfer circuit, which performs the data transfer from the DRAM to the host. When the dual-port NVMe controller is in single-port mode, using two read initiation circuits improves the response efficiency for first NVMe commands.
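
A minimal sketch of this single-port arrangement, assuming both read initiation circuits drain one shared queue of pending first DMA command indexes; the queue API is a hypothetical illustration.

#include <stdint.h>
#include <stdbool.h>

extern bool pending_pop(uint32_t *dma_idx);      /* hypothetical shared pending-index queue */
extern void backend_read_nvm(uint32_t dma_idx);  /* request: NVM -> DRAM for this command */

/* Run concurrently by the first AND the second read initiation circuit. */
void read_init_service(void)
{
    uint32_t idx;
    while (pending_pop(&idx))   /* couplings (a)/(b) expose first-branch work to both circuits */
        backend_read_nvm(idx);
}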

Fig. 8 shows an operation diagram of another NVMe controller provided in an embodiment of the present application. As shown in fig. 8, the first read initiation circuit of the first host command processing branch is coupled not only to the first SGL/PRP unit and the first DMA transfer circuit of the first branch, but also to the second SGL/PRP unit of the second branch, denoted (c), and to the second DMA transfer circuit, denoted (d). In dual-port mode, the NVMe controller of fig. 8 operates similarly to those of figs. 6 and 7. When the first host is disconnected and the second host remains online, however, the NVMe controller of fig. 8 operates differently from those of figs. 6 and 7, as follows:

The first host interface and the first SGL/PRP unit are shut down, and the second read initiation circuit and the first read initiation circuit can both service the second DMA commands corresponding to second NVMe commands: each responds to the notification that the second SGL/PRP unit has finished storing a second DMA command, acquires the index of that second DMA command, and, after the back-end module has read the data indicated by the second DMA command into the DRAM, triggers the sending of the second DMA command index to the second DMA transfer circuit, which performs the data transfer from the DRAM to the host. When the dual-port NVMe controller is in single-port mode, using two read initiation circuits improves the response efficiency for second NVMe commands.

In addition, building on the embodiments of figs. 7 and 8, in one embodiment the dual-port NVMe controller can have both the coupling relationships (a) and (b) shown in fig. 7 and the coupling relationships (c) and (d) shown in fig. 8. In this embodiment, the first read initiation circuit of the first host command processing branch can serve the second host command processing branch, the second read initiation circuit of the second host command processing branch can serve the first host command processing branch, and the first host interface and the second host interface are peers of equal status.

According to one aspect of the application, the application also provides a control method for the dual-port NVMe controller. Fig. 9 shows the flow of the control method, which includes steps S101, S102, and S103. It should be noted that the numbering of steps S101, S102, and S103 does not dictate the order of execution; the steps may be executed in a different order or simultaneously.

The method shown in fig. 9 includes: step S101, connecting the first host through the first host interface and the second host through the second host interface; step S102, processing the first NVMe command through the first host command processing branch and the second NVMe command through the second host command processing branch; and step S103, storing the first NVMe command and the second NVMe command in the at least one shared memory.

According to the method provided by this embodiment of the application, NVMe commands from the first host and from the second host are processed by two independent host command processing branches, which reduces resource contention. Furthermore, energy consumption can be reduced by shutting down the hardware corresponding to a host interface when that interface is closed.

Fig. 10, 11 and 12 show several different embodiments of step S102.

The method of fig. 10 includes: step S201, in response to receiving the first NVMe command, acquiring the SGL and/or PRP corresponding to the first NVMe command, generating one or more first DMA commands according to the SGL and/or PRP, and storing them in the shared memory; and, in response to receiving the second NVMe command, acquiring the SGL and/or PRP corresponding to the second NVMe command, generating one or more second DMA commands according to the SGL and/or PRP, and storing them in the shared memory.

Then, step S202 is executed: in response to completion of storage of the one or more first DMA commands corresponding to a first NVMe command, a first DMA command index is sent; and in response to completion of storage of the one or more second DMA commands corresponding to a second NVMe command, a second DMA command index is sent.

Finally, step S203 is executed: one or more first DMA commands are fetched from the shared memory according to the first DMA command index, and data is moved from the first host according to them; one or more second DMA commands are fetched from the shared memory according to the second DMA command index, and data is moved from the second host according to them.

The method of fig. 10 applies when the NVMe command is a write command; the data indicated by the first NVMe command is moved into the memory (e.g., DRAM) of the storage device.

The method of fig. 11 includes: step S301, in response to the first NVMe command, acquiring and parsing the first NVMe command to obtain the corresponding SGL and/or PRP, generating one or more first DMA commands according to the SGL and/or PRP, and storing them in the shared memory; and, in response to the second NVMe command, acquiring and parsing the second NVMe command to obtain the corresponding SGL and/or PRP, generating one or more second DMA commands according to the SGL and/or PRP, and storing them in the shared memory.

Then, step S302 is executed: the read initiation circuit requests the back-end module to move the data indicated by the one or more first DMA commands corresponding to the first NVMe command from the NVM to the storage device memory (DRAM), and requests the back-end module to move the data indicated by the one or more second DMA commands corresponding to the second NVMe command from the NVM to the storage device memory (DRAM).

Finally, step S303 is executed: in response to the data indicated by at least one first DMA command having been moved into the memory of the storage device, the first DMA command index is sent, and the first DMA transfer circuit fetches and processes the first DMA command according to the index; in response to the data indicated by at least one second DMA command having been moved into the memory of the storage device, the second DMA command index is sent, and the second DMA transfer circuit fetches and processes the second DMA command according to the index.

The method of fig. 12, comprising steps S401, S402, and S403, differs from the method of fig. 11 in that, in step S402, mutually independent read initiation circuits separately request the back-end module; and, in step S403, the mutually independent read initiation circuits respond separately: one, in response to the data indicated by the first DMA command having been moved into the DRAM, instructs the first DMA transfer circuit to process the first DMA command, and the other, in response to the data indicated by the second DMA command having been moved into the DRAM, instructs the second DMA transfer circuit to process the second DMA command.

In summary, the methods of figs. 11 and 12 apply when the NVMe command is a read command. In the method of fig. 11, one read initiation circuit interacts with the back-end module, the first DMA transfer circuit, and the second DMA transfer circuit; in the method of fig. 12, a first read initiation circuit interacts with the back-end module and the first DMA transfer circuit, while a second read initiation circuit interacts with the back-end module and the second DMA transfer circuit. These methods can be understood with reference to figs. 5, 6, 7, and 8.

According to an aspect of the present application, embodiments of the present application further provide a storage device, referring to the storage device 102 shown in figs. 1A and 1B. The storage device 102 includes an interface 103, a control component 104, one or more NVM chips 105, and a DRAM 110. The control component 104 includes the NVMe controller described in the above embodiments; since the NVMe controller has been described in detail above, it is not described again here.

According to an aspect of the present application, an electronic device is further provided, comprising a processor and a storage device, the storage device being the one described in the above embodiments. Since it has been described in detail above, it is not described again here.

It is noted that, for the sake of brevity, this application describes some methods and their embodiments as a series of acts and combinations thereof, but those skilled in the art will appreciate that the aspects of the application are not limited by the order of the acts described. Accordingly, certain steps may be performed in other orders or simultaneously in accordance with the disclosure or teachings herein. Further, the embodiments described herein admit alternative implementations; the acts or modules referred to are not necessarily required to implement the solutions described herein. In addition, the descriptions of the various embodiments emphasize different aspects; portions not described in detail in one embodiment may be found in the related descriptions of other embodiments.

In particular implementations, based on the disclosure and teachings of the present application, those of ordinary skill in the art will appreciate that the embodiments disclosed herein may be implemented in other ways not described here. For example, the units in the foregoing embodiments of the electronic device or apparatus are divided according to logical function, and other divisions are possible in an actual implementation; multiple units or components may be combined or integrated into another system, or certain features or functions of a unit or component may be selectively disabled. As regards connections between different units or components, the connections discussed above in conjunction with the figures may be direct or indirect couplings. In some scenarios, such direct or indirect coupling involves a communication connection through an interface, where the communication interface may support electrical, optical, acoustic, magnetic, or other forms of signal transmission.

While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application. It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
