Method for reducing IO command conflicts caused by locks, and storage device

Document No.: 1708464    Publication date: 2019-12-13

Note: This technique, "Method for reducing IO command conflicts caused by locks, and storage device", was designed and created by 张志青, 秦汉张 and 徐凯 on 2018-06-06. Its main content is as follows: the application discloses a method and a storage device for reducing IO command conflicts caused by locks. The disclosed IO command processing method includes the following steps: acquiring a first IO command; if a second IO command exists in the wait queue corresponding to the address accessed by the first IO command, acquiring and processing the second IO command from the wait queue; and adding the first IO command to the wait queue.

1. An IO command processing method, characterized by comprising the following steps:

acquiring a first IO command;

if a second IO command exists in the wait queue corresponding to the address accessed by the first IO command, acquiring and processing the second IO command from the wait queue; and

adding the first IO command to the wait queue.

2. A command processing method according to claim 1, characterized in that if there is no second IO command to be processed in the wait queue, the first IO command is processed.

3. The command processing method of any one of claims 1-2, wherein if a second IO command is pending in the wait queue, the second IO command is prioritized for processing, and the first IO command is added to the tail of the wait queue.

4. A command processing method according to any one of claims 1 to 3, wherein, in the direction of address increment, the address space accessed by the IO command is divided into a plurality of regions, each region being mapped to one of a plurality of wait queues.

5. An IO command processing method, characterized by comprising the following steps:

selecting an IO command source, and acquiring an IO command from the selected IO command source;

requesting a lock of the FTL table entry corresponding to the logical address according to the logical address accessed by the IO command;

if the lock request fails, judging whether the IO command comes from a wait queue; and

if the IO command comes from the wait queue, leaving the IO command at the head of the wait queue.

6. A command processing method according to claim 5, characterized in that if a small logical address range is accessed collectively by IO commands in a short time, a higher priority is set for the wait queue.

7. The command processing method of claim 5, wherein if the IO command is not from a wait queue, determining the wait queue corresponding to the logical address according to the logical address accessed by the IO command, adding the IO command to the tail of that wait queue, and selecting an IO command source again to obtain an IO command.

8. An IO command processing method, characterized by comprising the following steps:

splitting an IO command to be processed into a plurality of subcommands;

if a second subcommand exists in the wait queue corresponding to the address accessed by the first subcommand, acquiring and processing the second subcommand from the wait queue; and

adding the first subcommand to the wait queue.

9. The IO command processing method according to claim 8, wherein a subcommand source is selected, and a subcommand is acquired from the selected subcommand source;

if the subcommand temporarily cannot be processed and the subcommand comes from the wait queue, the subcommand is left at the head of the wait queue.

10. A storage device comprising a control component and a nonvolatile memory, the control component being configured to execute the IO command processing method according to any one of claims 1 to 9.

Technical Field

The present application relates to storage devices, and more particularly, to reducing conflicts caused by requesting locks when multiple IO commands are processed concurrently in a storage device.

Background

FIG. 1 is a block diagram of a prior-art solid-state storage device. As shown in FIG. 1, the storage device 102 is coupled to a host to provide storage capability to the host. The host and the storage device 102 may be coupled in a variety of ways, including, but not limited to, SATA, IDE, USB, PCIE, NVMe (NVM Express), SAS, Ethernet, Fibre Channel, or a wireless communication network. The host may be an information processing device capable of communicating with the storage device in the manner described above, such as a personal computer, tablet, server, portable computer, network switch, router, cellular telephone, or personal digital assistant. The storage device 102 includes an interface 103, a control component 104, one or more NVM (Non-Volatile Memory) chips 105, and optionally a firmware memory 110. The interface 103 may be adapted to exchange data with the host by means such as SATA, IDE, USB, PCIE, NVMe, SAS, Ethernet, or Fibre Channel. The control component 104 is used to control data transfer among the interface 103, the NVM chips 105, and the firmware memory 110, and is also used for memory management, mapping of host logical addresses to flash physical addresses, wear leveling, bad block management, and the like. The control component 104 can be implemented in a variety of ways, including software, hardware, firmware, or a combination thereof. The control component 104 may be in the form of an FPGA (Field-Programmable Gate Array), an ASIC (Application Specific Integrated Circuit), or a combination thereof. The control component 104 may also include a processor or a controller. The control component 104 loads firmware from the firmware memory 110 at runtime. The firmware memory 110 may be NOR flash, ROM, or EEPROM, or may be part of the NVM chips 105. Common NVMs include NAND flash, phase change memory, FeRAM, and MRAM.

A Target is one or more logical units (LUNs) sharing a chip enable (CE) signal within a NAND flash package. One or more dies may be included within a NAND flash package. Typically, a logical unit corresponds to a single die. A logical unit may include a plurality of planes. Multiple planes within a logical unit may be accessed in parallel, while multiple logical units within a NAND flash chip may execute commands and report status independently of one another. The meanings of target, logical unit, LUN, and plane are provided in the "Open NAND Flash Interface Specification (Revision 3.0)", available from http://www.micron.com//media/Documents/Products/Other%20Documents/ONFI3_0gold.

Data is typically stored and read on the storage medium in units of pages, while data is erased in units of blocks; a block contains a plurality of pages. Pages on the storage medium (referred to as physical pages) have a fixed size, e.g., 17664 bytes, although physical pages may also have other sizes. A physical page may include a plurality of data segments of a specified size, e.g., 4096 or 4416 bytes (for example, four 4416-byte data segments fill a 17664-byte physical page).

In a solid-state storage device, an FTL (Flash Translation Layer) is used to maintain the mapping information from logical addresses to physical addresses. The logical addresses constitute the storage space of the solid-state storage device as perceived by upper-level software such as an operating system. A physical address is an address used to access a physical storage location of the solid-state storage device. In the prior art, address mapping may also be implemented through an intermediate address form; for example, a logical address is mapped to an intermediate address, which in turn is further mapped to a physical address.

A table structure that stores the mapping information from logical addresses to physical addresses is called an FTL table. FTL tables are important metadata in a solid-state storage device. Typically, each entry of the FTL table records an address mapping relationship at the granularity of a data page of the solid-state storage device. The FTL table includes a plurality of FTL table entries (or entries).

IO commands from the host include, for example, read commands and write commands. When processing a read command from the host, the solid-state storage device obtains a corresponding physical address from the FTL table by using a logical address carried in the read command, sends a read request to the NVM chip according to the physical address, and receives data output by the NVM chip in response to the read request. When processing a write command from a host, the solid-state storage device allocates a physical address to the write command, records a corresponding relation between a logical address of the write command and the allocated physical address in an FTL table, and sends a write request to an NVM chip according to the allocated physical address.
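To make the translation step concrete, the following C sketch treats the FTL table as a flat array indexed by logical page number; the table size, entry width, and function names are assumptions for illustration only, not the structure used by the disclosed storage device.

    /* Minimal FTL lookup/update sketch (illustrative assumptions, not the
     * patent's firmware): one 32-bit entry per 4 KB logical data page. */
    #include <stdint.h>

    #define FTL_ENTRIES (1u << 20)          /* example: 1M entries of 4 KB pages */
    #define PPA_INVALID UINT32_MAX          /* marker for an unmapped page */

    static uint32_t ftl_table[FTL_ENTRIES]; /* logical page number -> physical page address */

    /* Read path: translate the logical page number carried by a read command. */
    uint32_t ftl_lookup(uint32_t lpn)
    {
        return (lpn < FTL_ENTRIES) ? ftl_table[lpn] : PPA_INVALID;
    }

    /* Write path: record the mapping from the logical page to the allocated
     * physical page before the write request is sent to the NVM chip. */
    void ftl_update(uint32_t lpn, uint32_t allocated_ppa)
    {
        if (lpn < FTL_ENTRIES)
            ftl_table[lpn] = allocated_ppa;
    }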

Disclosure of Invention

In a storage device, multiple IO commands are processed simultaneously, and almost every IO command requires access to the FTL table. A lock mechanism is used to eliminate conflicts caused by two or more IO commands accessing the same FTL entry. However, in some cases, because the same lock is requested frequently, some IO commands fail to obtain the lock for a long time and their processing time increases significantly, which adversely affects the user experience.

Embodiments of the present application ensure that IO commands can be processed within a specified time, improving the quality of service of the storage device.

According to a first aspect of the present application, there is provided a first IO command processing method according to the first aspect of the present application, including the steps of: acquiring a first IO command; if a second IO command exists in the wait queue corresponding to the address accessed by the first IO command, acquiring and processing the second IO command from the wait queue; and adding the first IO command to the wait queue.

According to the first IO command processing method of the first aspect of the present application, there is provided the second IO command processing method of the first aspect of the present application, wherein if there is no to-be-processed second IO command in the wait queue, the first IO command is processed.

According to the second IO command processing method of the first aspect of the present application, there is provided the third IO command processing method of the first aspect of the present application, wherein the lock of the FTL entry corresponding to the logical address accessed by the first IO command is requested according to that logical address, and processing of the first IO command continues in response to the lock being obtained.

According to the third IO command processing method of the first aspect of the present application, there is provided the fourth IO command processing method of the first aspect of the present application, wherein if the lock request fails, the first IO command is added to the tail of the wait queue.

According to the IO command processing method of the first aspect of the present application, there is provided the fifth IO command processing method of the first aspect of the present application, wherein if a second IO command to be processed is in the wait queue, the second IO command is preferentially processed, and the first IO command is added to the tail of the wait queue.

According to any one of the first to fifth IO command processing methods of the first aspect of the present application, there is provided the sixth IO command processing method of the first aspect of the present application, wherein, to process the second IO command, the lock of the corresponding FTL entry is requested according to the logical address accessed by the second IO command.

According to the sixth IO command processing method of the first aspect of the present application, there is provided the seventh IO command processing method of the first aspect of the present application, wherein, in response to obtaining the lock for the logical address accessed by the second IO command, the second IO command is removed from the wait queue.

According to the sixth IO command processing method of the first aspect of the present application, there is provided the eighth IO command processing method of the first aspect of the present application, wherein if the lock request for the logical address accessed by the second IO command fails, the second IO command is retained at the head of the wait queue.

According to any one of the first to eighth IO command processing methods of the first aspect of the present application, there is provided the ninth IO command processing method of the first aspect of the present application, wherein the second IO command is obtained from the head of one of one or more wait queues.

According to any one of the first to ninth IO command processing methods of the first aspect of the present application, there is provided the tenth IO command processing method of the first aspect of the present application, wherein, in the direction of increasing address, the address space accessed by IO commands is divided into a plurality of regions, and each region is mapped to one of a plurality of wait queues.

According to a tenth IO command processing method of the first aspect of the present application, there is provided the eleventh IO command processing method of the first aspect of the present application, wherein each of the logical address regions is mapped to one of the wait queues in turn.

According to a tenth or eleventh IO command processing method of the first aspect of the present application, there is provided the twelfth IO command processing method of the first aspect of the present application, wherein each of the logical address areas has a configurable size.

According to the tenth IO command processing method of the first aspect of the present application, there is provided the thirteenth IO command processing method of the first aspect of the present application, wherein the logical address space is divided into the same number of regions as there are wait queues, and each region is mapped to one wait queue.

According to a tenth IO command processing method of the first aspect of the present application, there is provided the fourteenth IO command processing method of the first aspect of the present application, wherein IO commands are mapped to one of the wait queues in turn regardless of a logical address range of the IO commands.

According to any one of the tenth to fourteenth IO command processing methods of the first aspect of the present application, there is provided the fifteenth IO command processing method of the first aspect of the present application, wherein the logical address regions are mapped to the respective wait queues in a random manner.

According to a second aspect of the present application, there is provided a first IO command processing method according to the second aspect of the present application, including the steps of: selecting an IO command source, and acquiring an IO command from the selected IO command source; requesting a lock of the FTL table entry corresponding to the logical address according to the logical address accessed by the IO command; if the lock request fails, judging whether the IO command comes from a wait queue; and if the IO command comes from the wait queue, leaving the IO command at the head of the wait queue.

According to the first IO command processing method of the second aspect of the present application, there is provided the second IO command processing method of the second aspect of the present application, wherein each IO command source has a priority, and the IO command source is selected according to the priority.

According to the first or second IO command processing method of the second aspect of the present application, there is provided the third IO command processing method of the second aspect of the present application, wherein the wait queue has a higher priority than the other IO command sources.

According to any one of the first to third IO command processing methods of the second aspect of the present application, there is provided the fourth IO command processing method of the second aspect of the present application, wherein if the lock request succeeds, the IO command is removed from the wait queue.

According to any one of the first to third IO command processing methods of the second aspect of the present application, there is provided the fifth IO command processing method of the second aspect of the present application, wherein, in response to leaving the IO command at the head of the wait queue, an IO command source is selected again to obtain an IO command.

According to the IO command processing method of the second aspect of the present application, there is provided the sixth IO command processing method of the second aspect of the present application, wherein if the lock request fails, requesting the lock for the IO command is retried repeatedly until the lock is successfully obtained, and the IO command is removed from the wait queue.

According to any one of the first to fifth IO command processing methods of the second aspect of the present application, there is provided the seventh IO command processing method of the second aspect of the present application, wherein when no IO command at the head of the wait queue can be processed, an IO command is acquired from another source.

According to any one of the second to seventh IO command processing methods of the second aspect of the present application, there is provided the eighth IO command processing method of the second aspect of the present application, wherein the CPU processes M IO commands from other command sources for every N IO commands it processes from the wait queue, where M and N are both integers and N is greater than M.

According to the first IO command processing method of the second aspect of the present application, there is provided the ninth IO command processing method of the second aspect of the present application, wherein the IO command sources are selected in turn, or in turn according to weights.

According to the second IO command processing method of the second aspect of the present application, there is provided the tenth IO command processing method of the second aspect of the present application, wherein if IO commands in a short time collectively access a small logical address range, a higher priority is set for the wait queue.

According to a first IO command processing method of a second aspect of the present application, there is provided an eleventh IO command processing method of the second aspect of the present application, wherein if the IO command is not from a waiting queue, determining the waiting queue corresponding to the logical address according to the logical address accessed by the IO command, adding the IO command to a tail of the waiting queue, and selecting an IO command source again to obtain the IO command.

According to a third aspect of the present application, there is provided a first IO command processing method according to the third aspect of the present application, including the steps of: splitting an IO command to be processed into a plurality of subcommands; if a second subcommand exists in the wait queue corresponding to the address accessed by the first subcommand, acquiring and processing the second subcommand from the wait queue; and adding the first subcommand to the wait queue.

According to the first IO command processing method of the third aspect of the present application, there is provided the second IO command processing method of the third aspect of the present application, wherein the address of each subcommand corresponds to one entry of the FTL table.

According to the first or second IO command processing method of the third aspect of the present application, there is provided the third IO command processing method of the third aspect of the present application, wherein the IO command is completely processed when all of its subcommands have been processed.

According to the first IO command processing method of the third aspect of the present application, there is provided the fourth IO command processing method of the third aspect of the present application, wherein a subcommand source is selected and a subcommand is obtained from the selected subcommand source; if the subcommand temporarily cannot be processed and the subcommand comes from the wait queue, the subcommand is left at the head of the wait queue.

According to the fourth IO command processing method of the third aspect of the present application, there is provided the fifth IO command processing method of the third aspect of the present application, wherein if the subcommand temporarily cannot be processed and the subcommand is not from the wait queue, the subcommand is added to the tail of the wait queue corresponding to the address accessed by the subcommand.

According to a fourth aspect of the present application, there is provided the first storage device according to the fourth aspect of the present application, wherein the first storage device includes a control unit and a nonvolatile memory, and the control unit is configured to execute the IO command processing method according to the first to third aspects of the present application.

Drawings

In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments described in the present application, and other drawings can be obtained by those skilled in the art from these drawings without inventive effort.

FIG. 1 is a schematic diagram of the internal structure of a prior-art storage device as provided in the present application;

FIG. 2 is a schematic diagram of an internal structure of a control unit of a storage device according to an embodiment of the present application;

FIG. 3 is a diagram illustrating mapping of logical addresses accessed by IO commands to a wait queue according to an embodiment of the present disclosure;

FIG. 4 is a schematic diagram illustrating an IO command processing according to an embodiment of the present disclosure;

FIG. 5 is a flowchart illustrating an embodiment of an IO command processing method;

FIG. 6 is a flowchart illustrating the processing of an IO command according to yet another embodiment of the present application;

Reference numerals: 102 - storage device; 103 - interface; 104 - control component; 105 - non-volatile memory; 110 - external memory; 210 - host interface; 220 - media interface; 230 - command distribution unit; 240 - CPU; 250 - CPU; 260 - FTL table; 270 - FTL table.

Detailed Description

The technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

FIG. 2 illustrates a block diagram of a control component of the storage device. Control component 104 includes a host interface 210, a command distribution unit 230, a plurality of CPUs (e.g., CPU 240 and CPU 250) for processing IO commands, and a media interface 220 for accessing NVM chip 105.

The host interface 210 is used to exchange commands and data with a host. In one example, the host and the storage device communicate via NVMe/PCIe protocol, and the host interface 210 processes the PCIe protocol data packet, extracts the NVMe protocol command, and returns a processing result of the NVMe protocol command to the host.

The command distribution unit 230 is coupled to the host interface 210, receives IO commands sent from the host to the storage device, and assigns each IO command to one of the CPUs for processing. The command distribution unit 230 may be implemented by a CPU or by dedicated hardware. The control component 104 is also coupled to an external memory (e.g., DRAM) 110. A portion of the memory 110 is used to store FTL tables (e.g., FTL table 260 and FTL table 270). In the example of FIG. 2, the complete logical address space provided by the storage device is divided into multiple portions, each portion being covered by one of the FTL tables (FTL table 260 or FTL table 270). Alternatively, a single FTL table is used to map the complete logical address space of the storage device.

Optionally, the FTL tables correspond one-to-one to the CPUs that process IO commands. For example, FTL table 260 is accessed only by CPU 240, and FTL table 270 is accessed only by CPU 250. According to the logical address accessed by an IO command, the command distribution unit 230 sends the IO command to the CPU that manages the FTL table covering that logical address.
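As a sketch of this dispatch rule (the partition size, CPU count, and function name are illustrative assumptions, not values disclosed by the application), the command distribution unit can select the target CPU purely from the logical page number:

    /* Dispatch sketch: route an IO command to the CPU whose FTL table covers
     * its logical page number (all constants are illustrative assumptions). */
    #include <stdint.h>

    #define NUM_IO_CPUS        2u          /* e.g., CPU 240 and CPU 250 */
    #define LPNS_PER_PARTITION (1u << 19)  /* half of a 1M-page logical space */

    /* Index of the CPU managing the FTL table that contains this logical page. */
    unsigned cpu_for_lpn(uint32_t lpn)
    {
        return (lpn / LPNS_PER_PARTITION) % NUM_IO_CPUS;
    }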

Still alternatively, either of CPU 240 and CPU 250 may access the complete FTL table.

It will be appreciated that one or more CPUs may be provided to process IO commands.

For a write command, the data to be written is transferred from the host to the NVM chip 105 through the host interface 210 under the direction of, for example, the CPU 240. The CPU 240 allocates a physical address on the NVM chip 105 for the write command, and records the correspondence between the logical address of the write command and the allocated physical address in an FTL table entry.

For a read command, under the direction of the CPU 240, for example, the FTL table is accessed according to the logical address of the read command to obtain the physical address corresponding to the logical address, and data is read from the NVM chip 105 according to the physical address and transmitted to the host through the host interface 210.

The CPU takes some time to process each IO command and uses some resources of the storage device (e.g., memory to store data, cache to record IO command context, and/or access to FTL entries). Some stages of IO command processing (e.g., accessing the FTL table, accessing the NVM chip, transferring data with the host) are executed asynchronously. During the asynchronous processing of an IO command, the CPU processes other IO commands, so that multiple IO commands are processed concurrently even with a single CPU.

IO commands whose processing has started but has not yet finished in the solid-state storage device are referred to as in-process IO commands. Conflicts caused by multiple in-process IO commands accessing the same logical address need to be avoided.

According to an embodiment of the present application, a lock is provided for each entry of the FTL table to avoid conflicts caused by multiple in-process IO commands accessing the same FTL entry. To access an FTL entry, the lock corresponding to the entry to be accessed is requested first. The lock indicates whether the FTL entry corresponding to it is in use. A lock that is already held cannot be acquired again until it is released, thereby ensuring that at most one IO command is processed for a given FTL entry at any time. Alternatively, the lock of the FTL entry is requested only when processing write commands, while for read commands the lock of the FTL entry is not required.
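A minimal sketch of such a per-entry lock is shown below, using a non-blocking try-lock per FTL entry; the atomic array and function names are assumptions for illustration, not the locking primitive actually used by the disclosed device.

    /* Per-FTL-entry try-lock sketch (illustrative assumptions, C11 atomics). */
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define FTL_ENTRIES (1u << 20)

    static atomic_bool ftl_lock[FTL_ENTRIES];   /* false = free, true = held */

    /* Returns true if the lock for the entry was acquired; the caller may then
     * access the FTL entry. Returns false if another in-process IO command
     * already holds it (the lock request "fails" in the terms used above). */
    bool ftl_entry_trylock(uint32_t lpn)
    {
        bool expected = false;
        return atomic_compare_exchange_strong(&ftl_lock[lpn], &expected, true);
    }

    /* Release the lock once the IO command's access to this entry is finished. */
    void ftl_entry_unlock(uint32_t lpn)
    {
        atomic_store(&ftl_lock[lpn], false);
    }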

With continued reference to FIG. 2, CPU 240 is coupled to wait queue 1, while CPU 250 is coupled to wait queue 2. For example, after receiving an IO command from the command distribution unit 230, if requesting the lock of the FTL entry for the IO command fails, the CPU 240 adds the IO command to wait queue 1. The CPU 240 may then continue to process other IO commands obtained from the command distribution unit 230. CPU 240 also obtains IO commands from wait queue 1 and again requests the lock of the FTL entry for the obtained IO command. If requesting the lock of the FTL entry fails again for the IO command acquired from wait queue 1, the IO command is left in wait queue 1. If the lock of the FTL entry is successfully requested for the IO command acquired from wait queue 1, the CPU 240 removes the IO command from the wait queue and processes it.
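The wait queue itself can be as simple as a singly linked FIFO; the following sketch (structure and field names are assumptions for illustration) shows the two operations relied on above: appending a command at the tail and removing the command at the head once its lock is obtained.

    /* Wait queue sketch: a singly linked FIFO of pending IO commands. */
    #include <stddef.h>
    #include <stdint.h>

    struct io_cmd {
        uint64_t       lba;   /* logical byte address accessed by the command */
        uint32_t       lpn;   /* logical page number, i.e., FTL entry index */
        struct io_cmd *next;
    };

    struct wait_queue {
        struct io_cmd *head;  /* commands are retried from the head ...       */
        struct io_cmd *tail;  /* ... and appended at the tail on lock failure */
    };

    /* Append a command whose lock request failed to the tail of the queue. */
    void wq_push_tail(struct wait_queue *q, struct io_cmd *cmd)
    {
        cmd->next = NULL;
        if (q->tail)
            q->tail->next = cmd;
        else
            q->head = cmd;
        q->tail = cmd;
    }

    /* Remove and return the head command once its lock has been obtained;
     * if the lock request fails again, the command is simply left in place. */
    struct io_cmd *wq_pop_head(struct wait_queue *q)
    {
        struct io_cmd *cmd = q->head;
        if (cmd) {
            q->head = cmd->next;
            if (!q->head)
                q->tail = NULL;
            cmd->next = NULL;
        }
        return cmd;
    }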

To prevent the processing time of an IO command in the wait queue from increasing significantly because requesting the lock of its FTL entry fails repeatedly, in an embodiment according to the present application the CPU uses an appropriate policy to choose between IO commands from the wait queue and IO commands from the command distribution unit 230.

In one example, when the wait queue has IO commands pending, the CPU acquires IO commands from the wait queue in preference to the command distribution unit 230. When an IO command obtained from the wait queue cannot obtain the lock of its FTL entry, an IO command from the command distribution unit 230 is processed instead. If, at this point, the IO command acquired from the command distribution unit 230 cannot obtain the lock of its FTL entry either, that IO command is added to the tail of the wait queue. IO commands are always taken from the head of the wait queue.

In yet another example, the CPU takes IO commands from the wait queue in preference to the command distribution unit 230. When the wait queue has IO commands to process, the CPU processes M IO commands from the command distribution unit 230 for every N IO commands it processes from the wait queue, where M and N are both integers and N is greater than M.
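Such an N:M policy can be implemented with a simple rotating slot counter, as in the sketch below (the concrete values of N and M, and the names, are illustrative assumptions):

    /* N:M source selection sketch: for every N commands taken from the wait
     * queue, take M from the command distribution unit (values are examples). */
    enum cmd_source { SRC_WAIT_QUEUE, SRC_DISTRIBUTION_UNIT };

    #define N_FROM_WAIT_QUEUE 4
    #define M_FROM_DISPATCH   1

    /* Called once per scheduling decision while the wait queue is non-empty. */
    enum cmd_source pick_source(void)
    {
        static unsigned slot = 0;
        enum cmd_source src = (slot < N_FROM_WAIT_QUEUE)
                                  ? SRC_WAIT_QUEUE
                                  : SRC_DISTRIBUTION_UNIT;
        slot = (slot + 1) % (N_FROM_WAIT_QUEUE + M_FROM_DISPATCH);
        return src;
    }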

FIG. 3 is a diagram illustrating the mapping of logical addresses accessed by IO commands to wait queues according to an embodiment of the present application. The storage device exposes accessible logical addresses to the host. In FIG. 3, in the direction of increasing logical address (LBA), the logical address space is divided into a plurality of regions (302, 304, ..., 324), each of which is mapped to one of a plurality of wait queues (wait queue 1, wait queue 2, wait queue 3, and wait queue 4).

Optionally, each logical address region is mapped to one of the waiting queues in turn. For example, region 302 is mapped to wait queue 1, region 304 is mapped to wait queue 2, region 306 is mapped to wait queue 3, and region 308 is mapped to wait queue 4. Next, wrap around occurs, mapping region 310 to wait queue 1, so that IO commands from the host are mapped to each wait queue as uniformly as possible. The size of each logical address region is configurable. For example, each logical address region is the same size as the logical address range indicated by each FTL entry, e.g., 4 KB.
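The round-robin region-to-queue mapping described above reduces to a simple modulo computation, sketched below under the assumptions of 4 KB regions (one FTL entry each) and four wait queues, as in the example of FIG. 3:

    /* Region-to-wait-queue mapping sketch (4 KB regions, 4 queues assumed). */
    #include <stdint.h>

    #define REGION_BYTES    4096u
    #define NUM_WAIT_QUEUES 4u

    /* Map a logical byte address to the (0-based) index of its wait queue.
     * Consecutive 4 KB regions are assigned to the queues in turn, wrapping
     * around so that host IO is spread across the queues as evenly as possible. */
    unsigned wait_queue_for_lba(uint64_t logical_byte_addr)
    {
        uint64_t region = logical_byte_addr / REGION_BYTES;
        return (unsigned)(region % NUM_WAIT_QUEUES);
    }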

It will be appreciated that there are other ways of partitioning the logical address space. For example, the logical address space is divided into the same number of regions as there are wait queues, and each region is mapped to one wait queue. Alternatively, IO commands are mapped to one of the wait queues in turn, regardless of the LBA range of the IO commands.

In FIG. 3, by way of example, regions having the same pattern are mapped to the same wait queue. Optionally, the logical address regions are mapped to the respective wait queues in a random manner.

The logical address regions are mapped to multiple wait queues to reduce the probability that IO commands received close together in time, but accessing different logical addresses, are mapped to the same wait queue. This avoids the situation in which an IO command later in a queue cannot be processed for a long time merely because the IO command ahead of it cannot obtain its lock. Furthermore, because multiple IO commands in one wait queue may access the same logical address, processing them in order means that the corresponding lock is released as soon as the earlier IO command completes, so the later IO command can obtain the lock easily. This reduces the probability of lock request failures, reduces IO command processing latency, and avoids wasting processing resources on repeated lock requests.

FIG. 4 is a diagram illustrating an example of processing an IO command according to an embodiment of the present disclosure.

A CPU (e.g., CPU 240; see also FIG. 2) obtains IO commands to be processed from a variety of sources. These sources include the command distribution unit 230 and one or more wait queues. The CPU selects one of the sources from which to obtain an IO command. By way of example, a priority is set for each source, and the CPU obtains and processes IO commands from one of the sources according to the priorities.

In one example, the wait queue has a higher priority than the command distribution unit 230. When one or more wait queues have IO commands left to be processed, IO commands are obtained from the wait queues and processed; specifically, an IO command is acquired from the head of a wait queue. For the IO command obtained from the wait queue, the lock corresponding to the logical address accessed by the IO command is requested. If the lock request succeeds, the IO command is removed from the wait queue and processed. If the lock request fails, the IO command is left in the wait queue, and another source is selected to obtain an IO command. In an alternative embodiment, if the lock request fails, requesting the lock for that IO command is retried repeatedly until the lock is successfully obtained, after which the IO command is removed from the wait queue and processed. When no IO command at the head of any wait queue can be processed (a wait queue is empty, or the IO command at the head of a non-empty wait queue cannot obtain its lock), an IO command is acquired from the command distribution unit 230.

In yet another example, IO commands are fetched from the various sources in turn. For an IO command acquired from the head of a wait queue, the lock corresponding to the logical address accessed by the IO command is requested. If the lock request succeeds, the IO command is removed from the wait queue and processed. If the lock request fails, the IO command is left in the wait queue. For an IO command obtained from the command distribution unit 230, the lock corresponding to the logical address it accesses is requested. If the lock request succeeds, the IO command is processed; if the lock request fails, the IO command is added to the tail of one of the wait queues. The wait queue that holds the IO command is selected according to the mapping between the logical address accessed by the IO command and the wait queues (see also FIG. 3).

In yet another example, when every source has IO commands to process, the CPU processes M IO commands from the command distribution unit 230 for every N IO commands from the wait queues, where M and N are both integers. The relative priority of IO commands acquired from the command distribution unit and from the wait queues is realized by setting the values of M and N. In this case, by treating the plurality of wait queues as a whole alongside the command distribution unit as sources, differentiated priorities can be set for IO commands to suit different service scenarios. For example, if the IO commands arriving within a short time collectively access a small logical address range, so that lock requests fail with greater probability, a higher priority is set for the wait queues as a whole, so that IO commands in the wait queues are not frequently preempted by IO commands from the command distribution unit 230.

In this way, IO commands that temporarily cannot be processed because a lock is unavailable are placed in a wait queue. The IO commands in each wait queue are processed sequentially, in the order in which they entered the wait queue. This prevents a later-obtained IO command from preempting the lock ahead of an earlier-obtained IO command, which would otherwise leave the earlier IO command unprocessed for a long time.

FIG. 5 is a flowchart of processing an IO command according to an embodiment of the present application, executed by CPU 240 or CPU 250 (see also FIG. 2).

Illustratively, the CPU 240 obtains a first IO command from the command distribution unit 230 (510), obtains the logical address accessed by the first IO command, and determines, according to that logical address, the first wait queue (e.g., wait queue 2) to which the logical address is mapped. CPU 240 checks whether this wait queue (e.g., wait queue 2) has IO commands pending (520). If wait queue 2 has no IO command to be processed, the first IO command is processed (550): the lock of the FTL entry corresponding to the logical address accessed by the first IO command is requested, and processing of the first IO command continues once the lock is obtained. If the lock request fails, the first IO command is added to the tail of wait queue 2.

When the CPU 240 checks wait queue 2 and finds an IO command waiting to be processed there (520), that IO command is processed preferentially (530), and the first IO command is added to the tail of wait queue 2 (540).

For example, to process a pending IO command in wait queue 2, a second IO command is obtained from the head of wait queue 2, and the lock of the corresponding FTL entry is requested according to the logical address accessed by the second IO command; once the lock is obtained, processing of the second IO command continues. If the lock request for the logical address accessed by the second IO command fails, the second IO command is kept at the head of wait queue 2.
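Putting steps 510 to 550 together, the FIG. 5 flow can be sketched as follows; the structures mirror the earlier illustrative sketches, and all names, the 4 KB region size, and the helper functions are assumptions, not the patent's firmware API.

    /* FIG. 5 flow sketch: handle a first IO command from the distribution unit. */
    #include <stdbool.h>
    #include <stdint.h>

    struct io_cmd { uint64_t lba; uint32_t lpn; struct io_cmd *next; };
    struct wait_queue { struct io_cmd *head, *tail; };

    /* Assumed helpers, behaving as in the earlier sketches. */
    extern bool ftl_entry_trylock(uint32_t lpn);
    extern void wq_push_tail(struct wait_queue *q, struct io_cmd *c);
    extern struct io_cmd *wq_pop_head(struct wait_queue *q);
    extern void process_locked_cmd(struct io_cmd *c);   /* continue processing */

    void handle_first_cmd(struct io_cmd *first, struct wait_queue *queues, unsigned nq)
    {
        /* 510: map the command's logical address to its wait queue (4 KB regions). */
        struct wait_queue *q = &queues[(first->lba / 4096u) % nq];

        if (q->head) {                              /* 520: a second IO command waits */
            struct io_cmd *second = q->head;
            if (ftl_entry_trylock(second->lpn))     /* 530: process it preferentially */
                process_locked_cmd(wq_pop_head(q));
            /* otherwise the second command simply stays at the head of the queue */
            wq_push_tail(q, first);                 /* 540: first command to the tail */
        } else if (ftl_entry_trylock(first->lpn)) {
            process_locked_cmd(first);              /* 550: lock obtained, process it */
        } else {
            wq_push_tail(q, first);                 /* lock failed: park the command  */
        }
    }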

The CPU 240 then continues to acquire IO commands from the command distribution unit 230. If the command distribution unit 230 has no pending commands, IO commands are also obtained from the heads of one or more wait queues.

FIG. 6 is a flow chart of processing IO commands according to yet another embodiment of the present application, also executed by CPU 240 or CPU 250 (see also FIG. 2).

To process IO commands, for example, CPU 240 selects the IO command source from which to obtain an IO command (610). The IO command source may be the command distribution unit 230, or one or more wait queues. Optionally, a priority is set for each IO command source, the IO command source is selected according to the priorities, and an IO command is obtained from the selected IO command source. Still alternatively, the IO command sources are selected in turn, or in turn according to weights.

For the IO command acquired from the selected command source, the lock of the FTL table entry corresponding to the logical address accessed by the IO command is requested (620). If the lock request succeeds, the IO command continues to be processed according to the acquired physical address (640).

If the lock request fails, the IO command is handled differently depending on whether its source is a wait queue or the command distribution unit. If the IO command comes from the command distribution unit, the corresponding wait queue is determined according to the logical address accessed by the IO command, and the IO command is added to the tail of that wait queue (630); an IO command source is then selected again to obtain the next IO command. If the IO command comes from a wait queue, the IO command is left at the head of that wait queue without further processing, and an IO command source is selected again to obtain the next IO command.
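One scheduling step of the FIG. 6 flow (steps 610 to 640) can be sketched as below, assuming the wait queues are the higher-priority source; as before, all names and the 4 KB region size are illustrative assumptions rather than the disclosed implementation.

    /* FIG. 6 flow sketch: one scheduling step. Returns true if a command was
     * processed. Assumed helpers behave as in the earlier sketches. */
    #include <stdbool.h>
    #include <stdint.h>

    struct io_cmd { uint64_t lba; uint32_t lpn; struct io_cmd *next; };
    struct wait_queue { struct io_cmd *head, *tail; };

    extern bool ftl_entry_trylock(uint32_t lpn);
    extern void wq_push_tail(struct wait_queue *q, struct io_cmd *c);
    extern struct io_cmd *wq_pop_head(struct wait_queue *q);
    extern struct io_cmd *dispatch_unit_fetch(void);  /* from unit 230, may be NULL */
    extern void process_locked_cmd(struct io_cmd *c);

    bool schedule_one(struct wait_queue *queues, unsigned nq)
    {
        /* 610: try the wait queues first (the higher-priority source here). */
        for (unsigned i = 0; i < nq; i++) {
            struct io_cmd *c = queues[i].head;
            if (c && ftl_entry_trylock(c->lpn)) {            /* 620 */
                process_locked_cmd(wq_pop_head(&queues[i])); /* 640 */
                return true;
            }
            /* queue empty, or lock failed: the head command stays where it is */
        }

        /* 610 (cont.): fall back to the command distribution unit. */
        struct io_cmd *c = dispatch_unit_fetch();
        if (!c)
            return false;
        if (ftl_entry_trylock(c->lpn)) {                     /* 620 */
            process_locked_cmd(c);                           /* 640 */
            return true;
        }
        /* 630: lock failed for a new command; append it to the wait queue
         * selected by its logical address, then pick a source again next time. */
        wq_push_tail(&queues[(c->lba / 4096u) % nq], c);
        return false;
    }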

In an alternative embodiment, the IO command obtained from the command distribution unit accesses a logical address range corresponding to one or more FTL entries. For example, the CPU 240 first splits the IO command acquired from the command distribution unit into subcommands, each of which accesses a single entry of the FTL table. To process each subcommand, the lock of the corresponding FTL table entry is requested; if the lock request succeeds, the subcommand continues to be processed according to the physical address obtained from the FTL table entry. If the lock request for the FTL table entry corresponding to the subcommand fails, the subcommand is added to one of the wait queues according to its logical address. The IO command is completely processed when all of its subcommands have been processed.
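The splitting step can be sketched as follows, under the assumption that each FTL entry maps one 4 KB data page (the structure and names are illustrative, not the disclosed design); completion of the parent IO command would then be signalled once every subcommand produced here has been processed.

    /* Subcommand split sketch: one subcommand per FTL entry (4 KB page assumed). */
    #include <stdint.h>

    #define PAGE_BYTES 4096u

    struct sub_cmd {
        uint32_t lpn;     /* FTL entry (logical page) this subcommand touches */
        uint32_t offset;  /* byte offset within the page */
        uint32_t length;  /* bytes covered by this subcommand */
    };

    /* Split the range [lba, lba + len) into page-aligned subcommands and return
     * their count; 'out' must have room for (len / PAGE_BYTES + 2) entries. */
    unsigned split_io(uint64_t lba, uint32_t len, struct sub_cmd *out)
    {
        unsigned n = 0;
        while (len > 0) {
            uint32_t off   = (uint32_t)(lba % PAGE_BYTES);
            uint32_t chunk = PAGE_BYTES - off;
            if (chunk > len)
                chunk = len;
            out[n].lpn    = (uint32_t)(lba / PAGE_BYTES);
            out[n].offset = off;
            out[n].length = chunk;
            n++;
            lba += chunk;
            len -= chunk;
        }
        return n;
    }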

In still other alternative embodiments, the methods provided according to the present application are applied only to read commands, or only to write commands.

While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application. It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
