Storage medium replacement for NVM groups

Document No.: 1627676 · Published: 2020-01-14

Note: This technology, "Storage medium replacement for NVM groups" (Nvm组的存储介质替换), was designed and created by Tian Bing on 2018-07-05. Its main content: Storage medium replacement for NVM groups is disclosed. The disclosed storage medium replacement method for an NVM group includes: selecting a source logical unit to be replaced and a destination logical unit; copying data of the source logical unit to the destination logical unit; and updating metadata so that accesses to the source logical unit are mapped to the destination logical unit.

1. A storage medium replacement method for an NVM group, comprising:

selecting a source logical unit to be replaced and a destination logical unit;

copying data of the source logical unit to the destination logical unit; and

updating metadata such that accesses to the source logical unit are mapped to the destination logical unit.

2. The method of claim 1, further comprising:

erasing physical blocks of the source logical unit.

3. The method of claim 1 or 2, further comprising:

setting the destination logical unit to an occupied state and setting the source logical unit to an idle state.

4. The method according to one of claims 1-3, wherein:

a portion of the data of the source logical unit is copied to the destination logical unit; and

the method further comprises: copying another portion of the data of the source logical unit to a first logical unit.

5. The method according to one of claims 1-4, wherein

the physical address of the copied data on the destination logical unit is the same as its physical address on the source logical unit.

6. The method of one of claims 1-5, further comprising:

acquiring a first large block on the source logical unit;

selecting a first logical unit from a plurality of logical units including the destination logical unit;

selecting a first physical block from the first logical unit to replace the physical block provided by the source logical unit for the first large block; and

copying data of the physical block provided by the source logical unit for the first large block to the first physical block.

7. The method of claim 6, wherein

the first logical unit is selected from the plurality of logical units with a specified probability, and each of the plurality of logical units has a probability of being selected.

8. The method of claim 7, wherein

the probability that the first logical unit is selected is greater than the probability that any other logical unit of the plurality of logical units is selected.

9. The method according to one of claims 6-8, further comprising:

in response to creating a second large block, selecting a second plurality of logical units and obtaining physical blocks for constructing the second large block from the second plurality of logical units.

10. A storage device comprising a control component and an NVM chip, the control component performing the method according to one of claims 1-9.

Technical Field

The present application relates to storage devices and, more particularly, to the replacement of, or data migration between, the storage media of a storage device that make up an NVM group (NVM Set).

Background

FIG. 1 illustrates a block diagram of a storage device. The solid-state storage device 100 is coupled to a host to provide storage capacity to the host. The host and the solid-state storage device 100 may be coupled in various ways, including, but not limited to, SATA (Serial Advanced Technology Attachment), SCSI (Small Computer System Interface), SAS (Serial Attached SCSI), IDE (Integrated Drive Electronics), USB (Universal Serial Bus), PCIe (Peripheral Component Interconnect Express), NVMe (NVM Express), Ethernet, Fibre Channel, or a wireless communication network. The host may be an information processing device capable of communicating with the storage device in the manner described above, such as a personal computer, tablet, server, portable computer, network switch, router, cellular telephone, or personal digital assistant. The storage device 100 includes an interface 110, a control component 120, one or more NVM chips 130, and DRAM (Dynamic Random Access Memory) 140.

Common NVMs include NAND flash memory, phase change memory, FeRAM (Ferroelectric RAM), MRAM (Magnetoresistive RAM), and RRAM (Resistive Random Access Memory).

The interface 110 may be adapted to exchange data with the host by means such as SATA, IDE, USB, PCIe, NVMe, SAS, Ethernet, or Fibre Channel.

The control component 120 controls data transfer among the interface 110, the NVM chips 130, and the DRAM 140, and also handles memory management, mapping of host logical addresses to flash physical addresses, wear leveling, bad block management, and the like. The control component 120 may be implemented in software, hardware, firmware, or a combination thereof; for example, it may take the form of an FPGA (Field-Programmable Gate Array), an ASIC (Application-Specific Integrated Circuit), or a combination thereof. The control component 120 may also include a processor or controller that executes software to manipulate the hardware of the control component 120 and process IO (Input/Output) commands. The control component 120 may further be coupled to the DRAM 140 and access its data; FTL tables and/or cached IO command data may be stored in the DRAM.

The control component 120 includes a flash interface controller (also referred to as a media interface controller or flash channel controller) that is coupled to the NVM chips 130, issues commands to the NVM chips 130 in a manner conforming to their interface protocol in order to operate them, and receives the command execution results output by the NVM chips 130. Known NVM chip interface protocols include "Toggle" and "ONFI".

A Target is one or more Logical Units (LUNs) within a NAND flash package that share a Chip Enable (CE) signal. A NAND flash package includes one or more dies (Die). Typically, a logical unit corresponds to a single die. A logical unit may include multiple planes (Planes). Multiple planes within a logical unit may be accessed in parallel, while multiple logical units within a NAND flash chip may execute commands and report status independently of each other. The meanings of Target, Logical Unit (LUN), and Plane are given in "Open NAND Flash Interface Specification (Revision 3.0)", available from http://www.micron.com//media/Documents/Products/Other%20Documents/ONFI3_0gold.ashx, which is part of the prior art.

Data is typically stored and read on a storage medium page by page, while data is erased in units of blocks. A block (also called a physical block) contains multiple pages. Pages on the storage medium (called physical pages) have a fixed size, e.g., 17664 bytes, though other sizes are possible.

In a solid-state storage device, an FTL (Flash Translation Layer) maintains the mapping information from logical addresses to physical addresses. The logical addresses make up the storage space of the solid-state storage device as perceived by upper-level software, such as an operating system. A physical address is an address used to access a physical storage unit of the solid-state storage device. In the prior art, address mapping may also be implemented through an intermediate address form: e.g., a logical address is mapped to an intermediate address, which is in turn further mapped to a physical address.

A table structure storing the mapping information from logical addresses to physical addresses is called an FTL table. FTL tables are important metadata in a solid-state storage device. Usually, each entry of the FTL table records an address mapping at the granularity of a data page.
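For illustration only, a minimal sketch of such an FTL table follows, in C; the table size, page granularity, and the FTL_INVALID marker are assumptions, not the patent's implementation:

#include <stdint.h>
#include <stdio.h>

#define NUM_LOGICAL_PAGES 1024u        /* assumed device size, in data pages */
#define FTL_INVALID       UINT32_MAX   /* assumed marker for "not mapped" */

/* FTL table: index is the logical page address (LPA), value is the
 * physical page address (PPA); one entry per data page, as described. */
static uint32_t ftl_table[NUM_LOGICAL_PAGES];

/* Record a new mapping, e.g. after a host write or after data migration. */
static void ftl_update(uint32_t lpa, uint32_t ppa) {
    ftl_table[lpa] = ppa;
}

/* Translate a host logical address to a physical address for a read. */
static uint32_t ftl_lookup(uint32_t lpa) {
    return ftl_table[lpa];
}

int main(void) {
    for (uint32_t i = 0; i < NUM_LOGICAL_PAGES; i++)
        ftl_table[i] = FTL_INVALID;
    ftl_update(42, 7);   /* logical page 42 now lives at physical page 7 */
    printf("LPA 42 -> PPA %u\n", (unsigned)ftl_lookup(42));
    return 0;
}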

Fig. 2 shows a schematic diagram of a large block. A large block is composed of physical blocks from each of a plurality of logical units (called a logical unit group). Preferably, each logical unit provides one physical block for the large block. By way of example, large blocks are constructed over every 16 Logical Units (LUNs): each large block includes 16 physical blocks, one from each of the 16 LUNs. In the example of FIG. 2, large block 0 is made up of physical block 0 from each of the 16 LUNs, and large block 1 is made up of physical block 1 from each LUN. There are many other ways to construct large blocks. For example, Chinese patent application 2017107523210 (entitled "Method and apparatus for garbage collection based on variable-length large blocks") provides a way to construct large blocks of variable length.

As an alternative, page stripes are constructed within a large block, with the physical pages at the same physical address within each Logical Unit (LUN) constituting a "page stripe". In FIG. 2, physical pages P0-0, P0-1, …, and P0-x form page stripe 0, where physical pages P0-0, P0-1, …, P0-14 store user data, and physical page P0-15 stores parity data computed from all of the user data within the stripe. Similarly, physical pages P2-0, P2-1, …, and P2-x constitute page stripe 2. Alternatively, the physical page used to store parity data may be located anywhere in the page stripe.
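As a sketch of how a stripe's parity page can be produced, the following C code computes an XOR parity over the user-data pages. The page size, stripe width, and plain-XOR scheme are illustrative assumptions; the document does not fix the parity algorithm:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE  4096  /* illustrative; the text mentions e.g. 17664-byte pages */
#define USER_PAGES 15    /* P0-0 .. P0-14 hold user data; P0-15 holds parity */

/* XOR all user-data pages of a stripe into the parity page. */
static void stripe_parity(const uint8_t user[USER_PAGES][PAGE_SIZE],
                          uint8_t parity[PAGE_SIZE]) {
    memset(parity, 0, PAGE_SIZE);
    for (int p = 0; p < USER_PAGES; p++)
        for (int b = 0; b < PAGE_SIZE; b++)
            parity[b] ^= user[p][b];
}

int main(void) {
    static uint8_t user[USER_PAGES][PAGE_SIZE];
    static uint8_t parity[PAGE_SIZE];
    memset(user, 0xA5, sizeof user);            /* dummy user data */
    stripe_parity(user, parity);
    printf("parity[0] = 0x%02X\n", parity[0]);  /* XOR of 15 copies of 0xA5 -> 0xA5 */
    return 0;
}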

The storage device also performs wear leveling so that each physical block undergoes substantially the same number of erasures over the device's service life, reducing the adverse effect that individual physical blocks reaching end of life would have on the life of the storage device.

To improve the quality of service of storage devices, an NVM group (NVM Set) mechanism for storage devices is being explored (see https://www.snia.org/sites/default/files/SDCEMEA/2018/Presentations/Achieving-Predictable-Latency-Solid-State-Storage-SSD-SNIA-SDC-EMEA-2018.pdf). An NVM group is a collection of non-volatile storage media. The non-volatile storage media in different NVM groups are independent of each other; for example, a non-volatile storage medium belonging to one NVM group does not belong to another NVM group. By distinguishing NVM groups within the storage device, the impact of IO commands accessing some non-volatile storage media on the processing performance of IO commands accessing other non-volatile storage media is reduced or eliminated. Endurance groups are also under discussion. An endurance group may include one or more NVM groups, and a storage device has one or more endurance groups.

Namespaces (NS) are also defined in the NVMe protocol. A namespace of size n is a set of logical blocks with logical block addresses from 0 to n-1. A namespace is uniquely identified by a Namespace ID (NSID). The non-volatile storage media used by a given namespace come from a single NVM group, not from multiple NVM groups, while two or more namespaces may use non-volatile storage media from the same NVM group.

Disclosure of Invention

Once NVM groups are provided, wear imbalance develops between the NVM groups in the storage device as they are used. There is thus a further need to provide wear leveling between NVM groups to extend the life of the storage device. According to the present application, wear leveling between NVM groups is achieved by replacing storage media such as logical units.

According to a first aspect of the present application, there is provided a first storage medium replacement method for an NVM group according to the first aspect of the present application, comprising: selecting a source logical unit to be replaced and a destination logical unit; copying data of the source logical unit to the destination logical unit; and updating metadata such that accesses to the source logical unit are mapped to the destination logical unit.

According to the first storage medium replacement method for an NVM group of the first aspect of the present application, there is provided a second storage medium replacement method for an NVM group according to the first aspect of the present application, further comprising: erasing physical blocks of the source logical unit.

According to the first or second storage medium replacement method for an NVM group of the first aspect of the present application, there is provided a third storage medium replacement method for an NVM group according to the first aspect of the present application, further comprising: setting the destination logical unit to an occupied state and setting the source logical unit to an idle state.

According to one of the first through third storage medium replacement methods for an NVM group of the first aspect of the present application, there is provided a fourth storage medium replacement method for an NVM group according to the first aspect of the present application, wherein a portion of the data of the source logical unit is copied to the destination logical unit; and the method further comprises: copying another portion of the data of the source logical unit to a first logical unit.

According to one of the first through fourth storage medium replacement methods for an NVM group of the first aspect of the present application, there is provided a fifth storage medium replacement method for an NVM group according to the first aspect of the present application, wherein, by updating the metadata, the virtual logical unit mapped to the source logical unit is remapped to the destination logical unit.

According to one of the first through fourth storage medium replacement methods for an NVM group of the first aspect of the present application, there is provided a sixth storage medium replacement method for an NVM group according to the first aspect of the present application, wherein, by updating the metadata, the logical addresses used to access the copied data are mapped to the destination logical unit.

According to one of the first through sixth storage medium replacement methods for an NVM group of the first aspect of the present application, there is provided a seventh storage medium replacement method for an NVM group according to the first aspect of the present application, wherein the physical address of the copied data on the destination logical unit is the same as its physical address on the source logical unit.

According to one of the first through seventh storage medium replacement methods for an NVM group of the first aspect of the present application, there is provided an eighth storage medium replacement method for an NVM group according to the first aspect of the present application, further comprising: acquiring a first large block on the source logical unit; selecting a first logical unit from a plurality of logical units including the destination logical unit; selecting a first physical block from the first logical unit to replace the physical block provided by the source logical unit for the first large block; and copying data of the physical block provided by the source logical unit for the first large block to the first physical block.

According to the eighth storage medium replacement method for an NVM group of the first aspect of the present application, there is provided a ninth storage medium replacement method for an NVM group according to the first aspect of the present application, wherein the first logical unit is selected from the plurality of logical units with a specified probability, and each of the plurality of logical units has a probability of being selected.

According to the ninth storage medium replacement method for an NVM group of the first aspect of the present application, there is provided a tenth storage medium replacement method for an NVM group according to the first aspect of the present application, wherein the probability that the first logical unit is selected is greater than the probability that any other logical unit of the plurality of logical units is selected.

According to the eighth or ninth storage medium replacement method for an NVM group of the first aspect of the present application, there is provided an eleventh storage medium replacement method for an NVM group according to the first aspect of the present application, further comprising: setting the probability with which each logical unit is selected.

According to the eleventh storage medium replacement method for an NVM group of the first aspect of the present application, there is provided a twelfth storage medium replacement method for an NVM group according to the first aspect of the present application, wherein substantially the same selection probability is set for each logical unit in a first stage of the life cycle of the storage device, and different selection probabilities are set for the logical units in a second stage of the life cycle of the storage device.

According to the eleventh or twelfth storage medium replacement method for an NVM group of the first aspect of the present application, there is provided a thirteenth storage medium replacement method for an NVM group according to the first aspect of the present application, wherein the probability that one or more logical units are selected is positively or negatively correlated with the number of erasures those logical units have undergone.

According to one of the eleventh through thirteenth storage medium replacement methods for an NVM group of the first aspect of the present application, there is provided a fourteenth storage medium replacement method for an NVM group according to the first aspect of the present application, wherein the probability that the source logical unit is selected is increased in response to the source logical unit being expected to be replaced.

According to one of the eighth through fourteenth storage medium replacement methods for an NVM group of the first aspect of the present application, there is provided a fifteenth storage medium replacement method for an NVM group according to the first aspect of the present application, further comprising: in response to creating a second large block, selecting a second plurality of logical units and obtaining physical blocks for constructing the second large block from the second plurality of logical units.

According to the fifteenth storage medium replacement method for an NVM group of the first aspect of the present application, there is provided a sixteenth storage medium replacement method for an NVM group according to the first aspect of the present application, further comprising: selecting logical units according to specified probabilities to obtain the second plurality of logical units.

According to the fifteenth or sixteenth storage medium replacement method for an NVM group of the first aspect of the present application, there is provided a seventeenth storage medium replacement method for an NVM group according to the first aspect of the present application, wherein the logical units of the second plurality all belong to the same endurance group or the same NVM group.

According to one of the first through seventeenth storage medium replacement methods for an NVM group of the first aspect of the present application, there is provided an eighteenth storage medium replacement method for an NVM group according to the first aspect of the present application, wherein the source logical unit and the destination logical unit belong to the same endurance group.

According to one of the eighth through seventeenth storage medium replacement methods for an NVM group of the first aspect of the present application, there is provided a nineteenth storage medium replacement method for an NVM group according to the first aspect of the present application, wherein each of the plurality of logical units belongs to the same endurance group.

According to a second aspect of the present application, there is provided a first storage device according to the second aspect of the present application, comprising a control component and an NVM chip, the control component performing one of the storage medium replacement methods for an NVM group according to the first aspect of the present application.

Drawings

In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments described in the present application; other drawings can be obtained by those skilled in the art from these drawings.

FIG. 1 is a block diagram of a storage device;

FIG. 2 is a schematic diagram of a large block;

FIG. 3A is a schematic diagram of replacing a non-volatile storage medium of an NVM group according to an embodiment of the present application;

FIG. 3B illustrates a flow diagram of data migration according to an embodiment of the present application;

FIG. 4A is a schematic diagram of replacing a non-volatile storage medium of an NVM group according to yet another embodiment of the present application;

FIG. 4B is a schematic diagram of the result of replacing a non-volatile storage medium of an NVM group according to the embodiment of FIG. 4A of the present application;

FIG. 5 illustrates a flow diagram of data migration according to the embodiments of FIGS. 4A and 4B of the present application;

FIG. 6A is a schematic diagram of migrating a non-volatile storage medium according to another embodiment of the present application; and

FIG. 6B shows a schematic diagram of the result of migrating a non-volatile storage medium according to the embodiment of FIG. 6A of the present application.

Detailed Description

The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings of the embodiments. It is obvious that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort fall within the protection scope of the present application.

FIG. 3A is a schematic diagram of replacing a non-volatile storage medium of an NVM group according to an embodiment of the present application.

According to the embodiment of FIG. 3A, the storage device includes a plurality of logical units (LUN 0-LUN 9) and provides a plurality of NVM groups (NVM group 310, NVM group 312, and NVM group 314). Optionally, NVM group 310 and NVM group 312 belong to endurance group 320, while NVM group 314 belongs to endurance group 322. It is to be appreciated that a storage device according to other embodiments of the present application may not provide endurance groups.

According to an embodiment of the present application, non-volatile storage media are allocated to NVM groups in units of LUNs. NVM group 310 is assigned LUN 0 and LUN 1, NVM group 312 is assigned LUN 2 and LUN 3, and NVM group 314 is assigned LUN 6 through LUN 9. Since NVM group 310 and NVM group 312 belong to the same endurance group 320, the logical units of NVM group 310 and NVM group 312 may be swapped, thereby achieving wear leveling within endurance group 320. For example, LUN 1 is exchanged with LUN 2, so that LUN 1 is allocated to NVM group 312 and LUN 2 is allocated to NVM group 310; along with the exchange, data migration is performed between LUN 1 and LUN 2.

Optionally, the storage device further includes free logical units (LUN 4 and LUN 5). A free logical unit is a logical unit that has not been allocated to any NVM group. For example, when LUN 2 wears excessively due to frequent writes, LUN 4 is allocated to NVM group 312 to replace LUN 2, and LUN 2 is removed from NVM group 312, thereby implementing wear leveling. As LUN 4 is exchanged with LUN 2, data migration is performed between LUN 4 and LUN 2.

Alternatively or additionally, once a free logical unit (e.g., LUN 4) has been assigned to endurance group 320, LUN 4 is no longer assigned to any endurance group other than endurance group 320 (e.g., endurance group 322). In this way, once a LUN has been used for a certain endurance group, it is used only for that endurance group, or as a free logical unit, for the lifetime of the storage device.

In addition to assigning non-volatile storage media to NVM groups in units of logical units, in other embodiments, non-volatile storage media are assigned to NVM groups in units of NVM chips, dies (Die), or targets (Target).

According to embodiments of the present application, a logical unit of the storage device may be in one of several states, such as an occupied state, an idle state, and a migrating state. A logical unit in the idle state is not allocated to any NVM group. A logical unit that is allocated to an NVM group and not involved in data migration is in the occupied state. A logical unit undergoing data migration is in the migrating state.
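The following sketch models these three states and the transitions implied by the embodiments below (selection for migration, completion, draining). The transition rules beyond what the text states are assumptions:

#include <stdbool.h>
#include <stdio.h>

typedef enum { LUN_IDLE, LUN_OCCUPIED, LUN_MIGRATING } lun_state_t;

/* Transitions implied by the text:
 * OCCUPIED  -> MIGRATING  selected as source of a migration
 * IDLE      -> MIGRATING  selected as destination of a migration
 * MIGRATING -> OCCUPIED   destination: migration complete
 * MIGRATING -> IDLE       source: data drained (and optionally erased) */
static bool lun_transition_ok(lun_state_t from, lun_state_t to) {
    switch (from) {
    case LUN_IDLE:      return to == LUN_MIGRATING;
    case LUN_OCCUPIED:  return to == LUN_MIGRATING;
    case LUN_MIGRATING: return to == LUN_OCCUPIED || to == LUN_IDLE;
    }
    return false;
}

int main(void) {
    printf("%d\n", lun_transition_ok(LUN_OCCUPIED, LUN_MIGRATING)); /* 1 */
    printf("%d\n", lun_transition_ok(LUN_IDLE, LUN_OCCUPIED));      /* 0 */
    return 0;
}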

FIG. 3B illustrates a flow diagram of data migration according to an embodiment of the present application.

The storage device initiates data migration according to the usage of the logical units of each NVM group, or in response to a data migration command sent by the host to the storage device.

By way of example, NVM group 312 undertakes more write operations, so that LUN 2 and LUN 3 are erased significantly more often than, e.g., the other logical units of the storage device, and data migration is initiated.

For data migration, referring to FIG. 3B, a source logical unit (LUN(S)) and a destination logical unit (LUN(D)) to be migrated are selected (340), where the source logical unit is a logical unit of the NVM group to be migrated that is in the occupied state, and the destination logical unit is a free logical unit. The selected source and destination logical units are marked as being in the migrating state. By way of example, referring also to FIG. 3A, LUN 3 of NVM group 312 is selected as the source logical unit and LUN 4 is selected as the destination logical unit.

The data of the source logical unit (LUN 3) is copied to the destination logical unit (LUN 4) (350). Alternatively, the data of the source logical unit (LUN 3) may be copied to logical units of NVM group 312 other than the destination logical unit (e.g., LUN 2), which achieves the object of the present application equally well. Optionally, only valid data of the source logical unit is migrated, which improves migration efficiency and reduces the amount of data written during migration.

The metadata of the storage device is also updated to record the new storage locations after the data migration (360). For example, for each piece of migrated data, its physical address on the destination logical unit or on another logical unit is recorded in the FTL table of the storage device. Thus, when the host accesses data by logical address, the FTL table is queried with the accessed logical address, and the destination logical unit (LUN 4) or other logical unit (LUN 2) provides the corresponding physical address.
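A sketch of steps 350/360 follows: valid pages are copied from the source LUN and the FTL table is updated. page_is_valid(), ppa_to_lpa(), and media_copy_page() are hypothetical helpers standing in for the device's media accessors and metadata, not functions named in the document:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGES_PER_LUN 8u
static uint32_t ftl[PAGES_PER_LUN];  /* tiny FTL: lpa -> ppa; a real entry
                                        would also encode the owning LUN */

/* --- hypothetical media/metadata helpers (stubs for the sketch) --- */
static bool page_is_valid(uint32_t src_lun, uint32_t ppa) {
    (void)src_lun; return (ppa % 2) == 0;   /* pretend even pages are valid */
}
static uint32_t ppa_to_lpa(uint32_t src_lun, uint32_t ppa) {
    (void)src_lun; return ppa;              /* pretend identity reverse map */
}
static void media_copy_page(uint32_t s, uint32_t d, uint32_t ppa) {
    printf("copy LUN%u:%u -> LUN%u:%u\n",
           (unsigned)s, (unsigned)ppa, (unsigned)d, (unsigned)ppa);
}

/* Migrate only the valid pages of the source LUN (step 350) and record the
 * new locations in the FTL table (step 360). Here the destination physical
 * address equals the source physical address, as in the embodiment above. */
static void migrate_lun(uint32_t src_lun, uint32_t dst_lun) {
    for (uint32_t ppa = 0; ppa < PAGES_PER_LUN; ppa++) {
        if (!page_is_valid(src_lun, ppa))
            continue;                            /* skip invalid data */
        media_copy_page(src_lun, dst_lun, ppa);
        ftl[ppa_to_lpa(src_lun, ppa)] = ppa;     /* reads now resolve to dst */
    }
}

int main(void) {
    migrate_lun(3, 4);   /* LUN 3 -> LUN 4, as in FIG. 3B */
    printf("FTL[0] -> PPA %u\n", (unsigned)ftl[0]);
    return 0;
}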

In an alternative embodiment, there is no FTL table in the storage device, and the host accesses the storage device by physical address. The storage device provides virtual logical units (vLUNs) and a mapping from virtual logical units to logical units. In response to migrating the data of the source logical unit to the destination logical unit, the metadata is updated so that the virtual logical unit originally mapped to the source logical unit is mapped to the destination logical unit. In this case, the data is copied only to the destination logical unit, not to other logical units, and the physical address of the migrated data within its logical unit is unchanged. For example, the data at physical address P1 of the source logical unit (LUN 3) is copied to physical address P1 of the destination logical unit (LUN 4). Alternatively or further, if physical address P1 of LUN 4 is in a bad block, the storage device reports to the host that a bad block or data error has occurred at physical address P1 of the virtual logical unit corresponding to LUN 3, and the host initiates error handling, e.g., moving all data or all valid data of the block containing physical address P1 to a new physical address.
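A sketch of this FTL-less alternative follows: the host addresses virtual LUNs, and migration only rewrites entries of the vLUN-to-LUN map. The array representation and the initial mapping values are assumptions:

#include <stdint.h>
#include <stdio.h>

#define NUM_VLUNS 8u

/* vlun_map[v] = physical LUN currently backing virtual LUN v. */
static uint32_t vlun_map[NUM_VLUNS] = {0, 1, 2, 3, 6, 7, 8, 9};

/* Host physical addressing: (vLUN, ppa) -> (LUN, ppa); the in-LUN physical
 * address is unchanged by migration. */
static uint32_t vlun_resolve(uint32_t vlun) {
    return vlun_map[vlun];
}

/* On migration, remap every virtual LUN that pointed at the source LUN. */
static void vlun_remap(uint32_t src_lun, uint32_t dst_lun) {
    for (uint32_t v = 0; v < NUM_VLUNS; v++)
        if (vlun_map[v] == src_lun)
            vlun_map[v] = dst_lun;
}

int main(void) {
    printf("vLUN 3 -> LUN %u\n", (unsigned)vlun_resolve(3)); /* LUN 3 before */
    vlun_remap(3, 4);                                        /* LUN 3 -> LUN 4 */
    printf("vLUN 3 -> LUN %u\n", (unsigned)vlun_resolve(3)); /* LUN 4 after */
    return 0;
}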

In response to completing the data migration of all data, or all valid data, of the source logical unit (LUN 3), LUN 3 is marked as idle. Optionally, all physical blocks of LUN 3 are also erased. The destination logical unit (LUN 4) is marked as occupied.

During the data migration of the source logical unit, the already-migrated portion of its data exists on both the source logical unit and the destination logical unit (or on another logical unit belonging to the same NVM group as the source logical unit and different from it). A read command for that portion of data may be served by either the source or the destination logical unit. For portions of the source logical unit's data not yet migrated, read commands are served by the source logical unit. For example, the FTL table is queried with the logical address of the read command to obtain the corresponding physical address, and that physical address is accessed to obtain the data to be read. For a write command, a physical address is allocated from the destination logical unit (or another logical unit), and the data to be written is written to the allocated physical address.

FIG. 4A is a schematic diagram of replacing a non-volatile storage medium of an NVM group according to yet another embodiment of the present application.

According to the embodiment of FIG. 4A, the storage device includes a plurality of logical units (LUN 0-LUN 9) and provides a plurality of NVM groups (NVM group 410, NVM group 412, and NVM group 414). Optionally, NVM group 410 and NVM group 412 belong to endurance group 420, while NVM group 414 belongs to endurance group 422. It is to be appreciated that a storage device according to other embodiments of the present application may not provide endurance groups. NVM group 410 is assigned LUN 0 and LUN 1, NVM group 412 is assigned LUN 2 and LUN 3, and NVM group 414 is assigned LUN 6 through LUN 9. The storage device also includes free logical units (LUN 4 and LUN 5).

According to embodiments of the present application, large blocks are provided by the individual logical units of NVM groups. The logical units providing physical blocks for the same large block come from the same endurance group. Referring to FIG. 4A, large block 430, large block 432, and large block 438 each include the physical blocks provided by LUN 0-LUN 3. A write command accessing a namespace provided by NVM group 410 is assigned, for example, to large block 430, and its data is written to the physical blocks provided by LUN 0 and/or LUN 1 for large block 430, but not to the physical blocks provided by LUN 2 or LUN 3 for large block 430. Large block 450, large block 452, and large block 458 each comprise physical blocks provided by LUN 6-LUN 9.

According to the embodiment shown in FIG. 4A, a large block is constructed from logical units belonging to the same endurance group, so the logical units providing physical blocks for the large block come from the same endurance group but may come from the same or different NVM groups.

In an alternative embodiment, the large block is constructed from logical units belonging to the same NVM group, such that each logical unit providing the physical block for the large block is from the same NVM group.

With continued reference to FIG. 4A, physical blocks belonging to the same large block have the same physical block address within their respective logical units. Alternatively, large blocks are constructed according to the technical solution provided by Chinese patent application 201610814552.5 (data organization method and apparatus for multi-plane flash memory); in particular, the large block construction methods provided in the descriptions of FIGS. 4A, 4B, 5, 6A, and 6B of that application are incorporated.

With continued reference to FIG. 4A, data migration is performed for LUN 3, belonging to NVM group 412, and free LUN 4.

FIG. 4B is a schematic diagram of the result of replacing a non-volatile storage medium of an NVM group according to the embodiment of FIG. 4A of the present application.

After the data migration between LUN 3 and LUN 4, LUN 3 becomes a free logical unit of the storage device, and LUN 4 becomes a logical unit belonging to NVM group 412. Large block 430, large block 432, and large block 438 now each include the physical blocks provided by LUN 0-LUN 2 and LUN 4.

In one embodiment, the storage device maintains a large block table recording the logical units from which each large block is constructed. For example, in response to the data migration between LUN 3 and LUN 4, the logical unit numbers recorded for large block 430, large block 432, and large block 438 are updated in the large block table.
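A sketch of such a large block table and its update on migration follows; the fixed table shape and the LUN numbering are assumptions for illustration:

#include <stdint.h>
#include <stdio.h>

#define NUM_CHUNKS     3u   /* large blocks 430, 432, 438 in the example */
#define LUNS_PER_CHUNK 4u

/* large_block_table[c][i] = LUN providing the i-th physical block of large
 * block c. Before migration, each large block is built from LUN 0..LUN 3. */
static uint32_t large_block_table[NUM_CHUNKS][LUNS_PER_CHUNK] = {
    {0, 1, 2, 3}, {0, 1, 2, 3}, {0, 1, 2, 3},
};

/* After migrating src_lun to dst_lun, rewrite every reference to src_lun. */
static void chunk_table_remap(uint32_t src_lun, uint32_t dst_lun) {
    for (uint32_t c = 0; c < NUM_CHUNKS; c++)
        for (uint32_t i = 0; i < LUNS_PER_CHUNK; i++)
            if (large_block_table[c][i] == src_lun)
                large_block_table[c][i] = dst_lun;
}

int main(void) {
    chunk_table_remap(3, 4);   /* LUN 3 -> LUN 4, as in FIGS. 4A/4B */
    for (uint32_t c = 0; c < NUM_CHUNKS; c++)
        printf("large block %u: %u %u %u %u\n", (unsigned)c,
               (unsigned)large_block_table[c][0], (unsigned)large_block_table[c][1],
               (unsigned)large_block_table[c][2], (unsigned)large_block_table[c][3]);
    return 0;
}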

In yet another embodiment, large blocks are constructed from virtual logical units. Although the data of LUN 3 is migrated to LUN 4, LUN 4 is still referenced by the same virtual logical unit; thus, when a large block is constructed or accessed through virtual logical units, the data migration between LUN 3 and LUN 4 is not visible.

The data recorded in each physical block of a large block may be user data or parity data. During data migration, if the physical block provided by the source logical unit for the large block stores user data, the physical address of that user data is updated in the FTL table after migration. If the physical block provided by the source logical unit stores parity data, no corresponding FTL table update is needed.

According to the embodiment shown in FIGS. 4A and 4B, although the logical units providing physical blocks for large block 430, large block 432, and large block 438 belong to two NVM groups, these logical units belong to the same endurance group 420. Garbage collection in the storage device takes place within an endurance group. For example, valid data collected from large block 438 is written to a large block constructed from the logical units of endurance group 420. Further, valid data in large block 438 that belongs to NVM group 410 is written to the physical blocks of the constructed large block provided by logical units belonging to NVM group 410, while valid data in large block 438 that belongs to NVM group 412 is written to the physical blocks provided by logical units belonging to NVM group 412. This ensures that user data belonging to NVM group 410 is written neither to physical blocks belonging to NVM group 412 nor to physical blocks belonging to NVM group 414.

FIG. 5 illustrates a flow diagram of data migration according to the embodiments of FIGS. 4A and 4B of the present application.

The storage device initiates data migration according to the usage of the logical units of each NVM group, or in response to a data migration command sent by the host to the storage device.

By way of example, the storage device receives an indication from the host to perform data migration. By way of example, the host also indicates that the selected source logical unit is LUN 3 and the destination logical unit is LUN 4 (see also FIGS. 4A and 4B) (510). The selected source and destination logical units are marked as being in the migrating state.

The large blocks on the source logical unit (LUN 3) are obtained (530). A large block on a logical unit is a large block to which a physical block of that logical unit belongs. For each large block on the source logical unit, it is also identified whether the data stored on the source logical unit is user data or parity data. Physical blocks on the source logical unit (LUN 3) that do not belong to any large block are not migrated. For example, the large blocks on the source logical unit are obtained from a large block table maintained by the storage device.

For each large block obtained on the source logical unit (LUN 3), the portion of its data belonging to the source logical unit is copied to the destination logical unit (LUN 4) (540); the migrated data has the same physical address on the source logical unit and on the destination logical unit. The metadata, including the large block table and the FTL table, is then updated (550). The updated large block table records the new physical block (provided by the destination logical unit (LUN 4)) of each large block in which data migration occurred. The updated FTL table records the physical address, on the destination logical unit (LUN 4), of the user data that was migrated. If the migrated data is parity data of a large block, the FTL table need not be updated for it.
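A sketch of steps 540/550 combining the pieces above: each physical block a large block has on the source LUN is copied to the same physical address on the destination LUN; the large block table entry is rewritten, and the FTL is updated only when the block held user data. The block geometry, the is_parity flag, and the helper names are assumptions:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t lun;        /* which LUN provides this physical block */
    uint32_t pba;        /* physical block address within that LUN */
    bool     is_parity;  /* true if the block stores parity, not user data */
} chunk_block_t;

static void media_copy_block(uint32_t s, uint32_t d, uint32_t pba) {
    printf("copy block LUN%u:%u -> LUN%u:%u\n",
           (unsigned)s, (unsigned)pba, (unsigned)d, (unsigned)pba);
}
static void ftl_rewrite_block(uint32_t lun, uint32_t pba) {
    printf("FTL: user data of block %u now on LUN%u\n",
           (unsigned)pba, (unsigned)lun);
}

/* Migrate the blocks a source LUN contributes to large blocks (540/550). */
static void migrate_chunk_blocks(chunk_block_t *blocks, size_t n,
                                 uint32_t src_lun, uint32_t dst_lun) {
    for (size_t i = 0; i < n; i++) {
        if (blocks[i].lun != src_lun) continue;            /* not ours to move */
        media_copy_block(src_lun, dst_lun, blocks[i].pba); /* same PBA kept */
        blocks[i].lun = dst_lun;                           /* large block table */
        if (!blocks[i].is_parity)
            ftl_rewrite_block(dst_lun, blocks[i].pba);     /* parity: no FTL work */
    }
}

int main(void) {
    chunk_block_t blocks[] = {
        {3, 0, false},   /* block of large block 430: user data on LUN 3 */
        {3, 2, true},    /* block of large block 438: parity on LUN 3 */
        {2, 0, false},   /* provided by LUN 2: untouched */
    };
    migrate_chunk_blocks(blocks, 3, 3, 4);   /* LUN 3 -> LUN 4 */
    return 0;
}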

According to the embodiment of FIG. 5, the data being migrated, whether valid or invalid, is migrated in full, so that the new large block obtained after data migration still satisfies the data parity rule.

In an alternative embodiment, there is no FTL table in the storage device, and the host accesses the storage device by physical address. The storage device provides virtual logical units (vLUNs) and a mapping from virtual logical units to logical units. In response to migrating the data of the source logical unit to the destination logical unit, the metadata is updated so that the virtual logical unit originally mapped to the source logical unit (LUN 3) is mapped to the destination logical unit (LUN 4). In this case, data is copied only to the destination logical unit, not to other logical units, and the physical address of the migrated data within its logical unit is unchanged.

In response to completing the data migration of all data of the source logical unit (LUN 3), LUN 3 is marked as idle. Optionally, all physical blocks of LUN 3 are also erased. The destination logical unit (LUN 4) is marked as occupied.

FIG. 6A is a schematic diagram of migrating a non-volatile storage medium according to another embodiment of the present application.

According to the embodiment of FIG. 6A, the storage device includes multiple logical units (LUN 620-LUN 628) belonging to the same NVM group or endurance group. The storage device also includes a free logical unit (LUN(D)). The source logical unit LUN(S) 620 is exchanged with the destination logical unit LUN(D) through data migration.

In FIG. 6A, a dashed box indicates a physical block, and the numeral in a dashed box indicates the large block to which the physical block belongs. According to the embodiment of FIG. 6A, the individual physical blocks that make up a large block need not have the same physical address; they may be located anywhere within their logical units. By way of example, the physical blocks of a large block follow the rule that no two of them come from the same logical unit; in other words, the individual physical blocks that make up a large block come from different logical units.

Still by way of example, large block 1 includes physical blocks from each of LUN 620, LUN 622, and LUN 624, while large block 4 includes physical blocks from each of LUN 620, LUN 622, and LUN 626. Optionally, large blocks need not all include the same number of physical blocks.

Optionally, the storage device maintains a large block table in which the physical blocks that make up each large block are recorded. Alternatively, information about all of the physical blocks constituting a large block is recorded in each physical block of that large block.

To swap source logical unit LUN(S) 620 with destination logical unit LUN(D), the data in the physical blocks of source logical unit LUN(S) 620 that have been used to build large blocks (indicated by the dashed boxes labeled 1, 2, 3, 4, and 5 in LUN(S) 620) must be migrated to destination logical unit LUN(D) and/or to other logical units (LUN 622, LUN 624, LUN 626, or LUN 628) belonging to the same NVM group or endurance group as source logical unit LUN(S) 620.

FIG. 6B shows a schematic diagram of the result of migrating a non-volatile storage medium according to the embodiment of FIG. 6A of the present application.

In FIG. 6B, the data of each physical block of source logical unit LUN(S) 620 (indicated by the shaded dashed boxes) has been migrated to other logical units; a physical block serving as a data migration target is indicated by a dashed box with a primed number.

The physical blocks of source logical unit LUN(S) 620 labeled 1, 2, and 3 are migrated to destination logical unit LUN(D); the physical block labeled 4 is migrated to logical unit LUN 628 (see the discussion of selection policies below); and the physical block labeled 5 is migrated to logical unit LUN 624.

In this way, the data to be migrated from source logical unit LUN(S) 620 is written to multiple logical units (destination logical unit LUN(D), logical unit LUN 628, and logical unit LUN 624). This reduces the amount of data written to the destination logical unit LUN(D), distributes the written data across multiple logical units, allows the write operations caused by the data migration to be processed in parallel on multiple logical units, and thereby accelerates the data migration.

By way of example, to migrate the data of the physical block labeled 4 in source logical unit LUN(S) 620, a physical block is selected to carry the data to be migrated. Various policies may be used to select this physical block. (1) Based on the large block to which the migrated data belongs, the physical block is selected so as to satisfy the conditions for constructing the large block. Referring to FIG. 6A, before data migration the physical blocks of large block 4 are provided by LUN(S) 620, LUN 622, and LUN 626, so the newly selected physical block must avoid these logical units to satisfy the conditions for constructing the large block. (2) Destination logical unit LUN(D) has already provided physical blocks for large block 1, large block 2, and large block 3 (see FIG. 6B, the dashed boxes labeled 1', 2', and 3'), so the newly selected physical block must also avoid destination logical unit LUN(D). (3) The remaining selectable logical units are LUN 624 and LUN 628, and one of them is selected, e.g., at random, to carry physical block 4 of source logical unit LUN(S) 620. In the example of FIG. 6B, the physical block provided by LUN 628, indicated by the dashed box labeled 4', is selected to carry the data to be migrated.
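A sketch of policies (1)-(3): the candidate set starts from all LUNs of the group, excludes the LUNs already providing blocks to the large block being repaired (rule 1) and the LUNs already chosen as migration targets for it (rule 2), then picks one of the remainder at random (rule 3). The LUN numbering (with LUN(D) encoded as 630) is an assumption:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* LUNs of the endurance group in FIG. 6A, with LUN(D) encoded as 630. */
static const uint32_t group[] = {620, 622, 624, 626, 628, 630};
#define GROUP_SIZE (sizeof group / sizeof group[0])

static int in_set(uint32_t lun, const uint32_t *set, size_t n) {
    for (size_t i = 0; i < n; i++) if (set[i] == lun) return 1;
    return 0;
}

/* Pick a LUN to carry a migrated block: exclude LUNs already providing a
 * block to this large block (rule 1) and LUNs already used as migration
 * targets for it (rule 2), then choose randomly among the rest (rule 3). */
static uint32_t pick_lun(const uint32_t *chunk_luns, size_t n_chunk,
                         const uint32_t *used_targets, size_t n_used) {
    uint32_t cand[GROUP_SIZE]; size_t n = 0;
    for (size_t i = 0; i < GROUP_SIZE; i++)
        if (!in_set(group[i], chunk_luns, n_chunk) &&
            !in_set(group[i], used_targets, n_used))
            cand[n++] = group[i];
    return cand[(size_t)rand() % n];   /* assumes at least one candidate */
}

int main(void) {
    /* Large block 4 was built on LUN(S) 620, LUN 622, LUN 626;
     * LUN(D) 630 already carries blocks 1', 2', 3'. */
    const uint32_t chunk4[] = {620, 622, 626};
    const uint32_t used[]   = {630};
    printf("carry block 4 on LUN %u\n",        /* prints 624 or 628 */
           (unsigned)pick_lun(chunk4, 3, used, 1));
    return 0;
}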

Optionally or further, when selecting the logical unit to carry the data to be migrated, the candidate logical units are selected with different probabilities. Still referring to FIGS. 6A and 6B, for the data of the physical block labeled 5 of source logical unit LUN(S) to be migrated, the candidate logical units are LUN(D), LUN 622, and LUN 624. The physical block provided by LUN 624 is selected to carry the data to be migrated because logical unit LUN 624 has a longer remaining life (e.g., a lower erase count). Selecting logical units with longer remaining life to carry the data to be migrated achieves wear leveling across all logical units of the storage device.

The inventors have also recognized that uniform wear across logical units is beneficial for extending the life of an NVM group or endurance group, but is not beneficial for swapping a logical unit belonging to the NVM group or endurance group with an idle logical unit: because the difference in wear level between the idle logical unit and the occupied logical units may be large, after swapping, the remaining lives of the logical units of the NVM group or endurance group may differ significantly. For this reason, in another embodiment, for the data of the physical block labeled 5 of source logical unit LUN(S) to be migrated, with candidate logical units LUN(D), LUN 622, and LUN 624, the physical block provided by LUN 624 is selected to carry the data to be migrated because logical unit LUN 624 has a shorter remaining life (e.g., a higher erase count). In this way, the life of LUN 624 is consumed faster than that of the other logical units (see FIG. 6A: LUN(S) 620, LUN 622, LUN 626, and LUN 628). Further, after a future swap of LUN 624 with a free logical unit, the difference in remaining life between the free logical unit and the other logical units of the NVM group or endurance group will not be too large.

Still optionally, when selecting the logical unit to carry the data to be migrated, the probability with which each candidate logical unit is selected is specified, so that the logical units of the NVM group or endurance group are used according to the specified probabilities and, with use, differences develop in their remaining lives. For example, candidate logical units LUN 624, LUN 626, and LUN 628 are selected with probabilities of 20%, 30%, and 50%, so that the life of LUN 628 is consumed fastest and that of LUN 624 slowest. Then, when a logical unit needs to be exchanged, LUN 628 is preferably exchanged with an idle logical unit.

Still optionally, in addition to specifying the selection probability of each candidate logical unit when selecting the logical unit to carry the data to be migrated, the probability with which each candidate logical unit is selected for building large blocks is also specified, so that, as the storage device is used, the remaining lives of the logical units diverge further. This makes it easier, when a logical unit needs to be swapped, to select a suitable logical unit from the NVM group or endurance group.

In still other embodiments, in the early stage of the storage device's use, each logical unit is selected for building large blocks with substantially the same probability, to achieve wear leveling across the storage media of the storage device. As the storage device is used, and/or in anticipation of swapping an occupied logical unit with a free logical unit, the selection probabilities for building large blocks and/or for data migration are changed, so that some logical units are used more for building large blocks and others relatively less. For example, a logical unit's selection probability is set to the quotient of the total number of erasures of its physical blocks divided by the total number of erasures that have occurred in the storage device, so that the probability that a logical unit is selected for building a large block or for carrying migrated data is positively correlated with its erase count.
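A sketch of such erase-count-weighted selection follows: each LUN's weight is its cumulative erase count, so its chance of being picked is the quotient described above (the weighting could equally be inverted for the negative-correlation variant). The counts are illustrative, and the modulo bias of rand() is ignored for the sketch:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_LUNS 4u

/* Cumulative erase counts per LUN (illustrative values). */
static const uint64_t erase_count[NUM_LUNS] = {100, 300, 200, 400};

/* Pick a LUN with probability erase_count[i] / total erase count,
 * i.e. positively correlated with wear, as described above. */
static uint32_t pick_weighted(void) {
    uint64_t total = 0;
    for (uint32_t i = 0; i < NUM_LUNS; i++) total += erase_count[i];
    uint64_t r = (uint64_t)rand() % total;   /* roughly uniform in [0, total) */
    for (uint32_t i = 0; i < NUM_LUNS; i++) {
        if (r < erase_count[i]) return i;
        r -= erase_count[i];
    }
    return NUM_LUNS - 1;                     /* not reached */
}

int main(void) {
    uint32_t hits[NUM_LUNS] = {0};
    for (int n = 0; n < 100000; n++) hits[pick_weighted()]++;
    for (uint32_t i = 0; i < NUM_LUNS; i++)  /* expect ~10%, 30%, 20%, 40% */
        printf("LUN %u selected %u times\n", (unsigned)i, (unsigned)hits[i]);
    return 0;
}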

While the preferred embodiments of the present application have been described, those skilled in the art may make additional changes and modifications to these embodiments once they learn of the basic inventive concepts. Therefore, the appended claims are intended to be interpreted as covering the preferred embodiments and all changes and modifications that fall within the scope of the present application. It will be apparent to those skilled in the art that various changes and modifications may be made to the present application without departing from its spirit and scope; if such modifications and variations fall within the scope of the claims of the present application and their equivalents, the present application is intended to include them as well.
