Address mapping method, solid state disk controller and solid state disk

Document No.: 190288 | Publication date: 2021-11-02

Reading note: This technology, "Address mapping method, solid state disk controller and solid state disk", was designed and created by 苟铎 and 陈祥 on 2021-06-30. Its main content is as follows: the embodiments of the present application relate to the field of solid state disk applications, and disclose an address mapping method, a solid state disk controller and a solid state disk. The address mapping method divides the super blocks in a solid state disk into several service super block groups and one hot standby super block group, establishes a mapping relation between each address segment in each logical address segment and each service super block in the service super block groups, and establishes a mapping table of logical addresses and physical addresses for each service super block. When the address segment corresponding to a certain service super block is repeatedly and sequentially written, a mapping relation is established between that service super block and one hot standby super block in the hot standby super block group, and the mapping table of logical addresses and physical addresses of the service super block is refreshed. The embodiments of the present application can reduce the DRAM space occupied by the mapping table and improve the reliability of the solid state disk.

1. An address mapping method is applied to a solid state disk, and the method comprises the following steps:

dividing super blocks in a solid state disk into a plurality of service super block groups and a hot standby super block group, wherein each service super block group comprises a plurality of service super blocks, and the hot standby super block group comprises a plurality of hot standby super blocks;

establishing a mapping relation between an address field in each logical address field and each service super block in the service super block group, and establishing a mapping table of a logical address and a physical address of each service super block;

when the address field corresponding to a certain service super block is repeatedly and sequentially written, establishing the mapping relation between the service super block and a hot standby super block in the hot standby super block group, and refreshing the mapping table of the logical address and the physical address of the service super block.

2. The method of claim 1, wherein the dividing the super blocks in the solid state disk into a plurality of service super block groups comprises:

if the solid state disk is used for the first time, sequentially selecting a preset number of service super blocks to determine each service super block group;

and if the solid state disk is used again after being formatted, determining each service super block group according to the erasing times of each service super block.

3. The method of claim 2, wherein determining each service superblock group according to the erasure count of each service superblock comprises:

and selecting a plurality of service superblocks to form each service superblock group according to the erasing times of each service superblock, so that the difference of the average erasing times of different service superblock groups is smaller than a preset difference threshold.

4. The method of claim 1, wherein the establishing a mapping relationship between the address field in each logical address field and each superblock in the service superblock group comprises:

segmenting the logic address of the solid state disk, and determining a plurality of logic address segments;

establishing a mapping relation between each logic address field and each service super block group;

dividing each logic address segment into a plurality of address segments, and establishing the mapping relation between each address segment and each super block in the service super block group.

5. The method of claim 1, further comprising:

when the address field corresponding to a certain service super block is repeatedly and sequentially written, selecting a hot standby super block from the hot standby super block group, and writing the host data into the hot standby super block.

6. The method of claim 5, further comprising:

and if a certain hot standby superblock is fully written, acquiring a new hot standby superblock from the hot standby superblock group, and updating the mapping relation between the service superblock and the hot standby superblock in the hot standby superblock group.

7. The method of claim 1, further comprising:

when the amount of invalid data in a certain service super block exceeds a preset threshold value, moving the valid data in the service super block to a corresponding hot standby super block, and erasing the service super block;

and adding the hot standby super block corresponding to the service super block into the service super block group corresponding to the service super block, and adding the erased service super block into the hot standby super block group.

8. The method of claim 1, further comprising:

when a bad block appears in a certain service super block, selecting a physical block from the hot standby super block group as a replacement block of the bad block, writing the valid data of the bad block into the replacement block, and updating a remapping table; wherein the remapping table stores the mapping of bad blocks to replacement blocks.

9. The method of claim 8, wherein prior to selecting a physical block from the hot spare superblock set as a replacement block for the bad block, the method further comprises: reserving a hot standby superblock from the hot standby superblock group as a replacement block of a bad block;

the selecting a physical block from the hot standby super block group as a replacement block of the bad block comprises the following steps:

and selecting one physical block in the reserved hot standby super block as a replacement block of the bad block, wherein the selected physical block and the bad block are located on the same die.

10. The method of claim 1, further comprising:

obtaining an IO mode of a solid state disk;

judging whether the IO mode is a sequential write mode;

if yes, entering a first address mapping mode, wherein the first address mapping mode comprises: dividing super blocks in a solid state disk into a plurality of service super block groups and a hot standby super block group, wherein each service super block group comprises a plurality of service super blocks, and the hot standby super block group comprises a plurality of hot standby super blocks; establishing a mapping relation between an address field in each logical address field and each super block in a service super block group, and establishing a mapping table of a logical address and a physical address of each service super block; when the address field corresponding to a certain service super block is repeatedly and sequentially written, establishing a mapping relation between the service super block and a hot standby super block in a hot standby super block group, and refreshing a mapping table of a logical address and a physical address of the service super block;

if not, entering a second address mapping mode, wherein the second address mapping mode comprises the following steps: and setting a global mapping table, wherein the global mapping table is used for determining the mapping relation between the logical address and the physical address.

11. A solid state disk controller is applied to a solid state disk, the solid state disk comprises at least one flash memory medium, and the solid state disk controller is characterized by comprising:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the address mapping method of any one of claims 1-10.

12. A solid state disk, comprising:

the solid state hard disk controller of claim 11;

and the flash memory medium is in communication connection with the solid state hard disk controller.

Technical Field

The present application relates to the field of solid state disk applications, and in particular, to an address mapping method, a solid state disk controller, and a solid state disk.

Background

Solid State Drives (SSDs) are hard disks built from arrays of solid-state electronic memory chips, comprising a control unit and storage units (FLASH chips or DRAM chips). At present, a considerable proportion of solid state disk systems are equipped with Dynamic Random Access Memory (DRAM), giving the SSD a large data cache space for caching data.

After the hard disk reports its capacity to the host, the host addresses data through the logical addresses presented to applications; meanwhile, because of factors such as internal redundancy and the inability to overwrite flash memory in place, the solid state disk needs to maintain the translation relationship between logical addresses and physical addresses.

Currently, a solid state disk usually maintains a mapping table from logical addresses to physical addresses, i.e., an L2P table. Each time the SSD writes a piece of user data into the flash memory address space, it records the mapping relation between the logical address and the physical address. When the host wants to read the data, the SSD looks up the physical page corresponding to the logical page in the mapping table, then accesses the flash to read the data and returns it to the user.

However, the current L2P table approach is a centralized design, and maintaining the L2P table has a cost: when the traffic volume is large, the table must be accessed and updated intensively. During normal operation, the L2P table resides in DRAM and requires DRAM space for the entire table. Once the solid state disk encounters an exception, a large-capacity disk needs a long time to rebuild the L2P table; in extreme cases, if the L2P table cannot be recovered, user data may even be lost.
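To make the DRAM cost concrete, here is a back-of-the-envelope sketch; the 4-byte entry and 4 KiB mapping granularity are common configurations assumed for illustration, not figures stated in this application:

```python
def l2p_table_bytes(capacity_bytes: int,
                    page_size: int = 4096,
                    entry_size: int = 4) -> int:
    """Approximate DRAM footprint of a flat L2P table:
    one fixed-size entry per logical page."""
    num_pages = capacity_bytes // page_size
    return num_pages * entry_size

# A 1 TiB drive needs roughly 1 GiB of DRAM just for the table.
one_tib = 1 << 40
print(l2p_table_bytes(one_tib))  # 1073741824
```

This linear growth of the table with capacity is the space problem the segmented scheme below is designed to reduce.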

Disclosure of Invention

The embodiments of the present application aim to provide an address mapping method, a solid state disk controller and a solid state disk, so as to solve the technical problem that the mapping table of an existing solid state disk occupies a large amount of space, and to improve the reliability of the solid state disk.

In order to solve the above technical problem, an embodiment of the present application provides the following technical solutions:

in a first aspect, an embodiment of the present application provides an address mapping method, which is applied to a solid state disk, and the method includes:

dividing super blocks in a solid state disk into a plurality of service super block groups and a hot standby super block group, wherein each service super block group comprises a plurality of service super blocks, and the hot standby super block group comprises a plurality of hot standby super blocks;

establishing a mapping relation between an address field in each logical address field and each service super block in the service super block group, and establishing a mapping table of a logical address and a physical address of each service super block;

when the address field corresponding to a certain service super block is repeatedly and sequentially written, establishing the mapping relation between the service super block and a hot standby super block in the hot standby super block group, and refreshing the mapping table of the logical address and the physical address of the service super block.
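The three steps above can be sketched as a toy model. All names, sizes and the redirection bookkeeping here are illustrative assumptions, not the patented implementation:

```python
class SegmentMapper:
    """Toy model of the scheme: address segments map 1:1 to service
    superblocks, so no per-page global table is needed; a segment that
    is repeatedly sequentially rewritten is redirected to a hot-spare
    superblock (illustrative only)."""

    def __init__(self, num_service: int, num_spare: int):
        # Service superblocks take ids 0..num_service-1, spares follow.
        self.service = list(range(num_service))
        self.spares = list(range(num_service, num_service + num_spare))
        self.redirect = {}  # service superblock id -> hot-spare id

    def physical_superblock(self, segment: int) -> int:
        """Resolve an address segment to the superblock holding its data."""
        sb = self.service[segment]
        return self.redirect.get(sb, sb)  # follow redirection if present

    def on_sequential_rewrite(self, segment: int) -> int:
        """Map the segment's service superblock onto a hot spare and
        refresh (here: replace) its logical-to-physical mapping."""
        sb = self.service[segment]
        spare = self.spares.pop(0)
        self.redirect[sb] = spare
        return spare

m = SegmentMapper(num_service=4, num_spare=2)
assert m.physical_superblock(2) == 2      # static mapping at first
m.on_sequential_rewrite(2)                # segment 2 is rewritten
assert m.physical_superblock(2) == 4      # now served by a hot spare
```

Only the small redirection dictionary is dynamic state; the bulk of the mapping is implied by position, which is what keeps the DRAM footprint low.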

In some embodiments, the dividing the super blocks in the solid state disk into a plurality of service super block groups includes:

if the solid state disk is used for the first time, sequentially selecting a preset number of service super blocks to determine each service super block group;

and if the solid state disk is used again after being formatted, determining each service super block group according to the erasing times of each service super block.

In some embodiments, the determining each service superblock group according to the number of erasures of each service superblock includes:

and selecting a plurality of service superblocks to form each service superblock group according to the erasing times of each service superblock, so that the difference of the average erasing times of different service superblock groups is smaller than a preset difference threshold.
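One possible way to realize such grouping is a greedy sketch (the application does not prescribe a concrete algorithm, so this is an assumption): sort superblocks by erase count and deal them snake-wise into groups so that the group averages stay close.

```python
def group_by_erase_count(erase_counts, group_size):
    """Sort superblocks by erase count, then deal them snake-wise into
    groups so that average erase counts stay balanced across groups.
    Superblocks beyond a full group are left ungrouped."""
    order = sorted(range(len(erase_counts)), key=lambda i: erase_counts[i])
    num_groups = len(order) // group_size
    groups = [[] for _ in range(num_groups)]
    for rank, sb in enumerate(order[:num_groups * group_size]):
        row, col = divmod(rank, num_groups)
        # Reverse direction on odd rows ("snake") to balance averages.
        g = col if row % 2 == 0 else num_groups - 1 - col
        groups[g].append(sb)
    return groups

counts = [10, 50, 20, 40, 30, 60]
groups = group_by_erase_count(counts, group_size=3)
avgs = [sum(counts[i] for i in g) / 3 for g in groups]
assert abs(avgs[0] - avgs[1]) < 10  # group averages end up close
```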

In some embodiments, the establishing a mapping relationship between the address field in each logical address field and each superblock in the service superblock group includes:

segmenting the logic address of the solid state disk, and determining a plurality of logic address segments;

establishing a mapping relation between each logic address field and each service super block group;

dividing each logic address segment into a plurality of address segments, and establishing the mapping relation between each address segment and each super block in the service super block group.
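Because each address segment maps statically to one superblock, locating data reduces to pure arithmetic rather than a per-page table lookup; a sketch with illustrative, assumed sizes:

```python
SEGMENT_PAGES = 1024      # pages per address segment (illustrative)
SEGMENTS_PER_GROUP = 8    # address segments per logical address segment

def locate(lpn: int):
    """Decompose a logical page number into (group, superblock index
    within the group, page offset) without a global mapping table."""
    segment, offset = divmod(lpn, SEGMENT_PAGES)
    group, sb_index = divmod(segment, SEGMENTS_PER_GROUP)
    return group, sb_index, offset

assert locate(0) == (0, 0, 0)
assert locate(1024) == (0, 1, 0)          # next segment, same group
assert locate(8 * 1024 + 5) == (1, 0, 5)  # spills into the next group
```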

In some embodiments, the method further comprises:

when the address field corresponding to a certain service super block is repeatedly and sequentially written, selecting a hot standby super block from the hot standby super block group, and writing the host data into the hot standby super block.

In some embodiments, the method further comprises:

and if a certain hot standby superblock is fully written, acquiring a new hot standby superblock from the hot standby superblock group, and updating the mapping relation between the service superblock and the hot standby superblock in the hot standby superblock group.

In some embodiments, the method further comprises:

when the amount of invalid data in a certain service super block exceeds a preset threshold value, moving the valid data in the service super block to a corresponding hot standby super block, and erasing the service super block;

and adding the hot standby super block corresponding to the service super block into the service super block group corresponding to the service super block, and adding the erased service super block into the hot standby super block group.
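The role exchange described above can be modeled in a few lines. This is an illustrative sketch; the list-based bookkeeping is an assumption:

```python
def swap_roles(service_group, spare_group, service_sb, spare_sb):
    """After garbage collection: the hot spare joins the service group
    in place of the worn service superblock, which (once erased)
    returns to the spare pool."""
    idx = service_group.index(service_sb)
    service_group[idx] = spare_sb    # spare takes over the address segment
    spare_group.remove(spare_sb)
    spare_group.append(service_sb)   # erased block becomes a spare
    return service_group, spare_group

svc, spare = [0, 1, 2], [10, 11]
swap_roles(svc, spare, service_sb=1, spare_sb=10)
assert svc == [0, 10, 2]
assert spare == [11, 1]
```

Keeping the swap in-place at the same index preserves the static address-segment-to-superblock arithmetic.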

In some embodiments, the method further comprises:

when a bad block appears in a certain service super block, selecting a physical block from the hot standby super block group as a replacement block of the bad block, writing the valid data of the bad block into the replacement block, and updating a remapping table; wherein the remapping table stores the mapping of bad blocks to replacement blocks.
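A hedged sketch of such a remapping table (the block-id keying and the spare-pool management here are assumptions for illustration):

```python
class BadBlockRemapper:
    """Redirect accesses from bad physical blocks to replacement blocks
    drawn from the hot-spare pool (illustrative model)."""

    def __init__(self, spare_blocks):
        self.spare_blocks = list(spare_blocks)
        self.remap = {}  # bad block id -> replacement block id

    def retire(self, bad_block: int) -> int:
        """Record a bad block and assign it a replacement block.
        (Copying the bad block's valid data would happen here.)"""
        replacement = self.spare_blocks.pop(0)
        self.remap[bad_block] = replacement
        return replacement

    def resolve(self, block: int) -> int:
        """Consulted on every access: follow the remap if one exists."""
        return self.remap.get(block, block)

r = BadBlockRemapper(spare_blocks=[100, 101])
r.retire(7)
assert r.resolve(7) == 100   # bad block redirected
assert r.resolve(8) == 8     # healthy blocks pass through
```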

In some embodiments, before selecting a physical block from the hot standby super block group as a replacement block for the bad block, the method further comprises: reserving a hot standby superblock from the hot standby superblock group as a source of replacement blocks for bad blocks, wherein the reserved hot standby superblock and the bad block are located on the same die;

the selecting a physical block from the hot standby super block as a replacement block of the bad block comprises the following steps:

and taking the reserved hot standby superblock as a replacement block of the bad block.

In some embodiments, the method further comprises:

obtaining an IO mode of a solid state disk;

judging whether the IO mode is a sequential write mode;

if yes, entering a first address mapping mode, wherein the first address mapping mode comprises: dividing super blocks in a solid state disk into a plurality of service super block groups and a hot standby super block group, wherein each service super block group comprises a plurality of service super blocks, and the hot standby super block group comprises a plurality of hot standby super blocks; establishing a mapping relation between an address field in each logical address field and each super block in a service super block group, and establishing a mapping table of a logical address and a physical address of each service super block; when the address field corresponding to a certain service super block is repeatedly and sequentially written, establishing a mapping relation between the service super block and a hot standby super block in a hot standby super block group, and refreshing a mapping table of a logical address and a physical address of the service super block;

if not, entering a second address mapping mode, wherein the second address mapping mode comprises the following steps: and setting a global mapping table, wherein the global mapping table is used for determining the mapping relation between the logical address and the physical address.
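A simple heuristic for the sequential-write judgment above (the application does not define the detector; the window size and run length below are assumptions):

```python
def is_sequential(write_lpns, min_run: int = 8) -> bool:
    """Classify an IO trace as sequential if the most recent writes
    form a run of strictly consecutive logical page numbers."""
    if len(write_lpns) < min_run:
        return False
    tail = write_lpns[-min_run:]
    return all(b == a + 1 for a, b in zip(tail, tail[1:]))

def choose_mapping_mode(write_lpns) -> str:
    # First mode: segmented superblock mapping; second: global L2P table.
    return "segmented" if is_sequential(write_lpns) else "global"

assert choose_mapping_mode(list(range(100, 116))) == "segmented"
assert choose_mapping_mode([5, 99, 3, 42, 7, 1, 88, 2]) == "global"
```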

In a second aspect, an embodiment of the present application provides a solid state disk controller, which is applied to a solid state disk, where the solid state disk includes at least one flash memory medium, and the solid state disk controller includes:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the address mapping method of the first aspect.

In a third aspect, an embodiment of the present application provides a solid state disk, including:

the solid state hard disk controller of the second aspect;

and the flash memory medium is in communication connection with the solid state hard disk controller.

In a fourth aspect, the present application further provides a non-transitory computer-readable storage medium storing computer-executable instructions for enabling a solid state disk to perform the address mapping method described above.

The beneficial effects of the embodiments of the present application are as follows. Different from the prior art, the address mapping method provided by the embodiments of the present application includes: dividing the super blocks in a solid state disk into a plurality of service super block groups and one hot standby super block group, wherein each service super block group comprises a plurality of service super blocks and the hot standby super block group comprises a plurality of hot standby super blocks; establishing a mapping relation between each address segment in each logical address segment and each service super block in the service super block groups, and establishing a mapping table of logical addresses and physical addresses for each service super block; and, when the address segment corresponding to a certain service super block is repeatedly and sequentially written, establishing a mapping relation between that service super block and one hot standby super block in the hot standby super block group, and refreshing the mapping table of logical addresses and physical addresses of the service super block. On one hand, by dividing the super blocks into service super block groups and a hot standby super block group and mapping each address segment in each logical address segment to a super block in a service super block group, the mapping from logical addresses to service super blocks can be realized; on the other hand, when the address segment corresponding to a certain service super block is repeatedly and sequentially written, a mapping relation to a hot standby super block is established and the mapping table of that service super block is refreshed. As a result, the embodiments of the present application can reduce the DRAM space occupied by the mapping table and improve the reliability of the solid state disk.

Drawings

One or more embodiments are illustrated by way of example in the accompanying drawings, in which elements sharing a reference numeral denote similar elements; unless otherwise specified, the figures are not drawn to scale.

Fig. 1 is a schematic structural diagram of a solid state disk provided in an embodiment of the present application;

fig. 2 is a schematic diagram of a solid state hard disk controller according to an embodiment of the present application;

FIG. 3 is a schematic diagram of an L2P table provided by an embodiment of the present application;

fig. 4 is a flowchart illustrating an address mapping method according to an embodiment of the present application;

FIG. 5 is a detailed flowchart of step S402 in FIG. 4;

FIG. 6 is a diagram illustrating a logical address and service superblock set according to an embodiment of the present application;

FIG. 7 is a diagram illustrating a mapping relationship between address segments and superblocks according to an embodiment of the present application;

fig. 8 is a schematic diagram of a mapping relationship between a service superblock and a hot standby superblock according to an embodiment of the present application;

FIG. 9 is a schematic diagram of another mapping relationship between a service superblock and a hot standby superblock according to an embodiment of the present application;

fig. 10 is a flowchart illustrating another address mapping method according to an embodiment of the present application.

Detailed Description

In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

In addition, the technical features mentioned in the embodiments of the present application described below may be combined with each other as long as they do not conflict with each other.

A typical Solid State Drive (SSD) usually includes a Solid State disk controller (host controller), a flash memory array, a cache unit, and other peripheral units.

The solid state hard disk controller serves as the control and operation unit that manages the SSD's internal system. The flash memory array (NAND Flash), as the memory cells storing data (both user data and system data), typically presents multiple channels (Channel, abbreviated CH), each channel independently connected to a set of NAND Flash, e.g., CH0, CH1, ..., CHx. A characteristic of flash memory (NAND Flash) is that it must be erased before it can be written, and the number of erase cycles of each flash block is limited. The cache unit is used to cache the mapping table and is generally a Dynamic Random Access Memory (DRAM). Other peripheral units may include sensors, registers, and the like.

The technical scheme of the application is specifically explained in the following by combining the drawings in the specification.

Referring to fig. 1, fig. 1 is a schematic structural diagram of a solid state disk according to an embodiment of the present disclosure.

As shown in fig. 1, the solid state disk 100 includes a flash memory medium 110 and a solid state disk controller 120 connected to the flash memory medium 110. The solid state disk 100 is in communication connection with the host 200 in a wired or wireless manner, so as to implement data interaction.

The Flash memory medium 110, the storage medium of the solid state disk 100, is also called Flash, FLASH memory, or Flash granule. It is a kind of non-volatile memory that can retain data for a long time without power supply, and its storage characteristics are comparable to those of a hard disk, which makes it the basis of the storage media of various portable digital devices.

The FLASH memory medium 110 may be Nand FLASH, which uses a single transistor as the storage unit of a binary signal. Its structure is very similar to that of an ordinary semiconductor transistor, except that a floating gate and a control gate are added to the transistor: the floating gate stores electrons, its surface is covered by a layer of silicon-oxide insulator, and it is capacitively coupled to the control gate. When negative electrons are injected into the floating gate under the action of the control gate, the storage state of the memory cell changes from "1" to "0"; when the negative electrons are removed from the floating gate, the storage state changes from "0" back to "1". The insulator covering the surface of the floating gate traps the negative electrons inside it, thereby realizing data storage. That is, the Nand FLASH memory cell is a floating gate transistor, and data is stored in the form of electric charge in the floating gate transistor; the amount of charge stored is related to the magnitude of the voltage applied to the floating gate transistor.

A Nand FLASH device comprises at least one chip; each chip consists of a plurality of physical blocks (Blocks), and each physical block contains a plurality of pages (Pages). The physical block is the smallest unit on which Nand FLASH performs an erase operation, and the page is the smallest unit on which Nand FLASH performs read and write operations; the capacity of a Nand FLASH device equals the number of physical blocks multiplied by the number of pages per physical block multiplied by the page size. Specifically, the flash memory medium 110 may be classified into SLC, MLC, TLC and QLC according to the number of voltage levels of a memory cell.
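The capacity described above is the product of the block count, the pages per block, and the page size; a quick check with assumed geometry values:

```python
def nand_capacity_bytes(blocks_per_chip: int,
                        pages_per_block: int,
                        page_size: int) -> int:
    """NAND chip capacity = blocks x pages per block x page size."""
    return blocks_per_chip * pages_per_block * page_size

# e.g. 1024 blocks x 256 pages x 16 KiB pages = 4 GiB per chip
assert nand_capacity_bytes(1024, 256, 16 * 1024) == 4 * (1 << 30)
```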

The solid state hard disk controller 120 includes a data converter 121, a processor 122, a buffer 123, a flash memory controller 124, and an interface 125.

The data converter 121 is connected to the processor 122 and the flash memory controller 124, respectively, and is configured to convert binary data into hexadecimal data and hexadecimal data into binary data. Specifically, when the flash memory controller 124 writes data to the flash memory medium 110, the binary data to be written is converted into hexadecimal data by the data converter 121 and then written into the flash memory medium 110. When the flash memory controller 124 reads data from the flash memory medium 110, the hexadecimal data stored in the flash memory medium 110 is converted into binary data by the data converter 121, and the converted data is then read from the binary data page register. The data converter 121 may include a binary data register and a hexadecimal data register: the binary data register may be used to store data converted from hexadecimal to binary, and the hexadecimal data register may be used to store data converted from binary to hexadecimal.

The processor 122 is connected to the data converter 121, the buffer 123, the flash memory controller 124 and the interface 125, respectively; the processor 122 and these components may be connected through a bus or by other means, and the processor is configured to run the non-volatile software programs, instructions and modules stored in the buffer 123, so as to implement any of the method embodiments of the present application.

The buffer 123 is mainly used for buffering the read/write commands sent by the host 200 and the read data or write data obtained from the flash memory medium 110 according to those commands. The buffer 123, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The buffer 123 may include a program storage area that stores the operating system and the application programs required for at least one function. In addition, the buffer 123 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the buffer 123 may optionally include memory remotely located from the processor 122; such remote memory may be connected to the solid state disk through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The buffer 123 may be a Static Random Access Memory (SRAM), a Tightly Coupled Memory (TCM), or a Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM).

The flash memory controller 124 is connected to the flash memory medium 110, the data converter 121, the processor 122 and the buffer 123. It accesses the flash memory medium 110 at the back end and manages various parameters and data I/O of the flash memory medium 110; it provides the access interface and protocol, implementing the corresponding SAS/SATA target protocol end or NVMe protocol end, acquiring the I/O instructions sent by the host 200, decoding them, and generating internal private data results awaiting execution; and it serves as the core processing module responsible for the FTL (Flash Translation Layer).

The interface 125 is connected to the host 200, the data converter 121, the processor 122 and the buffer 123, and is configured to receive data sent by the host 200 or data sent by the processor 122, so as to implement data transmission between the host 200 and the processor 122. The interface 125 may be a SATA-2 interface, a SATA-3 interface, a SAS interface, an mSATA interface, a PCI-E interface, an NGFF interface, a CFast interface, an SFF-8639 interface, or an M.2 NVMe/SATA interface.

Referring to fig. 2 again, fig. 2 is a schematic structural diagram of a solid state hard disk controller according to an embodiment of the present disclosure; the solid state disk controller belongs to the solid state disk.

As shown in fig. 2, the solid state hard disk controller includes: PCIe interface controller 126, DDR controller 127, NVMe interface controller 128, processor 122, peripheral module 129, datapath module 1210, and flash controller 124.

Specifically, the PCIe interface controller 126 is configured to control the PCIe communication protocol, the DDR controller 127 is configured to control the dynamic random access memory, the NVMe interface controller 128 is configured to control the NVMe communication protocol, the peripheral module 129 is configured to control other related communication protocols, and the data path module 1210 is configured to control the data path, for example, managing the write cache; the flash memory controller 124 is used for data processing of the flash memory.

The solid state disk controller 120 further includes a data converter 121, a buffer 123, an interface 125, and the like.

Specifically, the data converter 121 is connected to the processor and the flash memory controller, respectively, and is configured to convert binary data into hexadecimal data and convert the hexadecimal data into binary data. Specifically, when the flash memory controller writes data to the flash memory medium, the binary data to be written is converted into hexadecimal data by the data converter, and then the hexadecimal data is written to the flash memory medium. When the flash memory controller reads data from the flash memory medium, the hexadecimal data stored in the flash memory medium is converted into binary data through the data converter, and then the converted data is read from the binary data page register. Wherein the data converter may include a binary data register and a hexadecimal data register. The binary data register may be used to store data converted from hexadecimal to binary, and the hexadecimal data register may be used to store data converted from binary to hexadecimal.

Specifically, the processor 122 is connected to the data converter 121, the buffer 123, the flash controller 124 and the interface 125, respectively, where the processor and the data converter, the buffer, the flash controller and the interface may be connected through a bus or other means, and the processor is configured to run the nonvolatile software program, the instructions and the modules stored in the buffer, so as to implement any method embodiment of the present application.

Specifically, the buffer is mainly used for buffering read/write commands sent by the host and the read data or write data acquired from the flash memory medium according to those commands. The buffer, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The buffer may include a storage program area that stores an operating system and the application programs required for at least one function. In addition, the buffer may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the buffer optionally includes memory located remotely from the processor and connected to it through a network; examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The buffer may be a Static Random Access Memory (SRAM), a Tightly Coupled Memory (TCM), or a Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM).

Specifically, the flash memory controller is connected to the flash memory medium, the data converter, the processor and the buffer, and is configured to access the flash memory medium at the back end and manage various parameters and data I/O of the flash memory medium; or to provide an access interface and protocol, implement the corresponding SAS/SATA target protocol end or NVMe protocol end, and acquire and decode I/O instructions sent by the host to generate internal private data results waiting for execution; or to serve as the core processing module in charge of the FTL (Flash Translation Layer).

Specifically, the interface is connected to the host, the data converter, the processor and the buffer, and is configured to receive data sent by the host or by the processor, so as to implement data transmission between the host and the processor. The interface may be a SATA-2 interface, a SATA-3 interface, a SAS interface, an mSATA interface, a PCI-E interface, an NGFF interface, a CFast interface, an SFF-8639 interface, or an M.2 NVMe/SATA interface.

Referring to fig. 3, fig. 3 is a schematic diagram of an L2P table according to an embodiment of the present disclosure;

As shown in fig. 3, the solid state disk maintains a mapping table from logical address to physical address, i.e., an L2P table, and records the mapping relationship between the logical address and the physical address each time the SSD writes user data into the flash memory address space. When the host wants to read data, the SSD looks up the physical page corresponding to the logical page in the mapping table, then accesses the flash to read the data and returns it to the user.
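As a rough illustration of this lookup (the table structure and the function names here are hypothetical, not taken from any particular firmware), a page-granularity L2P mapping can be sketched as:

```python
# Minimal sketch of an L2P (logical-to-physical) lookup; a flat dict stands in
# for the per-page mapping table held in DRAM.
l2p_table = {}  # logical page address -> physical page address


def write_page(lpa: int, ppa: int) -> None:
    # On each host write, record where the data actually landed in flash.
    l2p_table[lpa] = ppa


def read_page(lpa: int):
    # On a host read, translate the logical page to its physical page;
    # None means the logical page has never been written.
    return l2p_table.get(lpa)


write_page(100, 7)   # host writes logical page 100, placed at physical page 7
write_page(100, 9)   # overwrite: same logical page, new physical location
```

Note how an overwrite only updates the table entry; the old physical page becomes invalid data to be garbage-collected later.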

However, the L2P table approach follows a centralized design and has several disadvantages. Maintaining the L2P table requires a high space cost; when traffic is heavy, the L2P table must be accessed and updated intensively; and during normal operation the full L2P table is resident in DRAM, occupying DRAM space equal to the size of the entire table. Once the solid state disk encounters an exception, a large-capacity disk needs a long time to recover the L2P table; in extreme cases, if the L2P table cannot be recovered, user data may even be lost.

Based on this, the embodiment of the present application provides an address mapping method to solve the problem that the existing L2P table occupies a large space.

Referring to fig. 4, fig. 4 is a schematic flowchart illustrating an address mapping method according to an embodiment of the present disclosure;

as shown in fig. 4, the address mapping method includes:

step S401: dividing super blocks in the solid state disk into a plurality of service super block groups and a hot standby super block group;

specifically, the SSD system divides the NAND particles into a plurality of physical blocks with equal size, and distributes the NAND particles evenly in a packet (planet) structure, and assuming that the SSD has N planet, for convenience of management, the system selects one physical Block from each planet to form a Super Block (SBLK) with larger capacity, and converts all operations (such as read/write erase) on the Block into operations on the Super Block (SBLK).

Dividing all superblocks in the solid state disk into a plurality of service superblock groups (SBLK groups) and a hot spare superblock group (hot spare SBLK group), wherein each service superblock group comprises a plurality of service superblocks, and the hot spare superblock group comprises a plurality of hot spare superblocks.

Specifically, the dividing the super blocks in the solid state disk into a plurality of service super block groups includes:

if the solid state disk is used for the first time, sequentially selecting a preset number of service super blocks to determine each service super block group, wherein each service super block group comprises a plurality of service super blocks;

and if the solid state disk is used again after being formatted, determining each service super block group according to the erasing times of each service super block.

For example: if the solid state disk is a new disk, sequentially selecting a preset number of service super blocks to generate each service super block group, wherein each service super block group comprises the same number of service super blocks; and if the solid state disk is used again after being formatted, acquiring the erasing times (PE cycles) of each service super block of all the service super block groups to determine each service super block group.

Specifically, the determining each service super block group according to the erasing and writing times of each service super block includes:

According to the erase count of each service superblock, several service superblocks are selected to form a service superblock group such that the difference between the average erase counts of different service superblock groups is smaller than a preset difference threshold. Specifically, the average erase count of a service superblock group is the sum of the erase counts of all service superblocks in the group divided by the number of service superblocks in the group. The preset difference threshold may be set according to specific needs, for example, to half the difference between the maximum and minimum average erase counts among all service superblocks; that is, the difference between the average erase count of each service superblock group and that of any other group is smaller than half the difference between the maximum and minimum average erase counts among all service superblocks.

For example, all service superblocks are sorted in ascending order of average erase count, e.g., the average erase counts are 11, 11, 12, 12, 13, 13. The overall average of all service superblocks is (11+11+12+12+13+13)/6 = 12, and each service superblock group is then selected so that its average is as close as possible to this value, for example, dividing the service superblocks into 3 service superblock groups of (11, 13), (11, 13) and (12, 12) respectively. At this time, the maximum average erase count among all service superblocks is 13 and the minimum is 11; half of their difference is (13-11)/2 = 1, i.e., the preset difference threshold is set to 1. The differences between the average erase counts of the three service superblock groups are all smaller than 1, so the difference between the average erase counts of different service superblock groups is smaller than the preset difference threshold.

In the embodiment of the application, by determining a plurality of service super block groups, the difference value of the average erasing times of different service super block groups is smaller than the preset difference threshold value, so that the erasing times of different service super block groups are similar, which is beneficial to realizing the similar service lives of the service super block groups.
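The grouping idea above can be sketched as follows. The snake-order dealing strategy is an assumed heuristic, not the algorithm prescribed by this application; it only illustrates how group averages can be kept within the threshold:

```python
# Sketch: group service superblocks so that group-average erase counts stay
# within a preset difference threshold. The grouping heuristic is an assumption.
def group_superblocks(erase_counts, num_groups):
    # Sort block indices by erase count, then deal them out in snake order
    # so high- and low-wear blocks are mixed evenly across groups.
    order = sorted(range(len(erase_counts)), key=lambda i: erase_counts[i])
    groups = [[] for _ in range(num_groups)]
    for rank, idx in enumerate(order):
        g = rank % num_groups
        if (rank // num_groups) % 2 == 1:
            g = num_groups - 1 - g  # reverse dealing direction on odd passes
        groups[g].append(idx)
    return groups


def group_averages(erase_counts, groups):
    # Average erase count of a group = sum of its counts / number of blocks.
    return [sum(erase_counts[i] for i in g) / len(g) for g in groups]


counts = [11, 11, 12, 12, 13, 13]            # the example from the text
groups = group_superblocks(counts, 3)
avgs = group_averages(counts, groups)
threshold = (max(counts) - min(counts)) / 2  # (13 - 11) / 2 = 1
```

With the example counts, each of the three groups ends up with an average of 12, so all pairwise differences fall below the threshold of 1.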

It can be understood that the sum of the number of the service superblock and the hot standby superblock is not greater than the number of all superblocks in the solid state disk, and preferably, the sum of the number of the service superblock and the hot standby superblock is equal to the number of all superblocks in the solid state disk.

Step S402: establishing a mapping relation between an address field in each logical address field and each service super block in the service super block group, and establishing a mapping table of a logical address and a physical address of each service super block;

specifically, referring to fig. 5 again, fig. 5 is a detailed flowchart of step S402 in fig. 4;

as shown in fig. 5, the step S402: establishing a mapping relation between an address field in each logical address field and each service superblock in a service superblock group, comprising:

step S4021: segmenting the logic address of the solid state disk, and determining a plurality of logic address segments;

It can be understood that after the solid state disk reports its capacity to the host, since the capacity of the solid state disk is fixed, the Logical Block Addresses (LBAs) that the host can operate on are also fixed.

Referring to fig. 6, fig. 6 is a schematic diagram illustrating a logical address and service super block set according to an embodiment of the present application;

as shown in fig. 6, assuming that the logical addresses of the solid state disk are logic addr 0 to logic addr z, all the logical addresses of the solid state disk are divided into a plurality of logical address segments, for example: logic addr 0 to logic addr h are one logical address segment, and so on, each logical address segment includes the same number of logical addresses to determine a plurality of logical address segments.

Step S4022: establishing a mapping relation between each logic address field and each service super block group;

specifically, the number of the logical address segments is the same as the number of the service super block groups, and each logical address segment corresponds to one service super block group one by one to establish a mapping relationship between each logical address segment and each service super block group, as shown in fig. 6, SBLK group 0 corresponds to logical address segments 0 to logic addr z.

Step S4023: dividing each logic address segment into a plurality of address segments, and establishing the mapping relation between each address segment and each super block in the service super block group.

Specifically, in the service super block set, it is necessary to further determine to write data into a corresponding super block through a logical address, so that each logical address segment is divided into a plurality of address segments, and a mapping relationship between each address segment and each super block in the service super block set is established.

In the embodiment of the application, the mapping relationship between each address field and each super block in the service super block group is calculated by a preset algorithm of firmware of the solid state disk, and once the preset algorithm is determined, the corresponding relationship between the logical address field and the super block is determined by a rule determined by the preset algorithm. It will be appreciated that the predetermined algorithm determines the mapping relationship of each address field to each superblock in the set of service superblocks based on the number of SBLKs and the amount of data stored by each SBLK.
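A minimal sketch of such a preset algorithm, assuming invented segment and superblock sizes, could compute the target group and superblock purely arithmetically:

```python
# Hedged sketch of a "preset algorithm": a fixed arithmetic rule mapping a
# logical page address to (group, superblock, offset). Both constants below
# are invented for illustration, not values from this application.
SEG_PER_GROUP = 4      # address segments (superblocks) per group -- assumed
PAGES_PER_SBLK = 256   # logical pages one superblock can hold -- assumed


def map_logical(lpa: int):
    segment = lpa // PAGES_PER_SBLK    # which address segment
    group = segment // SEG_PER_GROUP   # which service superblock group
    sblk = segment % SEG_PER_GROUP     # which superblock inside the group
    offset = lpa % PAGES_PER_SBLK      # page offset inside that superblock
    return group, sblk, offset
```

Because the rule is pure arithmetic, the segment-to-superblock correspondence is fully determined once the constants are fixed, matching the text's point that the relationship is determined by the algorithm rather than stored per entry.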

Referring to fig. 7, fig. 7 is a schematic diagram illustrating a mapping relationship between address segments and super blocks according to an embodiment of the present disclosure;

As shown in fig. 7, the SBLK group includes N SBLKs, namely SBLK 0 to SBLK N-1, and each SBLK corresponds to an address segment, where one address segment corresponds to one superblock, for example, address segment 0 corresponds to SBLK a. After each address segment corresponds to a service superblock, a mapping table of the logical address and physical address of each service superblock is established, that is, the L2P table corresponding to each service superblock.

Step S403: when the address field corresponding to a certain service super block is repeatedly and sequentially written, establishing the mapping relation between the service super block and a hot standby super block in the hot standby super block group, and refreshing the mapping table of the logical address and the physical address of the service super block.

Specifically, if the host writes to a certain area repeatedly and sequentially, that is, repeatedly and sequentially writes to a certain address segment, a mapping relationship between the service superblock and the hot spare superblocks needs to be established: one or more hot spare superblocks are selected from the hot spare superblock group to establish a mapping relationship (SBLK map table) with the service superblock currently being written repeatedly and sequentially.

Referring to fig. 8 again, fig. 8 is a schematic diagram of a mapping relationship between a service superblock and a hot standby superblock according to an embodiment of the present application;

As shown in fig. 8, when the host repeatedly and sequentially writes to address segment 0, a mapping relationship is established between the service superblock SBLK a corresponding to address segment 0 and the hot spare superblock hot spare SBLK 1;

Further, when the host repeatedly and sequentially writes to address segment 1, if the hot spare superblock hot spare SBLK 2 is fully written, another hot spare superblock, hot spare SBLK 3, is obtained from the hot spare superblock group to establish a mapping relationship between the service superblock SBLK b and hot spare SBLK 2 and hot spare SBLK 3. Data is then written into hot spare SBLK 3, and the mapping table of the logical address and physical address of service superblock SBLK b, i.e., the L2P table corresponding to that service superblock, is refreshed.

In this embodiment of the present application, when the address segment corresponding to a certain service superblock is repeatedly and sequentially written, a hot spare superblock is selected from the hot spare superblock group and the host data is written into it. The hot spare superblock may be selected as the superblock with the smallest erase count in the hot spare superblock group, or as a hot spare superblock whose erase count is close to that of the existing service superblock; the policy may be defined by the firmware vendor.

Specifically, please refer to fig. 9, where fig. 9 is a schematic diagram of a mapping relationship between another service super block and a hot standby super block according to an embodiment of the present application;

As shown in FIG. 9, one SBLK in SBLK group 0 corresponds to one SBLK in the hot spare SBLK group, and when the address segment corresponding to an SBLK in SBLK group 0 is repeatedly written sequentially, the host data is written to the corresponding SBLK in the hot spare SBLK group.

In this embodiment of the present application, if a certain hot standby super block is fully written, a new hot standby super block is obtained from the hot standby super block group, and the mapping relationship between the service super block and the hot standby super block in the hot standby super block group is updated.

As shown in fig. 8, if the hot spare superblock hot spare SBLK 2 is fully written, hot spare SBLK 3 is obtained from the hot spare superblock group, the host data is written into the new hot spare superblock, and the mapping relationship between the service superblock and the hot spare superblocks in the hot spare superblock group is updated; that is, the service superblock SBLK b is mapped to hot spare SBLK 2 and hot spare SBLK 3, and hot spare SBLK 2 is mapped to hot spare SBLK 3.
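The chaining of hot spare superblocks described above can be sketched as follows; the class, the names, and the lowest-erase-count selection policy are illustrative assumptions:

```python
# Sketch of the SBLK map table update when a hot spare superblock fills up.
# The free-list policy (pick the lowest erase count) is one of the selection
# options the text mentions, chosen here only for illustration.
class HotSpareMap:
    def __init__(self, spares):
        # spares: {spare_name: erase_count} available in the hot spare group
        self.free = dict(spares)
        self.map_table = {}  # service SBLK -> ordered list of hot spare SBLKs

    def attach_spare(self, service_sblk):
        # Choose the free hot spare with the fewest erase cycles and chain it
        # to the service superblock being written repeatedly and sequentially.
        spare = min(self.free, key=self.free.get)
        del self.free[spare]
        self.map_table.setdefault(service_sblk, []).append(spare)
        return spare


m = HotSpareMap({"spare1": 5, "spare2": 3, "spare3": 4})
first = m.attach_spare("SBLKb")    # repeated sequential writes begin
second = m.attach_spare("SBLKb")   # first spare is full, chain another
```

The ordered list per service superblock mirrors fig. 8: the service superblock maps to its first spare, and that spare maps onward to the next when it fills.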

In an embodiment of the present application, the method further includes:

when invalid data in a certain service super block is larger than a preset threshold value, moving the valid data in the service super block to a corresponding hot standby super block, and erasing the service super block;

and adding the hot standby super block corresponding to the service super block into the service super block group corresponding to the service super block, and adding the erased service super block into the hot standby super block group.

Specifically, the preset threshold is a preset proportion threshold. The proportion of invalid data to all data in the service superblock is calculated; if this proportion is greater than the preset threshold, the valid data in the service superblock is moved to the corresponding hot spare superblock and the service superblock is erased. Then the hot spare superblock corresponding to the service superblock is added to the service superblock group corresponding to that service superblock, and the erased service superblock is added to the hot spare superblock group, which is equivalent to converting the hot spare superblock into a service superblock and the service superblock into a hot spare superblock, thereby realizing the conversion between hot spare superblocks and service superblocks.

In this embodiment of the present application, after the hot spare superblock corresponding to the service superblock is added to the service superblock group corresponding to that service superblock, and the erased service superblock is added to the hot spare superblock group, the mapping relationship is updated, for example: the new service superblock corresponds to one hot spare superblock in the hot spare superblock group.
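The role swap between a service superblock and its hot spare can be sketched as follows, assuming a hypothetical 50% invalid-data threshold and simple list-based group structures:

```python
# Sketch of the service/hot-spare role swap after garbage collection.
# The 50% threshold and the list-based groups are assumptions for illustration.
def maybe_swap(service_group, hot_spare_group, sblk, invalid_ratio,
               threshold=0.5):
    # When too much of a service superblock holds invalid data, its valid
    # data has (conceptually) been moved to its hot spare already; erase the
    # service superblock and swap the two blocks' roles.
    if invalid_ratio <= threshold:
        return False
    spare = hot_spare_group.pop(0)                # spare takes over the segment
    service_group[service_group.index(sblk)] = spare
    hot_spare_group.append(sblk)                  # erased block rejoins the pool
    return True


svc = ["A", "B", "C"]
spare_pool = ["S1", "S2"]
swapped = maybe_swap(svc, spare_pool, "B", invalid_ratio=0.7)
```

After the swap, the former spare serves the address segment and the erased block waits in the hot spare pool, matching the conversion the text describes.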

In an embodiment of the present application, the method further includes:

when a bad block appears in a certain service super block, selecting a physical block from the hot standby super block group as a replacement block of the bad block, writing effective data of the bad block into the replacement block, and updating a remapping table; wherein the remapping table stores a mapping of bad blocks to replacement blocks.

Specifically, when a bad block is found on a die, user data is written to a replacement block on the same die rather than across dies. With the replacement policy, a remapping table (Remap Table), i.e., the mapping of bad blocks to replacement blocks, needs to be maintained inside the SSD. When the SSD needs to access block B, it first looks up the remapping table, and the physical block actually accessed is B'.
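A minimal remapping-table sketch (the names are assumed) shows the lookup performed on every access:

```python
# Sketch of the Remap Table: bad physical block -> replacement block.
# Block names are placeholders; a real table would key on physical addresses.
remap_table = {}


def access_block(block):
    # Every access first consults the remap table: if block B was marked bad
    # and replaced by B', the physical block actually accessed is B'.
    # Blocks with no entry are accessed directly.
    return remap_table.get(block, block)


remap_table["B"] = "B_prime"  # bad block B replaced by B' on the same die
```

Keeping the replacement on the same die, as the text requires, preserves the superblock's one-block-per-die concurrency.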

In this embodiment of the present application, before selecting a physical block from the hot spare super block set as a replacement block of the bad block, the method further includes: reserving a hot standby superblock from the hot standby superblock group as a replacement block of a bad block;

the selecting a physical block from the hot standby super block group as a replacement block of the bad block comprises the following steps:

and selecting one physical block in the reserved hot standby super block as a replacement block of the bad block, wherein the selected physical block and the bad block are positioned in the same crystal grain.

Specifically, a hot spare superblock is selected and reserved as the source of replacement blocks for bad blocks of all service superblock groups. If the solid state disk is a new disk, a hot spare superblock is randomly selected; if the solid state disk has already been used, the hot spare superblock with the smallest erase count is selected. A physical block of the hot spare superblock on the same die as the bad block is chosen, so as to better maintain concurrency.

In the embodiment of the application, a mapping table of logical addresses and physical addresses is established with the superblock as the granularity, so that the corresponding data can be acquired according to the mapping table. Because this method targets sequential write and read scenarios, only a small number of L2P tables need to be loaded into DRAM, for example by prefetching, thereby reducing the occupation of DRAM space. For instance, when an application reads address segment b, the mapping tables corresponding to address segments c and d can be loaded into DRAM in advance; when address segment c is read, address segment b no longer needs to be kept in memory, and the mapping table of address segment e can be read in. This greatly reduces the DRAM space occupied by the mapping table, shortens the mapping table recovery time, and improves the reliability of the solid state disk.
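The prefetch window described above can be sketched with a small LRU-style cache; the window size of three resident tables and the look-ahead of two segments are assumptions:

```python
# Sketch of prefetching per-superblock L2P tables during sequential reads,
# keeping only a small sliding window resident in DRAM. Window size and
# look-ahead distance are invented for illustration.
from collections import OrderedDict


class L2PWindow:
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.resident = OrderedDict()  # segment id -> per-superblock L2P table

    def touch(self, segment):
        # Load the current segment's table (stubbed as a dict) and prefetch
        # the next two segments; evict the least recently used table once
        # the window is exceeded.
        for seg in (segment, segment + 1, segment + 2):
            self.resident[seg] = self.resident.get(seg, {})
            self.resident.move_to_end(seg)
        while len(self.resident) > self.capacity:
            self.resident.popitem(last=False)


w = L2PWindow(capacity=3)
w.touch(1)   # reading segment b: tables for b, c, d become resident
w.touch(2)   # reading segment c: b is evicted, e is prefetched
```

Only the window, not the full disk mapping, occupies DRAM, which is the source of the space saving claimed in the text.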

Referring to fig. 10 again, fig. 10 is a schematic flowchart illustrating another address mapping method according to an embodiment of the present disclosure;

as shown in fig. 10, the address mapping method includes:

Start;

step S101: obtaining an IO mode of a solid state disk;

step S102: judging whether the IO mode is sequential writing;

specifically, if the IO mode is sequential write, a first address mapping mode is entered; if the IO mode is not sequential writing, entering a second address mapping mode;

step S103: entering a first address mapping mode;

specifically, the first address mapping mode is a service super block set mode and a hot standby super block set mode, where the first address mapping mode includes: dividing super blocks in a solid state disk into a plurality of service super block groups and a hot standby super block group, wherein each service super block group comprises a plurality of service super blocks, and each hot standby super block group comprises a plurality of hot standby super blocks; establishing a mapping relation between an address field in each logical address field and each super block in a service super block group, and establishing a mapping table of a logical address and a physical address of each service super block; when the address field corresponding to a certain service super block is repeatedly and sequentially written, establishing a mapping relation between the service super block and a hot standby super block in a hot standby super block group, and refreshing a mapping table of a logical address and a physical address of the service super block;

step S104: entering a second address mapping mode;

specifically, the second address mapping mode is a global L2P table mode, where the second address mapping mode includes: a global mapping table, i.e., a full L2P table, is set, and the global mapping table is used for determining the mapping relationship between the logical addresses and the physical addresses.

In the embodiment of the application, the L2P table with SBLK size as granularity is smaller and lighter to maintain. In order to maintain good performance, at present, the full amount of L2P tables are usually stored in the DRAM of the solid state disk, and the space occupied by the DRAM is large. This application can reduce the occupation to DRAM space through loading a small amount of L2P table in DRAM space, has promoted the reliability.

Compared with the existing global L2P table: if the global L2P table has a problem (for example, a software bug corrupts the L2P table), the data of the entire disk is lost. Meanwhile, when the L2P table needs to be recovered after an exception, recovery is a long process (for example, when the L2P table is not updated in time because of an abnormal power failure during write-intensive host traffic, power-on recovery of a large-capacity disk through P2L takes a long time). When the SBLK size is used as the granularity, only the L2P table currently being written is updated; the table becomes small, the probability of damage is reduced, the abnormal recovery time is shortened, and the reliability of the solid state disk is improved.

In an embodiment of the present application, there is provided an address mapping method, including: dividing super blocks in a solid state disk into a plurality of service super block groups and a hot standby super block group, wherein each service super block group comprises a plurality of service super blocks, and each hot standby super block group comprises a plurality of hot standby super blocks; establishing a mapping relation between an address field in each logical address field and each service super block in the service super block group, and establishing a mapping table of a logical address and a physical address of each service super block; when the address field corresponding to a certain service super block is repeatedly and sequentially written, establishing a mapping relation between the service super block and a hot standby super block in a hot standby super block group, and refreshing a mapping table of a logical address and a physical address of the service super block.

Embodiments of the present application also provide a non-volatile computer storage medium storing computer-executable instructions, which are executed by one or more processors, for example, the one or more processors may execute the address mapping method in any of the above method embodiments, for example, execute the above described steps.

The above-described embodiments of the apparatus or device are merely illustrative, wherein the unit modules described as separate parts may or may not be physically separate, and the parts displayed as module units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network module units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.

Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a general hardware platform, and certainly can also be implemented by hardware. Based on such understanding, the technical solutions mentioned above may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute the method according to each embodiment or some parts of the embodiments.

Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; within the context of the present application, where technical features in the above embodiments or in different embodiments can also be combined, the steps can be implemented in any order and there are many other variations of the different aspects of the present application as described above, which are not provided in detail for the sake of brevity; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.
