Method for acquiring control parameters in solid state disk, storage medium and electronic device

Document No.: 923461    Publication date: 2021-03-02

Note: This technology, "Method for acquiring control parameters in solid state disk, storage medium and electronic device", was designed and created by Li Wei and Yuan Wei on 2020-12-15. The embodiments of the present application disclose a method for acquiring control parameters in a solid state disk, a storage medium and an electronic device. The method comprises: acquiring operation data of the solid state disk, wherein the operation data comprise front-end command hysteresis factors under the operation states at different moments; establishing a nonlinear model between the operation state and the front-end command hysteresis factor by reinforcement learning according to the operation data; and determining a target front-end command hysteresis factor for the current operation state by using the nonlinear model, wherein the target front-end command hysteresis factor achieves the optimal input/output operations per second (IOPS) in the current operation state. The embodiments of the present application use reinforcement learning, a nonlinear algorithm, to select the front-end command hysteresis factor, so that the performance IOPS value tends to be maximized and the performance fluctuation tends to be minimized.

1. A method for acquiring control parameters in a solid state disk comprises the following steps:

acquiring operation data of the solid state disk, wherein the operation data comprises front-end instruction hysteresis factors under operation states at different moments;

establishing a nonlinear model between the operation state and a front-end command hysteresis factor by adopting reinforcement learning according to the operation data;

and determining a target front-end command hysteresis factor in the current operation state by using the nonlinear model, wherein the target front-end command hysteresis factor can achieve the optimal input/output operations per second (IOPS) in the current operation state.

2. The method of claim 1, wherein:

the operation state comprises at least two groups of operation information, each group of operation information comprises a state parameter and an action parameter, the state parameter comprises at least two operation conditions, each operation condition corresponds to an event which can cause storage delay, the events in any two operation conditions are different, and the action parameter is a front-end instruction hysteresis factor corresponding to the state parameter;

and the establishing a nonlinear model between the operation state and the front-end command hysteresis factor by adopting reinforcement learning according to the operation data comprises:

and taking the event in each group of operation information as an input variable, taking the front-end instruction hysteresis factor in each group of operation information as an output variable, and establishing a nonlinear model between the operation state and the front-end instruction hysteresis factor by adopting reinforcement learning.

3. The method of claim 2, wherein:

the event comprises at least one of a garbage collection event, an event that an on-board temperature exceeds a preset temperature threshold, and a performance fluctuation event caused by a read error.

4. The method according to claim 2 or 3, wherein the establishing a non-linear model between the operation state and the front-end command hysteresis factor by using reinforcement learning with the event in each set of operation information as an input variable and the front-end command hysteresis factor in each set of operation information as an output variable comprises:

determining a storage delay caused by each operating condition in each set of operating information;

for the same group of operation information, determining the weight of each operation condition according to the storage delay caused by each operation condition in the operation information and a front-end instruction hysteresis factor in an action parameter in the operation information;

and establishing a nonlinear model between the operation state and the front-end command hysteresis factor by using the weight of each operation condition and adopting reinforcement learning.

5. The method of claim 4, further comprising:

after the weight of each operating condition is determined, selecting, from the events corresponding to the operating conditions, the events whose weights are larger than a preset weight threshold to represent the operating state.

6. The method of claim 4, wherein the non-linear model is obtained by:

establishing a reinforcement learning model by utilizing the at least two pieces of operation information;

performing an iterative loop on the reinforcement learning model by adopting a Q-learning algorithm to obtain a training result;

and adjusting the reinforcement learning model according to the training result to obtain the nonlinear model.

7. The method of claim 6, wherein the reward R of the reinforcement learning model is obtained by:

R = r0 − ζe·E − δ·ζd·D,

wherein E is the delay caused by each operating condition, D is the difference between the IOPS at the previous moment and the IOPS at the current moment, ζe represents the weight of the delay parameter, ζd represents the weight of the IOPS, and δ is used to map the delay parameter and the IOPS to the same order of magnitude;

wherein r0 is a positive number whose value range satisfies:

max(ζe·E + δ·ζd·D) < r0 < 2·max(ζe·E + δ·ζd·D).

8. The method of claim 6, wherein:

the state-action value function Q(s, a) in the Q-learning algorithm is updated, and the updating process is:

Qt+1(s, a) = (1 − α)·Qt(s, a) + α·[Rt + γ·max_b Qt(s′, b)],

where s represents the current state, a is the action taken in state s, s′ represents the state after the state transition, b represents an action in state s′, Qt and Qt+1 represent the Q values before and after the update respectively, α represents the learning rate of the reinforcement learning model, Rt represents the R obtained in the current iteration, and γ is a discount factor, smaller than 1, which represents the importance of future rewards.

9. A storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of any of claims 1 to 8 when executed.

10. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 8.

Technical Field

The embodiment of the application relates to the field of solid state disks, and in particular, to a method for acquiring control parameters in a solid state disk, a storage medium and an electronic device.

Background

With the continuous explosion of mass data, the HDD (Hard Disk Drive), the original mainstream storage medium, finds it increasingly difficult to meet users' performance requirements at a cost acceptable to users. Another storage medium has therefore emerged, namely the SSD (Solid State Disk or Solid State Drive). According to the target application market, solid state disks can be divided into consumer-level solid state disks and enterprise-level solid state disks.

Currently, more and more users have recognized the performance advantages brought by SSDs and have begun to accept SSDs into data centers. However, applying this product in an enterprise-level environment is not as simple as imagined; every new thing encounters various difficulties, and the SSD is no exception. For the traditional HDD, the technology, the process and the application mode are all very mature, and the application is relatively simple and clear: the performance of an HDD, while not high, is relatively predictable. A solid state disk using NAND flash as the medium has higher performance, but characteristics such as writing being slower than reading and the need to erase before writing may cause unusual extra delays in flash reading. There are many judgment loop structures in the firmware program of the SSD, and these extra storage delays affect the execution efficiency of the program; the experience presented to the user is performance fluctuation. Existing enterprise-level SSD firmware programs evaluate the operating efficiency by the average delay, or rely on the slowest link in the whole architecture, as the basis for delaying front-end command processing.

In practical applications, the related art is limited by the size of the sampled data volume, which reduces the IOPS (Input/Output Operations Per Second) of the enterprise-level SSD, or cannot maximize the performance IOPS while ensuring the performance stability requirement.

Disclosure of Invention

In order to solve any one of the above technical problems, embodiments of the present application provide a method for acquiring a control parameter in a solid state disk, a storage medium, and an electronic apparatus.

In order to achieve the purpose of the embodiment of the present application, an embodiment of the present application provides a method for obtaining a control parameter in a solid state disk, including:

acquiring operation data of the solid state disk, wherein the operation data comprises front-end instruction hysteresis factors under operation states at different moments;

establishing a nonlinear model between the operation state and a front-end command hysteresis factor by adopting reinforcement learning according to the operation data;

and determining a target front-end command hysteresis factor in the current running state by using the nonlinear model, wherein the target front-end command hysteresis factor can reach the optimal IOPS in the current running state.

A storage medium having a computer program stored therein, wherein the computer program is arranged to perform the method as described above when executed.

An electronic device comprising a memory having a computer program stored therein and a processor arranged to execute the computer program to perform the method as described above.

One of the above technical solutions has the following advantages or beneficial effects:

by acquiring the operation data of the solid state disk, establishing a nonlinear model between the operation state and the front-end command hysteresis factor by reinforcement learning according to the operation data, and determining the target front-end command hysteresis factor for the current operation state by using the nonlinear model, the correspondence between the operation state and the front-end command hysteresis factor is recorded more accurately, and a data basis is provided for subsequently predicting the value of the front-end command hysteresis factor accurately; and the front-end command hysteresis factor is selected by the nonlinear algorithm of reinforcement learning, so that the performance IOPS value tends to be maximized and the performance fluctuation tends to be minimized.

Additional features and advantages of the embodiments of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the application. The objectives and other advantages of the embodiments of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.

Drawings

The accompanying drawings are included to provide a further understanding of the embodiments of the present application and are incorporated in and constitute a part of this specification; they illustrate the embodiments of the present application and, together with the examples of the embodiments of the present application, serve to explain the embodiments of the present application rather than limit them.

Fig. 1 is a flowchart of a method for acquiring a control parameter in a solid state disk according to an embodiment of the present application;

fig. 2 is a flowchart of a learning-based hysteresis determination method of an SSD according to an embodiment of the present application.

Detailed Description

In order to make the objects, technical solutions and advantages of the embodiments of the present application more apparent, the embodiments of the present application will be described in detail below with reference to the accompanying drawings. It should be noted that, in the embodiments of the present application, features in the embodiments and the examples may be arbitrarily combined with each other without conflict.

Fig. 1 is a flowchart of a method for acquiring a control parameter in a solid state disk according to an embodiment of the present application. As shown in fig. 1, the method includes:

101, acquiring operation data of a solid state disk, wherein the operation data comprises front-end instruction hysteresis factors under operation states at different moments;

102, establishing a nonlinear model between an operation state and a front-end command hysteresis factor by adopting reinforcement learning according to the operation data;

and 103, determining a target front-end command hysteresis factor in the current operation state by using the nonlinear model, wherein the target front-end command hysteresis factor can reach the optimal IOPS in the current operation state.

If a delay occurs while the firmware in the SSD is running, then, in order to keep the execution time of each single front-end command (cmd) smooth, the processing of the front-end command must be slowed down to wait for the firmware.

In the related art, the delay time generated by the firmware is converted into the delay time applied to the front-end cmd through a linear expression, for example, y = kx, where k is the delay factor, y is the hysteresis value, and x is the delay time. Different from the related art, the method provided by the embodiment of the application establishes a nonlinear model between the operation state and the front-end command hysteresis factor, so that the correspondence between the operation state and the front-end command hysteresis factor is recorded more accurately, providing a data basis for accurately predicting the value of the front-end command hysteresis factor subsequently.
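For illustration, a minimal Python sketch of the related-art linear mapping described above is shown below; the delay factor and the delay time are hypothetical example values, not values taken from the application.

```python
# Related-art approach: hysteresis value y = k * x, with a single fixed delay factor k
# applied to the firmware delay x regardless of which events caused that delay.
# The values below are illustrative assumptions.

def linear_hysteresis(delay_time_us: float, k: float = 0.5) -> float:
    """Convert a firmware delay into a front-end command hysteresis value linearly."""
    return k * delay_time_us

print(linear_hysteresis(200.0))  # 200 µs of firmware delay -> 100.0 µs of hysteresis
```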

According to the method provided by the embodiment of the application, the operation data of the solid state disk are acquired, a nonlinear model between the operation state and the front-end command hysteresis factor is established by reinforcement learning according to the operation data, and the target front-end command hysteresis factor for the current operation state is determined by using the nonlinear model. In this way, the correspondence between the operation state and the front-end command hysteresis factor is recorded more accurately, a data basis is provided for accurately predicting the value of the front-end command hysteresis factor subsequently, and the front-end command hysteresis factor is selected by the nonlinear algorithm of reinforcement learning, so that the performance IOPS value tends to be maximized and the performance fluctuation tends to be minimized.

The method provided by the embodiments of the present application is explained as follows:

In an exemplary embodiment, the operating state includes at least two sets of operating information, each set of operating information includes a state parameter and an action parameter, where the state parameter includes at least two operating conditions, each operating condition corresponds to an event that can cause a storage delay, the events in any two operating conditions are different, and the action parameter is the front-end command hysteresis factor corresponding to the state parameter;

and the establishing a nonlinear model between the operation state and the front-end command hysteresis factor by adopting reinforcement learning according to the operation data comprises:

and taking the event in each group of operation information as an input variable, taking the front-end instruction hysteresis factor in each group of operation information as an output variable, and establishing a nonlinear model between the operation state and the front-end instruction hysteresis factor by adopting reinforcement learning.

Unlike the related art in which the overall delay caused by the event of the firmware is used as an input variable, the embodiment of the present application uses the event occurring in the firmware as an input variable, so as to more accurately record the cause of the storage delay, thereby ensuring that the prediction of the front-end command hysteresis factor is performed by using more accurate event information.

In one exemplary embodiment, the event includes at least one of a garbage collection event, an event that an onboard temperature exceeds a preset temperature threshold, and a performance fluctuation event caused by a read error.

In practical application, the events causing the storage delay may differ according to the manufacturer of the SSD, or may be set according to actual needs.

In an exemplary embodiment, the establishing a non-linear model between the operating state and the front-end command hysteresis factor by using reinforcement learning with the event in each set of operating information as an input variable and the front-end command hysteresis factor in each set of operating information as an output variable includes:

determining a storage delay caused by each operating condition in each set of operating information;

for the same group of operation information, determining the weight of each operation condition according to the storage delay caused by each operation condition in the operation information and a front-end instruction hysteresis factor in an action parameter in the operation information;

and establishing a nonlinear model between the operation state and the front-end command hysteresis factor by using the weight of each operation condition and adopting reinforcement learning.

Because at least two events may each cause storage delay at a certain moment, the delay time caused by each single event is determined first; the delay times caused by the individual events are then integrated, and the degree of influence of each event on the front-end command hysteresis factor is determined in combination with the value of the front-end command hysteresis factor under that operation information, so that the importance of each event to the storage delay is determined more accurately.
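The embodiment does not prescribe an exact weighting formula, so the following Python sketch is only one plausible reading: each event's weight is taken as its share of the total storage delay scaled by the hysteresis factor recorded for that set of operation information. The function name and the normalization are illustrative assumptions.

```python
def event_weights(delays_us: dict[str, float], hysteresis_factor: float) -> dict[str, float]:
    """Derive a weight per event from its delay contribution within one set of operation information."""
    total = sum(delays_us.values())
    if total == 0:
        return {event: 0.0 for event in delays_us}
    return {event: (delay / total) * hysteresis_factor
            for event, delay in delays_us.items()}

# Example: garbage collection dominates the storage delay in this sample.
weights = event_weights({"gc": 800.0, "temperature": 50.0, "read_error": 150.0},
                        hysteresis_factor=0.6)
print(weights)
```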

In one exemplary embodiment, the method further comprises:

after the weight of each operating condition is determined, selecting, from the events corresponding to the operating conditions, the events whose weights are larger than a preset weight threshold to represent the operating state.

Based on the obtained weights, the events with large weights are kept as the state conditions for subsequent training, so as to improve the efficiency of the subsequent training operation.
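Continuing the previous sketch, a short illustrative filter for this step is shown below; the threshold value is an assumed example, not a value given in the application.

```python
WEIGHT_THRESHOLD = 0.1  # preset weight threshold (assumed value)

def select_state_events(weights: dict[str, float], threshold: float = WEIGHT_THRESHOLD) -> list[str]:
    """Keep only the events whose weights exceed the preset threshold to represent the operating state."""
    return [event for event, w in weights.items() if w > threshold]

print(select_state_events({"gc": 0.48, "temperature": 0.03, "read_error": 0.09}))  # ['gc']
```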

In an exemplary embodiment, the nonlinear model is obtained by:

establishing a reinforcement learning model by utilizing the at least two pieces of operation information;

performing an iterative loop on the reinforcement learning model by adopting a Q-learning algorithm to obtain a training result;

and adjusting the reinforcement learning model according to the training result to obtain the nonlinear model.

In the foregoing exemplary embodiment, the present application provides a learning-based hysteresis scheme for the SSD: a reinforcement learning model may be created in advance, the action set, the state set and the reward value of the reinforcement learning model are determined, the reinforcement learning model is computed with the Q-learning algorithm, an iterative loop is performed on the reinforcement learning model using the hyper-parameters of the Q-learning algorithm, the reinforcement learning model is adjusted according to the training result, and the hysteresis decision is made with the adjusted reinforcement learning model.

The method accurately predicts future delay parameters, their patterns, their frequency and the like by means of machine learning. It addresses the problems that the prior technical scheme is limited by the size of the sampled data volume, which can reduce the enterprise-level SSD IOPS value, or cannot express the IOPS to the maximum extent on the premise of ensuring the performance stability requirement, where IOPS represents the data throughput per unit time.

In the above exemplary embodiment, the selection of the hysteresis value is defined as an action a in reinforcement learning, and the determination of the action value can be understood as converting the delay time caused by each input state into the hysteresis time through the delay factor.

Factors that contribute to SSD performance delay mainly include:

garbage collection: performance degradation caused by the exhaustion of free blocks;

extreme conditions, such as performance degradation caused by excessive on-board temperature, and performance fluctuation caused by read errors.

Thus, the state input variables include: the temperature t at the current moment, the time delay e caused by read errors at the current moment (the number of read errors can be stored in the flash memory as a log by the firmware and then converted into a time-delay unit through a delay factor set by the firmware), and the time delay g caused by the current garbage collection (GC).
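The following Python sketch assembles these state variables under stated assumptions; the conversion factor for read errors and all example values are hypothetical, not the firmware's actual interface.

```python
from dataclasses import dataclass

READ_ERROR_DELAY_FACTOR_US = 20.0  # assumed firmware-set delay factor per read error, in microseconds

@dataclass
class SsdState:
    t: float  # on-board temperature at the current moment, in degrees Celsius
    e: float  # delay caused by read errors at the current moment, in microseconds
    g: float  # delay caused by the current garbage collection, in microseconds

def build_state(temperature_c: float, read_error_count: int, gc_delay_us: float) -> SsdState:
    """Convert raw firmware observations into the (t, e, g) state vector."""
    return SsdState(t=temperature_c,
                    e=read_error_count * READ_ERROR_DELAY_FACTOR_US,
                    g=gc_delay_us)

state = build_state(temperature_c=68.0, read_error_count=3, gc_delay_us=450.0)
```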

Fig. 2 is a flowchart of a learning-based hysteresis determination method of an SSD according to an embodiment of the present application. As shown in fig. 2, the method includes:

step 201, constructing a reinforcement learning model, and determining an action set, a state set and an incentive value of the reinforcement learning model;

action a ∈ A = {hysteresis_0, hysteresis_1, hysteresis_2, …}, where A is the action set;

state s ∈ S = {(t1, e1, g1), (t2, e2, g2), …}, where S is the state set;

The delay caused by each delay condition within a single period needs to be calculated; the delay times, and the IOPS calculated from each delay time, are then integrated as the reward of reinforcement learning, so that a multi-objective problem is converted into a single-objective problem and the parameters are adjusted to solve for the optimized value.
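A minimal Python sketch of step 201 is given below, continuing the earlier state sketch. It assumes the hysteresis actions form a small discrete set and that the (t, e, g) state is discretized into bins so it can index a Q-table; the bin edges and candidate action values are illustrative assumptions.

```python
import numpy as np

ACTIONS_US = [0.0, 50.0, 100.0, 200.0]   # candidate hysteresis values, in microseconds (assumed)
T_BINS = [40.0, 60.0, 80.0]              # temperature bin edges, °C (assumed)
E_BINS = [0.0, 100.0, 300.0]             # read-error delay bin edges, µs (assumed)
G_BINS = [0.0, 200.0, 600.0]             # GC delay bin edges, µs (assumed)

def discretize(state: "SsdState") -> tuple[int, int, int]:
    """Map a continuous (t, e, g) state to discrete bin indices usable as a Q-table index."""
    return (int(np.digitize(state.t, T_BINS)),
            int(np.digitize(state.e, E_BINS)),
            int(np.digitize(state.g, G_BINS)))

# Q-table indexed by (t_bin, e_bin, g_bin, action_index), initialized to zero.
q_table = np.zeros((len(T_BINS) + 1, len(E_BINS) + 1, len(G_BINS) + 1, len(ACTIONS_US)))
```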

Step 202, determining a reward value according to the state input variables and the action output variable;

the reward R of the reinforcement learning model is obtained by the following method:

R = r0 − ζe·E − δ·ζd·D,

wherein E is the delay caused by each operating condition, D is the difference between the IOPS at the previous moment and the IOPS at the current moment, ζe represents the weight of the delay parameter, ζd represents the weight of the IOPS, and δ is used to map the delay parameter and the IOPS to the same order of magnitude;

wherein r0 is a positive number whose value range satisfies:

max(ζe·E + δ·ζd·D) < r0 < 2·max(ζe·E + δ·ζd·D).

Because the firmware strategies of different chip designers differ, the types of delay parameters that need to be brought in also differ; what is needed is to substitute the different delay parameters and the corresponding IOPS values into the reward function for calculation, so that the reward tends toward the optimum.
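The reward defined above can be transcribed directly into code. In the Python sketch below, the weights ζe and ζd, the scaling factor δ, the choice of r0 inside its stated range, and the use of the absolute IOPS difference for D are illustrative assumptions.

```python
ZETA_E = 1.0   # weight of the delay parameter ζe (assumed)
ZETA_D = 1.0   # weight of the IOPS term ζd (assumed)
DELTA = 0.01   # δ: maps the IOPS difference onto the same order of magnitude as the delay (assumed)

def reward(total_delay_us: float, iops_diff: float, r0: float) -> float:
    """Compute R = r0 − ζe·E − δ·ζd·D for one step; D is taken as the absolute IOPS difference."""
    return r0 - ZETA_E * total_delay_us - DELTA * ZETA_D * abs(iops_diff)

# r0 is a positive number between the maximum observed penalty and twice that maximum;
# here it is set to 1.5 times an assumed calibration value.
max_penalty = 1200.0
r0 = 1.5 * max_penalty
print(reward(total_delay_us=800.0, iops_diff=5000.0, r0=r0))
```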

Step 203, updating an action selection strategy;

the smaller the difference between the IOPS at the front and rear moments, the more stable the performance.

The action selection strategy determines the action to take in a given state; by constructing a Q-table and storing the Q value of each step, the action that can obtain the maximum benefit is selected, which realizes the selection of the front-end command hysteresis factor.
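Continuing the earlier sketches, a greedy selection over the Q-table could look as follows; the occasional random exploration (ε-greedy) during training is common practice but is not prescribed by the text, so it is marked as an assumption.

```python
import random

def select_action(q_table, state_idx: tuple[int, int, int], epsilon: float = 0.1) -> int:
    """Return the index of the chosen hysteresis action for the given discretized state."""
    if random.random() < epsilon:                  # exploration step (assumed ε-greedy)
        return random.randrange(q_table.shape[-1])
    return int(q_table[state_idx].argmax())        # exploitation: the maximum-benefit action

action_idx = select_action(q_table, discretize(state))
hysteresis_value = ACTIONS_US[action_idx]
```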

Each set of state values is substituted to calculate the reward value, and the value function Q(s, a) is updated;

the state-action value function Q(s, a) in the Q-learning algorithm is updated, and the updating process is:

Qt+1(s, a) = (1 − α)·Qt(s, a) + α·[Rt + γ·max_b Qt(s′, b)],

where s represents the current state, a is the action taken in state s, s′ represents the state after the state transition, b represents an action in state s′, Qt and Qt+1 represent the Q values before and after the update respectively, α represents the learning rate of the reinforcement learning model, Rt represents the R obtained in the current iteration, and γ is a discount factor, smaller than 1, which represents the importance of future rewards.
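This update rule maps directly onto the Q-table built in the earlier sketches; the learning rate and discount factor below are example values.

```python
ALPHA = 0.1   # learning rate α (assumed example value)
GAMMA = 0.9   # discount factor γ < 1 (assumed example value)

def q_update(q_table, s_idx, a_idx, r_t, s_next_idx, alpha=ALPHA, gamma=GAMMA):
    """Apply Qt+1(s,a) = (1 − α)·Qt(s,a) + α·[Rt + γ·max_b Qt(s′, b)] in place."""
    best_next = q_table[s_next_idx].max()          # max over actions b in the next state s′
    q_table[s_idx + (a_idx,)] = ((1 - alpha) * q_table[s_idx + (a_idx,)]
                                 + alpha * (r_t + gamma * best_next))
```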

Step 204, calculating a new reward value based on the updated action selection strategy, and updating the Q value;

step 205, judging whether the maximum iteration number is reached;

if the set maximum iteration number is not reached, continuing to execute the steps 203 to 205;

if the set maximum number of iterations is reached, step 206 is performed.

Step 206, judging whether the reinforcement learning model is converged;

if the reinforcement learning model is not converged, adjusting the learning rate and carrying out iterative calculation again;

if the reinforcement learning model converges, the process ends.
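Putting steps 203 to 206 together, a compact training-loop skeleton under the assumptions of the previous sketches might look as follows; the environment object wrapping the firmware telemetry, the convergence criterion (a small change in the Q-table), and the way the learning rate is lowered are all illustrative assumptions.

```python
MAX_ITERATIONS = 10_000   # set maximum iteration number (assumed)
CONVERGENCE_EPS = 1e-4    # convergence threshold on the Q-table change (assumed)

def train(q_table, environment, alpha=0.1):
    """Iterate action selection, reward computation and Q updates until the model converges."""
    while True:
        q_before = q_table.copy()
        s_idx = environment.reset()                            # hypothetical firmware-telemetry wrapper
        for _ in range(MAX_ITERATIONS):                        # steps 203 to 205
            a_idx = select_action(q_table, s_idx)
            r_t, s_next_idx = environment.step(a_idx)          # observe reward and next state
            q_update(q_table, s_idx, a_idx, r_t, s_next_idx, alpha=alpha)
            s_idx = s_next_idx
        if abs(q_table - q_before).max() < CONVERGENCE_EPS:    # step 206: convergence check
            break
        alpha *= 0.5                                           # not converged: adjust the learning rate
```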

According to the method provided by the embodiment of the application, a reinforcement learning model is constructed during the use of the SSD and is continuously optimized, so that the storage performance consistency of the enterprise-level SSD is dynamically optimized; the delay parameters and the IOPS calculated each time are brought into the reinforcement learning objective, so that the system tends to optimize performance while ensuring stable performance, which solves the problem of enterprise-level SSD performance fluctuation caused by delay; in addition, the selection of the delay parameters is completed automatically, and the adaptability is strong.

An embodiment of the present application provides a storage medium, in which a computer program is stored, wherein the computer program is configured to perform the method described in any one of the above when the computer program runs.

An embodiment of the application provides an electronic device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the computer program to perform the method described in any one of the above.

It will be understood by those of ordinary skill in the art that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as is well known to those of ordinary skill in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media as known to those skilled in the art.
