Pipeline model parallel training memory optimization method based on feature map encoding


Note: This technique, "Pipeline model parallel training memory optimization method based on feature map encoding", was designed and created by 毛莺池, 金衍, 屠子健, 聂华, 黄建新, 徐淑芳 and 王龙宝 on 2021-08-26. Abstract: The invention discloses a pipeline model parallel training memory optimization method based on feature map encoding, comprising the following steps: construct a pipeline DNN model parallel training scheme that adopts asynchronous parameter updating, executes different training batches concurrently, and records that each training batch completes its forward and backward pass within one unit of pipeline execution time; during model training, after the forward-pass computation task is completed, encode the generated feature maps and store them in a low-memory-footprint format, thereby reducing the memory required for feature map storage; during the backward-pass computation, decode the stored feature maps to restore the high-precision original data. This realizes memory optimization for pipeline parallel training based on feature map encoding, avoids any effect of low-precision data on the training computation, and preserves the effectiveness of model training.

1. A pipeline model parallel training memory optimization method based on feature map encoding, characterized by comprising the following steps:

(1) constructing a pipeline DNN model parallel training scheme, adopting an asynchronous parameter updating method, executing different training batches concurrently on different nodes, and recording that each training batch completes its forward and backward pass within one unit of pipeline execution time;

(2) after the forward-pass computation task is completed, a feature map is generated; if the feature map is generated by a Relu-Pooling or Relu-Conv combination layer, encoding the feature map; if the feature map is not generated by a Relu-Pooling or Relu-Conv combination layer, performing no encoding operation;

(3) judging whether all generated feature maps have been encoded and stored in a low-memory-footprint format, thereby reducing the memory required for feature map storage; if so, feature map encoding is finished, otherwise returning to step (2) to continue iterating;

(4) decoding the stored feature map during the backward-pass computation; if the feature map was generated by a Relu-Pooling or Relu-Conv combination layer, decoding the feature map; if the feature map was not generated by a Relu-Pooling or Relu-Conv combination layer, performing no decoding operation;

(5) judging whether all generated feature map encodings have undergone the corresponding decoding operation during the backward pass; if so, the memory optimization scheme is complete, otherwise returning to step (4) to continue iterating;

(6) deploying the memory optimization scheme onto the heterogeneous computing nodes to obtain a pipeline parallel training memory optimization scheme for the target network to be trained.

2. The feature-map-encoding-based pipeline model parallel training memory optimization method according to claim 1, wherein the unit pipeline execution time in step (1) mainly refers to the sum of the forward-pass and backward-pass computation times.

3. The feature-map-encoding-based pipeline model parallel training memory optimization method according to claim 1, wherein the specific process of encoding the feature map generated by the Relu-Pooling combination layer in step (2) is as follows:

in the Relu layer, storing each element of the Relu output feature map with 1 bit, which is 1 if the element is positive and 0 otherwise; and in the Pooling layer, storing the position mapping between the maximum-value elements of the output feature map and the input feature map.

4. The feature-map-encoding-based pipeline model parallel training memory optimization method according to claim 1, wherein the specific process of encoding the feature map generated by the Relu-Conv combination layer in step (2) is as follows:

encoding and storing the sparse feature map with the sparse matrix compression format CSR (Compressed Sparse Row): the feature map is stored in an n-dimensional matrix, the n-dimensional matrix is decomposed into 2-dimensional matrices, and each 2-dimensional matrix is converted into the CSR format; CSR uses three one-dimensional arrays to record, respectively, the non-zero values of the 2-dimensional matrix, their corresponding column indices, and the row offsets; CSR is not a triple of individual entries but an encoding of the matrix as a whole; a value and its column index identify an element and its column, and the row offset indicates the starting offset of the first element of each row within the value array.

5. The feature-map-encoding-based pipeline model parallel training memory optimization method according to claim 1, wherein the decoding of the generated feature map for each combination layer in step (4) is as follows:

(4.1) Relu-Pooling combination layer: in the backward-pass computation, the Relu layer computes directly with the 1-bit data, and the Pooling layer computes with the feature map position mapping;

(4.2) Relu-Conv combination layer: the CSR-format encoding is restored to the original data in the backward pass.

6. The feature-map-encoding-based pipeline model parallel training memory optimization method according to claim 5, wherein the specific process of decoding the feature map generated by the Relu-Pooling combination layer in step (4.1) is as follows:

(4.1.1) performing backward-pass computation analysis for the Relu layer;

(4.1.2) performing backward-pass computation analysis for the Pooling layer.

7. The feature-map-encoding-based pipeline model parallel training memory optimization method according to claim 6, wherein the specific flow of the backward-pass computation analysis of the Relu layer in step (4.1.1) is as follows:

the Relu activation function calculation formula is as follows:

Relu(x)=max(0,x)

when the input is negative, the output is 0; when the input is positive, the output is unchanged; this unilateral suppression allows the Relu layer to perform its backward-pass computation using only the output feature map of this layer and the output gradient of the next layer; the backward-pass calculation formula of the Relu layer is as follows:

dX = dY, if Y > 0; dX = 0, otherwise

as can be seen from the Relu backward-pass formula, the Relu layer does not need to keep the input feature map X in high precision at all times: only when the corresponding element of Y is positive is the element of dY passed to dX, otherwise dX is set to 0; based on this observation, 1 bit per element is used in the Relu layer in place of the feature map, indicating only whether the element is positive, which avoids redundant storage of the feature map.

8. The feature-map-encoding-based pipeline model parallel training memory optimization method according to claim 6, wherein the specific flow of the backward-pass computation analysis of the Pooling layer in step (4.1.2) is as follows:

the DNN model subsamples the input matrix with max pooling: in the forward pass a window of specified size slides over the input matrix X, the maximum value within the window is found and passed to the output Y; in the backward-pass computation the gradient is propagated to the position of that maximum value, and the gradients at all other positions are 0;

a mapping from Y to X is created in the Pooling layer forward pass to track these locations.

9. The feature-map-encoding-based pipeline model parallel training memory optimization method according to claim 5, wherein the specific process of decoding the feature map generated by the Relu-Conv combination layer in step (4.2) is as follows:

converting the CSR-format encoding back into a 2-dimensional matrix, and restoring the 2-dimensional matrix into the n-dimensional matrix, recovering the data structure originally stored by the DNN model so that subsequent operations can proceed.

Technical Field

The invention relates to a pipeline model parallel training memory optimization method based on feature map encoding, and belongs to the technical field of computers.

Background

Deep neural networks are widely applied in many fields and achieve prediction accuracy exceeding that of humans. As the requirements on model accuracy keep rising, model parameter scale and computation requirements grow accordingly, and training a model becomes an extremely computation-intensive and time-consuming task. Researchers therefore often use distributed computer clusters to accelerate the model training process. Distributed deep learning parallel training aims to accelerate DNN model training and has been studied by many scholars; among these approaches, research on pipeline parallel training has become increasingly intensive. Pipeline parallel training can alleviate the communication bottleneck of data parallelism and the waste of computing resources in model parallelism. In a pipeline parallel training system, multiple computing nodes execute the training tasks of all batches in a pipelined manner, which consumes a large amount of memory. To address the problem of high model memory footprint, techniques such as model pruning and quantization have been proposed to compress the model parameter scale. However, most existing methods reduce memory footprint by reducing the scale of the model parameters; they are not suited to the model training process and cannot solve the problem of high memory consumption during training. It is therefore significant to research memory optimization methods that reduce the memory footprint of pipeline parallel training.

Disclosure of Invention

Purpose of the invention: to solve the problem of high memory consumption in pipeline parallel training, the invention provides a pipeline model parallel training memory optimization method based on feature map encoding.

Technical scheme: a pipeline model parallel training memory optimization method based on feature map encoding comprises the following steps:

(1) constructing a pipeline DNN model parallel training scheme, adopting an asynchronous parameter updating method, executing different training batches concurrently on different nodes, and recording that each training batch completes its forward and backward pass within one unit of pipeline execution time;

(2) after the forward-pass computation task is completed, a feature map is generated; if the feature map is generated by a Relu-Pooling or Relu-Conv combination layer, encoding the feature map; if the feature map is not generated by a Relu-Pooling or Relu-Conv combination layer, performing no encoding operation;

(3) judging whether all generated feature maps have been encoded and stored in a low-memory-footprint format, thereby reducing the memory required for feature map storage; if so, feature map encoding is finished, otherwise returning to step (2) to continue iterating;

(4) decoding the stored feature map during the backward-pass computation; if the feature map was generated by a Relu-Pooling or Relu-Conv combination layer, decoding the feature map; if the feature map was not generated by a Relu-Pooling or Relu-Conv combination layer, performing no decoding operation;

(5) judging whether all generated feature map encodings have undergone the corresponding decoding operation during the backward pass; if so, the memory optimization scheme is complete, otherwise returning to step (4) to continue iterating;

(6) deploying the memory optimization scheme onto the heterogeneous computing nodes to obtain a pipeline parallel training memory optimization scheme for the target network to be trained.

Further, the unit pipeline execution time in step (1) mainly refers to the sum of the forward-pass and backward-pass computation times.

Further, the specific process of encoding the feature map generated by the Relu-Pooling combination layer in step (2) is as follows:

in the Relu layer, each element of the Relu output feature map is stored with 1 bit, which is 1 if the element is positive and 0 otherwise; and in the Pooling layer, the position mapping between the maximum-value elements of the output feature map and the input feature map is stored.

Further, the specific process of encoding the feature map generated by the Relu-Conv combination layer in step (2) is as follows:

The sparse feature map is encoded and stored with the sparse matrix compression format CSR. The feature map is typically stored as an n-dimensional matrix, which can be decomposed into 2-dimensional matrices, and these 2-dimensional matrices can be converted into the CSR format. CSR uses three one-dimensional arrays to record, respectively, the non-zero values of the 2-dimensional matrix, their column indices, and the row offsets. CSR is not a triplet of individual entries but an encoding of the matrix as a whole. A value and its column index identify an element and its column, and the row offset records the starting offset of the first element of each row within the value array.

Further, the decoding of the generated feature map for each combination layer in step (4) is as follows:

(4.1) Relu-Pooling combination layer. In the backward-pass computation, the Relu layer computes directly with the 1-bit data, reducing the memory needed to store the negative-valued elements of the Relu input feature map; and the Pooling layer computes with the feature map position mapping, avoiding the memory needed to store redundant elements of the feature map.

(4.2) Relu-Conv combination layer. The CSR-format encoding is restored to the high-precision original data in the backward pass, which preserves computation accuracy while reducing the memory needed to store the highly sparse feature map.

Further, the specific process of decoding the feature map generated by the Relu-Pooling combination layer in step (4.1) is as follows:

(4.1.1) performing backward-pass computation analysis for the Relu layer;

(4.1.2) performing backward-pass computation analysis for the Pooling layer.

Further, the specific flow of the backward-pass computation analysis of the Relu layer in step (4.1.1) is as follows:

The Relu activation function increases the nonlinearity of the network, alleviates overfitting of the neural network, and avoids the vanishing gradient problem. Compared with activation functions such as Sigmoid, it is simple to compute and converges well during model training. The Relu calculation formula is as follows:

Relu(x)=max(0,x)

When the input is negative, the output is 0; when the input is positive, the output is unchanged. This unilateral suppression means that the backward-pass computation of the Relu layer requires only the output feature map of this layer and the output gradient of the next layer. The backward-pass calculation formula of the Relu layer is as follows:

dX = dY, if Y > 0; dX = 0, otherwise

where X is the input feature map, Y is the output feature map, dX is the back-propagated gradient, and dY is the output gradient of the next layer. As this formula shows, the Relu layer does not need to keep the input feature map X in high precision at all times: only when the corresponding element of Y is positive is the element of dY passed to dX, otherwise dX is set to 0. Based on this observation, 1 bit per element can be used in the Relu layer in place of the feature map, indicating only whether the element is positive, which avoids redundant storage of the feature map.

Further, the specific flow of the backward-pass computation analysis of the Pooling layer in step (4.1.2) is as follows:

The DNN model generally uses max pooling (Max-Pooling) to subsample the input matrix, which keeps the main features of the feature map, reduces the parameters and computation of the next layer, and helps prevent overfitting. In max pooling, a window of specified size slides over the input matrix X in the forward pass, the maximum value within the window is found and passed to the output Y; in the backward-pass computation, the gradient is propagated to the position of that maximum value, and the gradients at all other positions are 0.

From the above analysis, the backward pass of the Pooling layer does not require all the actual values output by the previous layer, yet these high-precision data would otherwise occupy a large amount of memory. Therefore, a mapping from Y to X is created in the forward pass of the Pooling layer to track the positions of the window maxima.

Further, the specific process of decoding the feature map generated by the Relu-Conv combination layer in step (4.2) is as follows:

The CSR-format encoding is converted back into a 2-dimensional matrix, and the 2-dimensional matrix is restored into the n-dimensional matrix, recovering the data structure originally stored by the DNN model so that subsequent operations can proceed.

Beneficial effects: compared with the prior art, the invention has the following advantages:

Aiming at the problems of high memory footprint and long lifetime of feature maps during DNN training, the generated feature maps are encoded and stored in a low-memory-footprint format after the forward-pass computation task is completed, reducing the memory required for feature map storage; during the backward-pass computation, the stored feature maps are decoded to restore the high-precision original data, so that low-precision data does not affect the training computation and the effectiveness of model training is preserved.

Drawings

FIG. 1 is an exemplary diagram of the usage lifecycle of a feature map;

FIG. 2 is a flow chart of a method of an embodiment of the present invention;

FIG. 3 is a diagram illustrating the two encoding schemes of the feature-map-encoding-based pipeline model parallel training memory optimization method in an embodiment;

FIG. 4 is a diagram illustrating an example of storing a DNN model feature map based on binarization encoding in an embodiment;

FIG. 5 is an exemplary diagram of storing and computing a feature map with CSR-based encoding in an embodiment.

Detailed Description

The present invention is further illustrated by the following examples, which are intended to be purely exemplary and are not intended to limit the scope of the invention, as various equivalent modifications of the invention will occur to those skilled in the art upon reading the present disclosure and fall within the scope of the appended claims.

Aiming at the problem that existing research does not consider the memory footprint of feature maps in DNN model training, the method analyzes how feature maps are used, encodes the generated feature maps after the forward-pass computation task is completed, and stores them in a low-memory-footprint format, reducing the memory required for feature map storage; during the backward-pass computation, the stored feature maps are decoded to restore the high-precision original data, ensuring the effectiveness of model training.

FIG. 1 is an exemplary diagram of the usage lifecycle of a feature map.

The feature map X is computed by the preceding layer L_X and serves as the input of layer L_Y for the forward-pass computation. The backward-pass computation of L_Y continues to use X. The feature map X is kept in high precision (e.g., FP32) throughout this lifecycle and accounts for the major share of memory consumption.
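For intuition only, the following back-of-the-envelope sketch (the tensor shape is a hypothetical example, not taken from this disclosure) compares the cost of keeping such an FP32 feature map alive until the backward pass with the cost of a 1-bit mask of the same shape:

```python
# Hypothetical feature map: batch 64, 128 channels, 56x56 spatial size.
elements = 64 * 128 * 56 * 56            # 25,690,112 elements
fp32_mib = elements * 4 / 2**20          # ~98 MiB kept alive until the backward pass
mask_mib = elements / 8 / 2**20          # ~3 MiB if only a 1-bit sign mask is stored
print(f"FP32: {fp32_mib:.1f} MiB, 1-bit mask: {mask_mib:.1f} MiB")
```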

FIG. 2 is a flowchart of the feature-map-encoding-based pipeline model parallel training memory optimization method in this embodiment. The method comprises the following steps:

step A: and constructing a parallel training scheme of the pipeline DNN model, adopting an asynchronous parameter updating method, executing different batches of training in different nodes concurrently, and recording that each training batch completes the forward and backward transfer processes in the unit pipeline execution time. The unit pipeline execution time mainly refers to the sum of forward transfer and backward transfer calculation time.

Step B: after the forward-pass computation task is completed, a feature map is generated. If the feature map is generated by a Relu-Pooling or Relu-Conv combination layer, encode the feature map; if not, perform no encoding operation and jump directly to step C.

Step B1: for a Relu-Pooling combination layer, the specific steps for encoding the feature map it generates are as follows:

in the Relu layer, store with 1 bit per element whether each element of the Relu output feature map is positive; in the Pooling layer, store the position mapping between the maximum-value elements of the output feature map and the input feature map.
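A minimal NumPy sketch of this binarization encoding is shown below; it is an illustration under assumed array names and a 2-D, non-overlapping pooling layout, not the patented implementation itself:

```python
import numpy as np

def encode_relu(feature_map):
    """Encode a Relu output feature map as a packed 1-bit sign mask.

    Only whether each element is positive is kept; np.packbits stores
    8 mask entries per byte in place of the FP32 values.
    """
    mask = feature_map > 0                      # boolean: is the element positive?
    packed = np.packbits(mask.ravel())          # 1 bit per element
    return packed, feature_map.shape

def encode_maxpool_positions(x, window=2):
    """Max-pool a 2-D input x (shape divisible by `window`) and record,
    for each output element, the flat index of the window maximum in x."""
    h, w = x.shape
    y = np.empty((h // window, w // window), dtype=x.dtype)
    argmax_map = np.empty((h // window, w // window), dtype=np.int64)
    for i in range(0, h, window):
        for j in range(0, w, window):
            patch = x[i:i + window, j:j + window]
            k = int(np.argmax(patch))           # flat index within the window
            y[i // window, j // window] = patch.flat[k]
            # flat index of the maximum element in the full input matrix
            argmax_map[i // window, j // window] = (i + k // window) * w + (j + k % window)
    return y, argmax_map
```

Only the packed mask and the argmax map need to be retained until the backward pass.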

Step B2: for a Relu-Conv combination layer, the sparse matrix compression format CSR is used to encode and store the sparse feature map. The specific steps are as follows:

The feature map is typically stored as an n-dimensional matrix, which can be decomposed into 2-dimensional matrices, and these 2-dimensional matrices can be converted into the CSR format. CSR uses three one-dimensional arrays to record, respectively, the non-zero values of the matrix, their column indices, and the row offsets. CSR is not a triplet of individual entries but an encoding of the matrix as a whole. A value and its column index identify an element and its column, and the row offset records the starting offset of the first element of each row within the value array.
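As an illustrative sketch (the flattening convention, taking rows from the first axis, is an assumption), the three CSR arrays can be built as follows:

```python
import numpy as np

def feature_map_to_csr(feature_map):
    """Flatten an n-dimensional feature map into a 2-D matrix and encode it
    in CSR form: non-zero values, their column indices, and row offsets."""
    shape = feature_map.shape
    mat = feature_map.reshape(shape[0], -1)          # n-D -> 2-D
    values, col_idx, row_offsets = [], [], [0]
    for row in mat:
        nz = np.nonzero(row)[0]
        values.extend(row[nz].tolist())
        col_idx.extend(nz.tolist())
        row_offsets.append(len(values))              # running count of non-zeros
    return (np.array(values, dtype=feature_map.dtype),
            np.array(col_idx, dtype=np.int64),
            np.array(row_offsets, dtype=np.int64),
            shape)
```

The last entry of row_offsets equals the total number of non-zero values, matching the convention illustrated later for FIG. 5.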

Step C: judge whether all generated feature maps have been encoded and stored in a low-memory-footprint format, thereby reducing the memory required for feature map storage; if so, feature map encoding is finished, otherwise return to step B and continue iterating.

Step D: decode the stored feature map during the backward-pass computation. If the feature map was generated by a Relu-Pooling or Relu-Conv combination layer, decode the feature map; if not, perform no decoding operation and jump directly to step E.

Step D1: for a Relu-Pooling combination layer. In the backward-pass computation, the Relu layer computes directly with the 1-bit data, reducing the memory needed to store the negative-valued elements of the Relu input feature map; and the Pooling layer computes with the feature map position mapping, avoiding the memory needed to store redundant elements of the feature map.

Step D1-1: the specific flow of the backward-pass computation analysis of the Relu layer is as follows:

The Relu activation function increases the nonlinearity of the network, alleviates overfitting of the neural network, and avoids the vanishing gradient problem. Compared with activation functions such as Sigmoid, it is simple to compute and converges well during model training. The Relu calculation formula is as follows:

Relu(x)=max(0,x)

When the input is negative, the output is 0; when the input is positive, the output is unchanged. This unilateral suppression means that the backward-pass computation of the Relu layer requires only the output feature map of this layer and the output gradient of the next layer. The backward-pass calculation formula of the Relu layer is as follows:

dX = dY, if Y > 0; dX = 0, otherwise

As this formula shows, the Relu layer does not need to keep the input feature map X in high precision at all times: only when the corresponding element of Y is positive is the element of dY passed to dX, otherwise dX is set to 0. Based on this observation, 1 bit per element can be used in the Relu layer in place of the feature map, indicating only whether the element is positive, which avoids redundant storage of the feature map.
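A minimal sketch of this backward rule, assuming the 1-bit mask was packed with np.packbits as in the encoding sketch above, is:

```python
import numpy as np

def relu_backward_from_mask(packed_mask, shape, dY):
    """Backward pass of the Relu layer using only the stored 1-bit mask:
    dX = dY where the forward output was positive, 0 elsewhere."""
    n = int(np.prod(shape))
    mask = np.unpackbits(packed_mask)[:n].reshape(shape).astype(bool)
    return dY * mask                      # zero the gradient where Y was not positive
```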

Step D1-2: the specific flow of the backward-pass computation analysis of the Pooling layer is as follows:

The DNN model generally uses max pooling (Max-Pooling) to subsample the input matrix, which keeps the main features of the feature map, reduces the parameters and computation of the next layer, and helps prevent overfitting. In max pooling, a window of specified size slides over the input matrix X in the forward pass, the maximum value within the window is found and passed to the output Y; in the backward-pass computation, the gradient is propagated to the position of that maximum value, and the gradients at all other positions are 0.

From the above analysis, the backward pass of the Pooling layer does not require all the actual values output by the previous layer, yet these high-precision data would otherwise occupy a large amount of memory. Therefore, a mapping from Y to X is created in the forward pass of the Pooling layer to track the positions where the window maxima were found.
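A minimal sketch of the Pooling backward pass driven only by the stored Y-to-X position mapping (assuming the mapping holds flat indices into the input matrix, as in the encoding sketch above) is:

```python
import numpy as np

def maxpool_backward_from_positions(dY, argmax_map, input_shape):
    """Backward pass of the max-pooling layer using only the stored
    Y -> X position mapping: each output gradient is routed to the recorded
    position of its window maximum; all other positions receive 0."""
    dX = np.zeros(int(np.prod(input_shape)), dtype=dY.dtype)
    np.add.at(dX, argmax_map.ravel(), dY.ravel())   # scatter gradients to max positions
    return dX.reshape(input_shape)
```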

Step D2: for a Relu-Conv combination layer. The CSR-format encoding is restored to the high-precision original data in the backward pass, which preserves computation accuracy while reducing the memory needed to store the highly sparse feature map. The CSR-format encoding is converted back into a 2-dimensional matrix, and the 2-dimensional matrix is restored into the n-dimensional matrix, recovering the data structure originally stored by the DNN model so that subsequent operations can proceed.
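A minimal decoding sketch, the inverse of the CSR encoding sketch above and relying on the same assumed flattening convention, is:

```python
import numpy as np

def csr_to_feature_map(values, col_idx, row_offsets, shape):
    """Decode a CSR-encoded feature map back to its original n-dimensional,
    full-precision array (inverse of feature_map_to_csr above)."""
    rows = len(row_offsets) - 1
    cols = int(np.prod(shape[1:]))
    mat = np.zeros((rows, cols), dtype=values.dtype)
    for r in range(rows):
        start, end = row_offsets[r], row_offsets[r + 1]
        mat[r, col_idx[start:end]] = values[start:end]   # refill non-zeros
    return mat.reshape(shape)
```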

Step E: judge whether all generated feature map encodings have undergone the corresponding decoding operation during the backward pass; if so, the memory optimization scheme is complete, otherwise return to step D and continue iterating.

Step F: deploy the memory optimization scheme onto the heterogeneous computing nodes to obtain a pipeline parallel training memory optimization scheme for the target network to be trained.

FIG. 3 is an exemplary diagram of the two encoding schemes of the feature-map-encoding-based pipeline model parallel training memory optimization method.

(1) Binarization encoding: for the Relu-Pooling combination, the Relu layer stores with 1 bit whether each element of the Relu output feature map is positive and computes directly with this 1-bit data in the backward pass, reducing the memory needed to store the negative-valued elements of the Relu input feature map; the Pooling layer stores the position mapping between the maximum-value elements of the output feature map and the input feature map and computes with this position mapping in the backward pass, avoiding the memory needed to store redundant elements of the feature map.

(2) CSR encoding: for the Relu-Conv combination, the sparse matrix compression format CSR is used to encode and store the sparse feature map, and the CSR-format encoding is restored to the high-precision original data in the backward pass, which preserves computation accuracy while reducing the memory needed to store the highly sparse feature map.

Applying the two encoding schemes to each Relu-Pooling and Relu-Conv combination in the pipeline parallel training effectively reduces the storage consumed by each feature map over its lifecycle usage interval, thereby reducing the memory footprint of DNN model training.

FIG. 4 is a diagram illustrating an example of storing a DNN model feature map based on binarization encoding in an embodiment.

(a) a DNN layer computes the back-propagation gradient using dX = f(X, Y, dY); (b) the backward-pass computation of the Relu layer needs only the output feature map of this layer and the output gradient of the next layer; (c) the Pooling layer uses the stored position mapping in the backward-pass computation, eliminating the dependency on the input and output feature maps of this layer; (d) the backward-pass computation obtains the gradient of this layer's input feature map X from X and the input gradient dY of the next layer, that is, dX = f(X, dY).

FIG. 5 is an exemplary diagram of storing and computing a feature map with CSR-based encoding in an embodiment.

In FIG. 5, the first element of the first row (value 1) has row offset 0, the first element of the second row (value 3) has offset 2, the first element of the third row (value 4) has offset 3, and the first element of the fourth row (value 1) has offset 4. The row-offset array is then completed with the total number of non-zero elements in the matrix, which in this example is 5.
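The concrete matrix of FIG. 5 is not reproduced here, so the following worked example uses a hypothetical 4x4 matrix chosen only to be consistent with the offsets just described; SciPy's csr_matrix is used purely to illustrate the three arrays:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Hypothetical matrix consistent with the description of FIG. 5:
# 5 non-zero values in total; the first non-zeros of the rows are 1, 3, 4, 1.
dense = np.array([[1, 0, 2, 0],
                  [0, 3, 0, 0],
                  [0, 0, 4, 0],
                  [1, 0, 0, 0]])
csr = csr_matrix(dense)
print(csr.data)     # values:         [1 2 3 4 1]
print(csr.indices)  # column indices: [0 2 1 2 0]
print(csr.indptr)   # row offsets:    [0 2 3 4 5] -- last entry = 5 non-zeros
```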
