Method for obtaining matrix decomposition time based on machine learning training model

Document No. 1242906 · Published: 2020-08-18

Note: This invention, "Method for obtaining matrix decomposition time based on a machine learning training model" (一种基于机器学习训练模型获取矩阵分解时间的方法), was created by 田小康, 程明厚, 阳杰, 周振亚, and 刘强 on 2020-05-08. Abstract: A method for obtaining matrix decomposition time based on a machine learning training model comprises the following steps: 1) acquiring a circuit matrix data set; 2) training a matrix decomposition time model on the data set; 3) predicting the matrix decomposition time of an unknown circuit using the trained model. The method can estimate the matrix decomposition time, allocate computing resources reasonably, shorten circuit simulation time, and improve circuit simulation efficiency.

1. A method for obtaining matrix decomposition time based on a machine learning training model, characterized in that it comprises the following steps:

1) acquiring a circuit matrix data set;

2) performing matrix decomposition time model training according to the data set;

3) predicting the matrix decomposition time of an unknown circuit according to the matrix decomposition time model obtained by training.

2. The method for obtaining matrix decomposition time based on a machine learning training model according to claim 1, wherein step 1) further comprises:

acquiring the dimension, the number of non-zero elements, and the matrix decomposition count of the circuit matrix;

removing outliers and duplicates from the data.

3. The method for obtaining matrix decomposition time based on a machine learning training model according to claim 1, wherein step 2) further comprises:

importing the data set into a sample space of a machine learning model;

training a machine learning regression algorithm to obtain the matrix decomposition time model.

4. The method of claim 3, wherein the step of training a machine learning regression algorithm to obtain the matrix decomposition time model further comprises retraining the matrix decomposition time model after replacing the machine learning model or modifying the model parameters.

5. The method of claim 4, further comprising evaluating the trained matrix decomposition time model according to a regression model evaluation index, recording the evaluation performance and prediction results of the matrix decomposition time model, and selecting an optimal matrix decomposition time model according to the evaluation.

6. The method of claim 5, wherein the regression model evaluation indexes comprise a mean absolute error and a mean squared error.

7. The method of claim 6, wherein smaller values of the mean absolute error and the mean squared error indicate a better fit of the matrix decomposition time model.

8. The method for obtaining matrix decomposition time based on a machine learning training model according to claim 1, wherein step 3) further comprises:

embedding the matrix decomposition time model into a circuit simulation program;

acquiring the matrix dimension, the number of non-zero elements, and the matrix decomposition count of the unknown circuit;

calling the matrix decomposition time model with the unknown circuit data as input;

outputting the predicted unknown circuit matrix decomposition time.

9. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed, performs the steps of the method for obtaining matrix decomposition time based on a machine learning training model according to any one of claims 1 to 8.

10. An apparatus for obtaining matrix decomposition time based on a machine learning training model, comprising a memory and a processor, wherein the memory stores a computer program executable on the processor, and the processor, when executing the computer program, performs the steps of the method for obtaining matrix decomposition time based on a machine learning training model according to any one of claims 1 to 8.

Technical Field

The invention relates to the technical field of EDA circuit simulation, in particular to a method for predicting circuit matrix decomposition time in circuit simulation.

Background

With the increasing integration level of integrated circuits, circuit complexity grows day by day. Existing computer-aided design (CAD) flows for integrated circuits analyze designs with general-purpose circuit simulation programs, and for very-large-scale integrated circuits this consumes a large amount of machine time; transient analysis in particular requires many iterations of circuit matrix decomposition, so the computational load of simulation tools grows explosively. The excessive computation and unbalanced allocation of computing resources faced by today's very-large-scale integrated circuits lead to overly long simulation times and seriously lengthen designers' design cycles.

Because circuit matrix decomposition must be iterated many times, it consumes a large amount of computing resources. Before circuit simulation begins, however, there is no way to know how many computing resources will be needed; this often leads to unbalanced allocation of computing resources, overly long simulation times, and wasted computing resources.

Disclosure of Invention

To remedy the deficiencies of the prior art, the invention aims to provide a method for obtaining matrix decomposition time based on a machine learning training model, which uses a trained matrix decomposition time model to estimate the matrix decomposition time, allocate computing resources reasonably, shorten circuit simulation time, and improve circuit simulation efficiency.

In order to achieve the above object, the method for obtaining matrix decomposition time based on a machine learning training model provided by the invention comprises the following steps:

1) acquiring a circuit matrix data set;

2) performing matrix decomposition time model training according to the data set;

3) predicting the matrix decomposition time of an unknown circuit according to the matrix decomposition time model obtained by training.

Further, the step 1) further comprises,

acquiring the dimension, the number of non-zero elements, and the matrix decomposition count of the circuit matrix;

removing outliers and duplicates from the data.

Further, the step 2) further comprises,

importing the data set into a sample space of a machine learning model;

training a machine learning regression algorithm to obtain the matrix decomposition time model.

Further, the step of training a machine learning regression algorithm to obtain the matrix decomposition time model further comprises replacing the machine learning model or changing the model parameters and retraining the matrix decomposition time model.

Further, the method comprises evaluating the trained matrix decomposition time model according to regression model evaluation indexes, recording the evaluation performance and prediction results of the matrix decomposition time model, and selecting the optimal matrix decomposition time model according to the evaluation.

Further, the regression model evaluation indexes include a mean absolute error and a mean squared error.

Further, the smaller the values of the mean absolute error and the mean squared error, the better the fit of the matrix decomposition time model.

Further, the step 3) further comprises,

embedding the matrix decomposition time model into a circuit simulation program;

acquiring the matrix dimension, the number of non-zero elements, and the matrix decomposition count of the unknown circuit;

calling the matrix decomposition time model with the unknown circuit data as input;

outputting the predicted unknown circuit matrix decomposition time.

To achieve the above object, the invention further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed, it performs the steps of the method for obtaining matrix decomposition time based on a machine learning training model described above.

To achieve the above object, the invention further provides an apparatus for obtaining matrix decomposition time based on a machine learning training model, comprising a memory and a processor, wherein the memory stores a computer program executable on the processor, and the processor, when executing the computer program, performs the steps of the method described above.

The method for obtaining the matrix decomposition time based on the machine learning training model has the following beneficial effects:

1) It provides an index for measuring the complexity of a circuit matrix, which is used to predict circuit complexity.

2) It estimates the matrix decomposition time, so that computing resources can be allocated reasonably, circuit simulation time is shortened, and circuit simulation efficiency is improved.

Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.

Drawings

The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:

FIG. 1 is a flow chart of a method for obtaining matrix decomposition time based on a machine learning training model according to the present invention.

Detailed Description

The preferred embodiments of the invention are described below in conjunction with the accompanying drawings; it should be understood that they serve only to illustrate and explain the invention, not to limit it.

Fig. 1 is a flowchart of the method for obtaining matrix decomposition time based on a machine learning training model according to the invention; the method is described in detail below with reference to Fig. 1.

First, in step 101, circuit data is measured.

In the embodiment of the invention, the circuit matrix is measured to obtain data such as its dimension, number of non-zero elements, and matrix decomposition count.
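For a circuit matrix held as an array, these features can be read off directly. This is a toy sketch, not part of the patent; a real simulator would store the matrix in sparse form, and the decomposition count would come from the simulation flow:

```python
import numpy as np

def matrix_features(A, decomp_count):
    """Return (dimension, non-zero elements, decomposition count)
    for a square circuit matrix A."""
    dimension = A.shape[0]
    nnz = int(np.count_nonzero(A))
    return dimension, nnz, decomp_count

# toy 4x4 conductance-style matrix
A = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  2.]])
feats = matrix_features(A, decomp_count=10)   # (4, 10, 10)
```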

At step 102, the measurement data is processed to obtain a data set.

In the embodiment of the invention, duplicate and abnormal values are removed from the measured data, which is then organized into a data set.
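A minimal sketch of this cleaning step. The patent does not say how abnormal values are detected; a robust median-absolute-deviation rule on the measured time is assumed here, and the column layout is illustrative:

```python
import numpy as np

def clean_dataset(samples):
    """Drop duplicate rows, then drop rows whose decomposition time
    is a robust (median-absolute-deviation) outlier."""
    data = np.unique(np.asarray(samples, dtype=float), axis=0)  # removes duplicates
    times = data[:, -1]                       # last column: measured time
    med = np.median(times)
    mad = np.median(np.abs(times - med))
    if mad == 0:
        return data
    # 1.4826 scales MAD to a std-dev equivalent for normal data
    keep = np.abs(times - med) <= 3 * 1.4826 * mad
    return data[keep]

# rows: [matrix dimension, non-zero elements, decomposition count, time (s)]
raw = [[100, 500, 10, 0.2],
       [100, 500, 10, 0.2],     # duplicate measurement
       [120, 600, 10, 0.3],
       [200, 900, 12, 0.5],
       [150, 700, 11, 900.0]]   # abnormal measurement
clean = clean_dataset(raw)      # duplicate and outlier removed, 3 rows remain
```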

In step 103, machine learning model training is performed on the data set.

In the embodiment of the invention, the training samples are imported into the sample space of the machine learning model, and a matrix decomposition time model is obtained by training a machine learning regression algorithm.
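The patent does not name a specific regression algorithm; as a stand-in, a plain least-squares linear model on the three features can illustrate the training step (the data set is synthetic):

```python
import numpy as np

def fit_time_model(data):
    """Least-squares fit: time ≈ w · [dimension, nnz, decomp_count, 1]."""
    data = np.asarray(data, dtype=float)
    X, y = data[:, :-1], data[:, -1]
    A = np.hstack([X, np.ones((len(X), 1))])   # append bias column
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def predict_time(w, features):
    """Predicted decomposition time for one feature vector."""
    return float(np.append(np.asarray(features, float), 1.0) @ w)

# synthetic samples: [dimension, non-zero elements, decomposition count, time (s)]
train = [[100,  500, 10, 0.20],
         [200,  900, 12, 0.38],
         [400, 2000, 15, 0.82],
         [800, 4100, 20, 1.66]]
w = fit_time_model(train)
t = predict_time(w, [300, 1500, 13])   # ≈ 0.61 s for this synthetic data
```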

In step 104, the training model is evaluated, and the evaluation performance and the prediction result of the model are recorded.

In the embodiment of the invention, the trained model is evaluated, and the evaluation performance and prediction results of the current model are recorded for the final selection of the optimal model.

In step 105, it is judged whether an alternative model or alternative model parameters exist. If so, the machine learning model is replaced or the model parameters are changed, and the flow returns to step 103 to retrain the model; if not, the flow proceeds to step 106. In this step, after various common regression models have been tried and the parameters of each model tuned, and no further candidate models or parameters remain, step 106 is performed.
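The loop of steps 103–105 amounts to a small model/parameter search: train each candidate, evaluate it, and keep the best. A sketch, in which the candidates (polynomials of increasing degree in one feature) and the held-out validation split are illustrative assumptions:

```python
import numpy as np

def mse(coeffs, x, y):
    """Mean squared error of a polynomial model on data (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# feature: matrix dimension; target: decomposition time (synthetic, roughly linear)
x_train = np.array([100., 200., 400., 800., 1600.])
y_train = np.array([0.21, 0.40, 0.79, 1.62, 3.20])
x_val = np.array([300., 600., 1200.])          # held-out data for evaluation
y_val = np.array([0.60, 1.20, 2.40])

best_degree, best_err, best_model = None, float("inf"), None
for degree in (1, 2, 3):                       # "replace the model / change parameters"
    model = np.polyfit(x_train, y_train, degree)   # step 103: retrain candidate
    err = mse(model, x_val, y_val)                 # step 104: evaluate and record
    if err < best_err:                             # keep the best model so far
        best_degree, best_err, best_model = degree, err, model
```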

In step 106, the model training is ended and the optimal model is saved.

In the embodiment of the invention, the trained models are compared using regression model evaluation indexes such as MAE (mean absolute error) and MSE (mean squared error) to determine the optimal model. In this step, the smaller the values of MAE, MSE, and similar indexes, the better the fitting effect.
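The two indexes are standard quantities; the measured and predicted times below are made-up numbers used only to show the computation:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error: average of |prediction - truth|."""
    return float(np.mean(np.abs(np.asarray(y_pred) - np.asarray(y_true))))

def mse(y_true, y_pred):
    """Mean squared error: average of (prediction - truth)^2."""
    return float(np.mean((np.asarray(y_pred) - np.asarray(y_true)) ** 2))

measured  = [0.20, 0.40, 0.80, 1.60]   # actual decomposition times (s)
predicted = [0.25, 0.35, 0.90, 1.50]   # model predictions
err_mae = mae(measured, predicted)     # ≈ 0.075
err_mse = mse(measured, predicted)     # ≈ 0.00625
```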

In the embodiment of the invention, once model training is finished, the obtained optimal model is saved so that it can be called directly during prediction.

In the embodiment of the invention, the trained matrix decomposition time model is used to predict the time required to decompose an unknown circuit matrix. In this step, the optimal model is embedded into the circuit simulation software (ALPS). During circuit simulation, feature data such as the circuit matrix dimension, number of non-zero elements, and matrix decomposition count are stored in a fixed data format and supplied as model input; calling the model yields the predicted time required to decompose the circuit matrix, which is returned as the model output.
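Saving the optimal model (step 106) and calling it during simulation can be sketched with the standard library's pickle. The linear-model form and the weight values are assumptions carried over from the earlier training sketch, and the ALPS integration itself is not shown:

```python
import pickle
import numpy as np

def predict_time(weights, dimension, nnz, decomp_count):
    """Predicted decomposition time from a fitted linear model."""
    features = np.array([dimension, nnz, decomp_count, 1.0])
    return float(features @ weights)

# step 106: persist the optimal model (here just its weight vector)
weights = np.array([0.0004, 0.0003, 0.01, -0.09])   # assumed result of an earlier fit
blob = pickle.dumps({"model": "linear", "weights": weights})

# during simulation: load the saved model and call it for an unknown circuit
saved = pickle.loads(blob)
t_pred = predict_time(saved["weights"], dimension=300, nnz=1500, decomp_count=13)
# t_pred ≈ 0.61 s for these assumed weights
```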

The invention provides a method for obtaining matrix decomposition time based on a machine learning training model to solve a problem of the prior art: circuit matrix decomposition must be iterated many times and consumes a large amount of computing resources, yet before circuit simulation the amount of computing resources needed cannot be known, which often leads to unbalanced resource allocation, overly long simulation times, and wasted computing resources. With the trained model, the matrix decomposition time is estimated in advance, computing resources are allocated reasonably, circuit simulation time is shortened, and circuit simulation efficiency is improved.

To achieve the above object, the invention further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed, it performs the steps of the method for obtaining matrix decomposition time based on a machine learning training model described above.

To achieve the above object, the invention further provides an apparatus for obtaining matrix decomposition time based on a machine learning training model, comprising a memory and a processor, wherein the memory stores a computer program executable on the processor, and the processor, when executing the computer program, performs the steps of the method described above.

Those of ordinary skill in the art will understand that, although the invention has been described in detail with reference to the foregoing embodiments, changes may be made to the embodiments, or equivalents substituted for some of their features, without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the invention shall fall within its protection scope.
