IGBT fault prediction method and system

Document No.: 1814618 · Publication date: 2021-11-09

Note: This technique, "IGBT fault prediction method and system", was designed and created by 陈雯柏 and 蒋闯 on 2021-08-11. Abstract: The invention relates to an IGBT fault prediction method and system. The method comprises: performing data analysis and screening on the collector-emitter voltage, collector-emitter current, gate-emitter voltage and gate voltage, and inputting the resulting parameters to be predicted into a fault prediction model established by using an LSTM network, an Attention network, a CNN network and a GRU network to obtain a target prediction result; comparing the target prediction result with actual data generated during the actual operation of the IGBT to determine a difference value; if the difference value is larger than the maximum error value, incrementing the current error-exceedance count by one; and if the current error-exceedance count exceeds a count threshold, generating fault alarm information so that a worker can overhaul the IGBT according to the fault alarm information. By predicting with multiple parameters, the scheme avoids reliance on a single prediction parameter, and the fault prediction model established by using the LSTM, Attention, CNN and GRU networks has high precision, improving the accuracy of IGBT fault prediction.

1. An IGBT fault prediction method is characterized by comprising the following steps:

acquiring pre-collected collector-emitter voltage, collector-emitter current, gate-emitter voltage and gate voltage;

carrying out data analysis and screening on the collector-emitter voltage, the collector-emitter current, the gate-emitter voltage and the gate voltage to obtain parameters to be predicted of the IGBT;

inputting the parameters to be predicted into a fault prediction model pre-established by using an LSTM network, an Attention network, a CNN network and a GRU network to obtain an output target prediction result;

acquiring actual data corresponding to the parameter to be predicted, which is generated in the actual operation process of the IGBT;

comparing the target prediction result with the actual data, and determining a difference value between the target prediction result and the actual data;

judging whether the difference value is larger than a preset maximum error value or not;

if the difference value is larger than the maximum error value, incrementing a pre-recorded current error-exceedance count by one, and judging whether the current error-exceedance count exceeds a preset count threshold;

and if the current error-exceedance count exceeds the count threshold, generating fault alarm information and sending the fault alarm information to a management terminal, so that a worker can overhaul the IGBT according to the fault alarm information.

2. The IGBT fault prediction method according to claim 1, wherein the creating process of the fault prediction model comprises:

acquiring a training parameter set from pre-stored historical data, wherein the training parameter set comprises a plurality of groups of model training parameters, each group of model training parameters corresponding to training actual output data;

establishing an LACNN model comprising an LSTM layer, an Attention layer, a CNN layer, a Concatenate layer and a GRU layer by utilizing an LSTM network, an Attention network, a CNN network and a GRU network;

inputting model training parameters extracted from the training parameter set into the LACNN model, processing the model training parameters by using the LACNN model to obtain a training prediction result output by the LACNN model, and incrementing a pre-recorded current training iteration count by one;

calculating a training error between the training prediction result and training actual output data corresponding to the model training parameters;

adjusting and updating the LACNN model according to the training error to obtain an updated LACNN model;

judging whether the current training iteration count reaches a preset training count and whether the training error falls within a preset training error range;

if the current training iteration count reaches the preset training count or the training error falls within the preset training error range, taking the updated LACNN model as the fault prediction model;

and if the current training iteration count has not reached the preset training count and the training error does not fall within the preset training error range, continuing to extract model training parameters from the training parameter set and training the updated LACNN model.

3. The method according to claim 2, wherein the step of inputting model training parameters extracted from a training parameter set into the LACNN model, and processing the model training parameters by using the LACNN model to obtain a training prediction result output by the LACNN model comprises:

inputting the model training parameters extracted from the training parameter set into an LSTM layer in the LACNN model, so that the LSTM layer learns the model training parameters and outputs first output characteristic information;

inputting the first output characteristic information into an Attention layer in the LACNN model, so that the Attention layer processes the first output characteristic information by using an Attention mechanism and outputs second output characteristic information;

inputting the second output characteristic information into a CNN layer in the LACNN model, so that the CNN layer performs convolution operation on the second output characteristic information and outputs third output characteristic information;

inputting the second output characteristic information and the third output characteristic information into a Concatenate layer in the LACNN model, so that the Concatenate layer integrates the second output characteristic information and the third output characteristic information and outputs fourth output characteristic information;

and inputting the fourth output characteristic information into a GRU layer in the LACNN model, so that the GRU layer processes the fourth output characteristic information and outputs a training prediction result.

4. The IGBT fault prediction method according to claim 1, wherein the fault prediction model comprises: LSTM, Attention, CNN, Concatenate, and GRU layers.

5. An IGBT fault prediction system, comprising: the system comprises a processor and a memory connected with the processor;

the memory is used for storing a computer program at least for performing the IGBT fault prediction method of any one of claims 1-4;

the processor is used for calling and executing the computer program.

Technical Field

The invention relates to the technical field of power electronic equipment, in particular to an IGBT fault prediction method and system.

Background

An Insulated Gate Bipolar Transistor (IGBT) is a representative product of what is internationally recognized as the third revolution in power electronic technology. Known as the "CPU" of energy conversion, it serves as a core component in industrial control and automation and is widely applied in modern energy, aerospace, rail transit, communication and other fields. In these fields, the IGBT often operates under severe conditions such as high voltage, radiation and large current, where it is subject to thermal, mechanical, electrical and other stress factors and is prone to failure. Its failure may cause the entire system to crash or shut down suddenly, with long maintenance times and high costs. Therefore, IGBT fault prediction is of great significance for ensuring the safe and reliable operation of equipment.

In the prior art, methods for predicting IGBT faults mainly comprise physical-model-based methods and data-driven methods. A physical-model-based prediction method depends on extensive knowledge of IGBT physics and failure mechanisms; moreover, the IGBT structure contains complicated layers and bonding connections and involves too many physical parameters, so its internal degradation mechanism is difficult to describe by establishing a mathematical model. In contrast, a data-driven method only needs to measure relevant data reflecting the degradation behavior of the IGBT; such data usually come from monitored parameters of the components, such as voltage, current, power and temperature, and require no deep knowledge of the specific device principle.

The data-driven approach involves two main aspects: the selection of IGBT parameters and the construction of a prediction model. In the prior art, parameter selection mainly revolves around a single electrical signal such as junction temperature or collector-emitter voltage. Although the junction temperature reflects the actual degradation process of the IGBT well, measuring it requires opening the device package or removing the insulating sealant, which damages the device; it therefore cannot be measured on normally operating IGBT devices in actual engineering and is only usable in scientific research. A single electrical signal such as the collector-emitter voltage alone cannot predict the degradation trend well, which reduces fault prediction accuracy. As for model construction, the prior art generally adopts an LSTM network, but an LSTM alone cannot comprehensively learn the deeper feature information in the data, so model information is not comprehensively extracted and the prediction accuracy of the model is low.

Therefore, how to avoid relying on a single IGBT fault prediction parameter and improve the prediction accuracy of the prediction model, thereby improving the accuracy of IGBT fault prediction, is a technical problem to be solved by those skilled in the art.

Disclosure of Invention

In view of this, the present invention aims to provide a method and a system for predicting an IGBT fault, so as to solve the problems in the prior art that the accuracy of IGBT fault prediction is low due to single selection of IGBT fault prediction parameters and low prediction accuracy of a prediction model.

In order to achieve the purpose, the invention adopts the following technical scheme:

an IGBT fault prediction method comprises the following steps:

acquiring pre-collected collector-emitter voltage, collector-emitter current, gate-emitter voltage and gate voltage;

carrying out data analysis and screening on the collector-emitter voltage, the collector-emitter current, the gate-emitter voltage and the gate voltage to obtain parameters to be predicted of the IGBT;

inputting the parameters to be predicted into a fault prediction model pre-established by using an LSTM network, an Attention network, a CNN network and a GRU network to obtain an output target prediction result;

acquiring actual data corresponding to the parameter to be predicted, which is generated in the actual operation process of the IGBT;

comparing the target prediction result with the actual data, and determining a difference value between the target prediction result and the actual data;

judging whether the difference value is larger than a preset maximum error value or not;

if the difference value is larger than the maximum error value, incrementing a pre-recorded current error-exceedance count by one, and judging whether the current error-exceedance count exceeds a preset count threshold;

and if the current error-exceedance count exceeds the count threshold, generating fault alarm information and sending the fault alarm information to a management terminal, so that a worker can overhaul the IGBT according to the fault alarm information.

Further, in the above IGBT fault prediction method, the creating process of the fault prediction model includes:

acquiring a training parameter set from pre-stored historical data, wherein the training parameter set comprises a plurality of groups of model training parameters, each group of model training parameters corresponding to training actual output data;

establishing an LACNN model comprising an LSTM layer, an Attention layer, a CNN layer, a Concatenate layer and a GRU layer by utilizing an LSTM network, an Attention network, a CNN network and a GRU network;

inputting model training parameters extracted from the training parameter set into the LACNN model, processing the model training parameters by using the LACNN model to obtain a training prediction result output by the LACNN model, and incrementing a pre-recorded current training iteration count by one;

calculating a training error between the training prediction result and training actual output data corresponding to the model training parameters;

adjusting and updating the LACNN model according to the training error to obtain an updated LACNN model;

judging whether the current training iteration count reaches a preset training count and whether the training error falls within a preset training error range;

if the current training iteration count reaches the preset training count or the training error falls within the preset training error range, taking the updated LACNN model as the fault prediction model;

and if the current training iteration count has not reached the preset training count and the training error does not fall within the preset training error range, continuing to extract model training parameters from the training parameter set and training the updated LACNN model.

Further, in the above method for predicting an IGBT fault, the inputting the model training parameters extracted from the training parameter set into the LACNN model, and processing the model training parameters by using the LACNN model to obtain the training prediction result output by the LACNN model includes:

inputting the model training parameters extracted from the training parameter set into an LSTM layer in the LACNN model, so that the LSTM layer learns the model training parameters and outputs first output characteristic information;

inputting the first output characteristic information into an Attention layer in the LACNN model, so that the Attention layer processes the first output characteristic information by using an Attention mechanism and outputs second output characteristic information;

inputting the second output characteristic information into a CNN layer in the LACNN model, so that the CNN layer performs convolution operation on the second output characteristic information and outputs third output characteristic information;

inputting the second output characteristic information and the third output characteristic information into a Concatenate layer in the LACNN model, so that the Concatenate layer integrates the second output characteristic information and the third output characteristic information and outputs fourth output characteristic information;

and inputting the fourth output characteristic information into a GRU layer in the LACNN model, so that the GRU layer processes the fourth output characteristic information and outputs a training prediction result.

Further, in the above IGBT fault prediction method, the fault prediction model includes: LSTM, Attention, CNN, Concatenate, and GRU layers.

The invention also provides an IGBT fault prediction system, which comprises: the system comprises a processor and a memory connected with the processor;

the memory is used for storing a computer program, and the computer program is at least used for executing the IGBT fault prediction method;

the processor is used for calling and executing the computer program.

An IGBT fault prediction method and system, the method comprising: acquiring pre-collected collector-emitter voltage, collector-emitter current, gate-emitter voltage and gate voltage; carrying out data analysis and screening on the collector-emitter voltage, the collector-emitter current, the gate-emitter voltage and the gate voltage to obtain parameters to be predicted of the IGBT; inputting the parameters to be predicted into a fault prediction model pre-established by using an LSTM network, an Attention network, a CNN network and a GRU network to obtain an output target prediction result; acquiring actual data corresponding to the parameters to be predicted, generated during the actual operation of the IGBT; comparing the target prediction result with the actual data, and determining a difference value between them; judging whether the difference value is larger than a preset maximum error value; if the difference value is larger than the maximum error value, incrementing the pre-recorded current error-exceedance count by one, and judging whether the count exceeds a preset count threshold; and if the current error-exceedance count exceeds the count threshold, generating fault alarm information and sending it to the management terminal, so that a worker can overhaul the IGBT according to the fault alarm information.
By adopting the technical scheme of the invention, the collector-emitter voltage, collector-emitter current, gate-emitter voltage and gate voltage are all taken as fault prediction parameters, avoiding reliance on a single prediction parameter. Moreover, compared with the prior art, the fault prediction model established by using the LSTM network, the Attention network, the CNN network and the GRU network extracts information more comprehensively, which improves the precision of the fault prediction model and thus the accuracy of IGBT fault prediction.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.

Drawings

In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.

Fig. 1 is a flow chart provided by an embodiment of an IGBT fault prediction method of the present invention;

FIG. 2 is a flow diagram of the creation of the fault prediction model of FIG. 1;

fig. 3 is a schematic structural diagram provided by an embodiment of the IGBT fault prediction system of the present invention.

Detailed Description

In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without any inventive step, are within the scope of the present invention.

Fig. 1 is a flowchart provided by an embodiment of an IGBT fault prediction method according to the present invention, and as shown in fig. 1, the IGBT fault prediction method according to the present embodiment specifically includes the following steps:

s101, acquiring pre-collected collector-emitter voltage, collector-emitter current, gate-emitter voltage and gate voltage.

In this embodiment, the collector-emitter voltage, collector-emitter current, gate-emitter voltage and gate voltage of the IGBT, acquired in advance, must be obtained. All of these parameters can serve as fault prediction parameters for the IGBT: different parameters contain different aging information, and the change trends of the parameters also influence one another. Predicting with all of them avoids the single-parameter problem of the prior art and improves the accuracy of IGBT fault prediction.

S102, carrying out data analysis and screening on the collector-emitter voltage, the collector-emitter current, the gate-emitter voltage and the gate voltage to obtain the parameters to be predicted of the IGBT.

After the above parameters are obtained, data analysis and screening are further performed on the collector-emitter voltage, collector-emitter current, gate-emitter voltage and gate voltage. In this embodiment, each parameter is preferably analyzed and screened using MATLAB, so that overlapping and invalid information among the parameters is eliminated while valid information is extracted and fused, yielding the parameters to be predicted of the IGBT. A parameter to be predicted denotes a parameter that has not yet been fed into the prediction model, i.e., a parameter awaiting processing.
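The patent performs this analysis and screening in MATLAB without detailing the procedure. As a hedged illustration only, the Python sketch below applies one plausible screening rule: discard near-constant signals (which carry no aging information) and, within highly correlated pairs, keep a single representative. The thresholds and signal series here are hypothetical, not taken from the patent.

```python
import numpy as np

def screen_parameters(series, var_floor=1e-6, corr_ceiling=0.98):
    """Keep signals that vary (variance above var_floor) and are not
    almost perfectly correlated with a signal already kept."""
    names = [n for n, s in series.items() if np.var(s) > var_floor]
    kept = []
    for n in names:
        if all(abs(np.corrcoef(series[n], series[k])[0, 1]) < corr_ceiling
               for k in kept):
            kept.append(n)
    return kept

t = np.linspace(0.0, 1.0, 100)
series = {
    "v_ce": 1.0 + 0.1 * t,         # collector-emitter voltage, slow drift
    "i_c": 2.0 + 0.2 * t,          # perfectly correlated with v_ce
    "v_ge": np.sin(6.28 * t),      # independent gate-emitter voltage
    "v_g": np.full_like(t, 15.0),  # constant gate voltage, no information
}
kept = screen_parameters(series)   # overlapping/invalid signals removed
```

A real screening step would also fuse the retained signals into the model's input features; this sketch only shows the elimination of redundant and uninformative ones.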

S103, inputting the parameters to be predicted into a fault prediction model which is pre-established by using an LSTM network, an Attention network, a CNN network and a GRU network to obtain an output target prediction result.

This embodiment creates a fault prediction model in advance. The model is created by using an LSTM network, an Attention network, a CNN network and a GRU network, and thus includes an LSTM layer, an Attention layer, a CNN layer, a Concatenate layer and a GRU layer. The parameters to be predicted obtained in the above steps are input into the fault prediction model, which processes them and outputs a target prediction result. Because the fault prediction model in this embodiment is created by using the LSTM, Attention, CNN and GRU networks, it extracts information by combining an LSTM network sensitive to time-series information with a CNN network sensitive to spatial information; compared with fault prediction using a traditional LSTM network model alone, this improves the accuracy of IGBT fault prediction.

And S104, acquiring actual data corresponding to the parameters to be predicted, which are generated in the actual operation process of the IGBT.

This embodiment also needs to acquire the actual data, generated during the actual operation of the IGBT, corresponding to the parameters to be predicted. For example, in this embodiment the target prediction result is the collector-emitter peak voltage at the turn-off instant from the failure parameter set, and the corresponding actual data is the collector-emitter peak voltage at the turn-off instant actually produced by the IGBT during operation.

And S105, comparing the target prediction result with the actual data, and determining a difference value between the target prediction result and the actual data.

After the target prediction result and the actual data are determined, the target prediction result needs to be compared with the actual data, so that a difference value between the target prediction result and the actual data is determined.

And S106, judging whether the difference value is larger than a preset maximum error value. If so, step S107 is executed, otherwise, step S101 is continued.

And comparing the difference value between the target prediction result and the actual data with a preset maximum error value, and judging whether the difference value is greater than the maximum error value. If the difference value is greater than the maximum error value, executing step S107; if the difference value is not larger than the maximum error value, it indicates that the IGBT does not have a fault sign currently, and step S101 is continuously executed. In addition, the parameters to be predicted in the prediction process can be used as model training parameters, and the actual data can be used as corresponding training actual output data to be stored in historical data, so that the fault prediction model can be trained later, and the accuracy of the fault prediction model can be improved.

And S107, incrementing the pre-recorded current error-exceedance count by one, and judging whether the current error-exceedance count exceeds a preset count threshold. If so, step S108 is executed; otherwise, step S101 is continued.

In this embodiment, the current error-exceedance count is recorded in advance and a count threshold is preset. If the difference value between the target prediction result and the actual data is judged to be greater than the maximum error value, the current error-exceedance count is incremented by one and compared with the count threshold. If the count is greater than the threshold, step S108 is executed; if not, step S101 continues to be executed. In this embodiment, the count threshold is set to n-1, where n is selected according to the minimum dissimilarity parameter of the IGBT actually used; preferably, n is taken as the size of one window in the fault prediction model.

And S108, generating fault alarm information and sending the fault alarm information to the management terminal.

If the current error-exceedance count is judged to be larger than the count threshold, the operating state of the IGBT can be judged to be unstable and the IGBT needs maintenance. Fault alarm information is therefore generated and sent to the management terminal used by the worker; after the management terminal receives it, the worker overhauls the IGBT in time according to the fault alarm information.
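The threshold logic of steps S106 to S108 can be sketched in Python as follows. This is a minimal illustration, not the patent's implementation: the window size n, the maximum error value and the sample prediction/actual pairs are all hypothetical.

```python
def check_fault(pred, actual, max_error, state, count_threshold):
    """One monitoring cycle of steps S106-S108: compare a prediction
    with the actual data, update the running error-exceedance count,
    and report whether fault alarm information should be generated."""
    if abs(pred - actual) > max_error:       # S106: difference > max error?
        state["exceed_count"] += 1           # S107: increment the count
    return state["exceed_count"] > count_threshold  # S108 trigger

state = {"exceed_count": 0}
n = 5  # hypothetical window size; the text sets the threshold to n - 1
samples = [(1.0, 1.05), (1.0, 1.3), (1.0, 1.4),
           (1.0, 1.5), (1.0, 1.6), (1.0, 1.7)]
alarms = [check_fault(p, a, 0.1, state, n - 1) for p, a in samples]
# only the sixth sample pushes the count past the threshold
```

When `check_fault` returns `True`, the caller would generate the fault alarm information and send it to the management terminal; otherwise monitoring continues from step S101.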

The IGBT fault prediction method of this embodiment acquires pre-collected collector-emitter voltage, collector-emitter current, gate-emitter voltage and gate voltage; carries out data analysis and screening on these parameters to obtain the parameters to be predicted of the IGBT; inputs the parameters to be predicted into a fault prediction model pre-established by using an LSTM network, an Attention network, a CNN network and a GRU network to obtain an output target prediction result; acquires the actual data corresponding to the parameters to be predicted, generated during the actual operation of the IGBT; compares the target prediction result with the actual data and determines the difference value between them; judges whether the difference value is larger than a preset maximum error value; if the difference value is larger than the maximum error value, increments the pre-recorded current error-exceedance count by one and judges whether the count exceeds a preset count threshold; and if the current error-exceedance count is larger than the count threshold, generates fault alarm information and sends it to the management terminal, so that a worker can overhaul the IGBT according to the fault alarm information.
By adopting the technical scheme of the invention, the collector-emitter voltage, collector-emitter current, gate-emitter voltage and gate voltage are all taken as fault prediction parameters, avoiding reliance on a single prediction parameter. Moreover, compared with the prior art, the fault prediction model established by using the LSTM network, the Attention network, the CNN network and the GRU network extracts information more comprehensively, which improves the precision of the fault prediction model and thus the accuracy of IGBT fault prediction.

Further, fig. 2 is a flow chart of creating the fault prediction model in fig. 1, and as shown in fig. 2, in the IGBT fault prediction method according to this embodiment, specific steps of a process of creating the fault prediction model are as follows:

s201, acquiring a training parameter set from pre-stored historical data.

In this embodiment, historical data is stored in advance; it includes training data and data from previous actual predictions. The training parameter set needs to be obtained from this historical data. The training parameter set comprises a plurality of groups of model training parameters, each group of model training parameters corresponding to training actual output data.

S202, establishing an LACNN model comprising an LSTM layer, an Attention layer, a CNN layer, a Concatenate layer and a GRU layer by utilizing the LSTM network, the Attention network, the CNN network and the GRU network.

This embodiment needs to establish the initial LACNN model by using the LSTM network, the Attention network, the CNN network and the GRU network, where the LACNN model includes an LSTM layer, an Attention layer, a CNN layer, a Concatenate layer and a GRU layer.

S203, inputting the model training parameters extracted from the training parameter set into the LACNN model, processing the model training parameters by using the LACNN model to obtain a training prediction result output by the LACNN model, and incrementing the pre-recorded current training iteration count by one.

In this embodiment, a group of model training parameters is extracted from the training parameter set and used to train the LACNN model: the model training parameters are input into the LACNN model and processed by it, yielding the training prediction result output by the LACNN model. After the LACNN model has been trained with a group of model training parameters, the pre-recorded current training iteration count is incremented by one, completing the record of the number of LACNN training iterations.
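The training flow of this step, together with the stopping conditions stated in the Summary (stop once the preset training count is reached or the training error meets the preset error range), can be sketched as a loop skeleton. `model_update` and `extract_batch` are hypothetical stand-ins for the real LACNN update and the extraction of a group of training parameters.

```python
def train_lacnn(model_update, extract_batch, max_iters, error_bound):
    """Training-loop skeleton for the LACNN model: extract a group of
    model training parameters, run one model update, increment the
    iteration count, and stop once the preset iteration count is
    reached OR the training error falls within the preset range."""
    iteration = 0
    while True:
        params, target = extract_batch()
        _, error = model_update(params, target)
        iteration += 1
        if iteration >= max_iters or error <= error_bound:
            return iteration, error

# toy stand-ins: the "model" just reports a shrinking training error
errors = iter([0.8, 0.4, 0.2, 0.05])
it, err = train_lacnn(lambda p, t: (None, next(errors)),
                      lambda: (None, None), 10, 0.1)
# stops at the fourth iteration, when the error first meets the bound
```

In the patent the update step would adjust the LACNN weights from the training error before the stopping check; here it is reduced to a callable so the control flow stands out.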

In this embodiment, the specific steps of inputting the model training parameters extracted from the training parameter set into the LACNN model, and processing the model training parameters by using the LACNN model to obtain the training prediction result output by the LACNN model are as follows:

firstly, model training parameters extracted from a training parameter set are input to an LSTM layer in the LACNN model, so that the LSTM layer learns the model training parameters and outputs first output characteristic information.

In this embodiment, the LSTM layer in the LACNN model is used to extract the temporal information of the input sequence. The LSTM layer includes a forget gate (to discard unnecessary information from previous time steps), an input gate (to select useful information from the inputs) and an output gate (to control the output of the current LSTM network). Each gate selectively determines which information is passed, which enables protection and control of the information over time. In this embodiment, the model training parameters extracted from the training parameter set are input to the LSTM layer in the LACNN model, and the model training parameters are learned through the forget gate, the input gate, and the output gate, so as to output the first output feature information. Specifically, the forget gate:

ft = σ(Wf·[ht-1, xt] + bf) (1)

the input gate:

it = σ(Wi·[ht-1, xt] + bi)

C̃t = Tanh(WC·[ht-1, xt] + bC)

and the output gate:

Ct = ft ⊙ Ct-1 + it ⊙ C̃t

Ot = σ(Wo·[ht-1, xt] + bo)

ht = Ot ⊙ Tanh(Ct)

where σ is the sigmoid activation function, Tanh is the hyperbolic tangent activation function, C̃t denotes the current temporary memory cell, Ct denotes the current memory cell state, ht denotes the state output unit, ht-1 denotes the output at the previous time, it is the input gate, Ot is the output gate, ft is the forget gate, xt denotes the input of the time series of the model training parameters at time t, Wf denotes the weight inside the forget gate ft, Wi denotes the weight inside the sigmoid activation function of the input gate, WC denotes the weight inside the Tanh activation function of the input gate, Wo denotes the weight inside the sigmoid activation function of the output gate, ⊙ denotes element-wise multiplication, and bf, bi, bC and bo all denote bias terms.
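The gate computations above can be sketched as a single LSTM cell step in NumPy. This is a minimal illustration of the standard equations, not the patent's implementation; the dimensions, initialization and weight packing are assumed for demonstration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step. W stacks [Wf; Wi; WC; Wo] applied to the
    concatenated input [h_{t-1}, x_t]; b stacks the four bias terms."""
    hidden = h_prev.shape[0]
    z = W @ np.concatenate([h_prev, x_t]) + b
    f_t = sigmoid(z[0*hidden:1*hidden])        # forget gate
    i_t = sigmoid(z[1*hidden:2*hidden])        # input gate
    c_tilde = np.tanh(z[2*hidden:3*hidden])    # temporary memory cell
    o_t = sigmoid(z[3*hidden:4*hidden])        # output gate
    c_t = f_t * c_prev + i_t * c_tilde         # updated cell state
    h_t = o_t * np.tanh(c_t)                   # state output unit
    return h_t, c_t

rng = np.random.default_rng(0)
n_in, n_hidden = 4, 3   # e.g. the four IGBT electrical parameters
W = rng.standard_normal((4 * n_hidden, n_hidden + n_in)) * 0.1
b = np.zeros(4 * n_hidden)
h, c = np.zeros(n_hidden), np.zeros(n_hidden)
x = rng.standard_normal(n_in)
h, c = lstm_cell_step(x, h, c, W, b)
print(h.shape)  # (3,)
```

Because ht is the product of a sigmoid output and a Tanh output, every component of h stays strictly inside (-1, 1).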

Secondly, the first output characteristic information is input into an Attention layer in the LACNN model, so that the Attention layer processes the first output characteristic information by using an Attention mechanism and outputs second output characteristic information.

In this embodiment, the Attention layer is used to assign a larger weight to the more important features or time steps; that is, weights are assigned to ht and the final result is calculated. The first output characteristic information is input to the Attention layer in the LACNN model, and the Attention layer processes the first output characteristic information by using an attention mechanism so as to output second output characteristic information.

Specifically, the final operation of attention is equivalent to introducing a fully connected layer with the softmax activation function, outputting a set of weights to represent attention, and then combining the original inputs with these weights. Assume that the first output characteristic information is represented as H = (h1, h2, h3, …, hd)^T, where T is the transpose operation, hi ∈ R^n, and n is the sequence step number of the feature. According to the self-attention mechanism, the score of the ith input feature hi may be represented as si = φ(W^T hi + b), where W and b are the weight matrix and the bias vector, respectively, and φ(·) is a scoring function that can be designed in neural networks as an activation function, such as sigmoid or linear. After the score of the ith feature vector is obtained, normalization can be performed using the softmax function, as follows:

ai = exp(si) / Σ(j=1..d) exp(sj)

note that the final output characteristic O (i.e., the second output characteristic information) of the mechanism may be expressed as:

wherein A ═ a1,a2,a3,…,ad),Is an operation defined as element multiplication.

In this embodiment, an attention mechanism is introduced specifically to learn the importance of the features and the time steps, and a larger weight is allocated to the more important features, so that the model prediction accuracy is greatly improved.
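The scoring, softmax normalization and element-wise reweighting described above can be sketched as follows. This is a minimal NumPy illustration with a linear scoring function φ; the shapes and the scalar bias are assumed for demonstration:

```python
import numpy as np

def attention(H, W, b):
    """Self-attention over d feature vectors (rows of H, shape d x n):
    score each h_i, softmax-normalize the scores, reweight element-wise."""
    s = H @ W + b                      # s_i = phi(W^T h_i + b), phi = linear
    s = s - s.max()                    # shift for numerical stability
    a = np.exp(s) / np.exp(s).sum()    # softmax weights a_i (sum to 1)
    return H * a[:, None]              # O = A (element-wise) H

rng = np.random.default_rng(1)
d, n = 5, 3                            # 5 feature vectors of dimension 3
H = rng.standard_normal((d, n))
W = rng.standard_normal(n)
O = attention(H, W, 0.1)
print(O.shape)  # (5, 3)
```

Rows with higher scores keep more of their magnitude, which is exactly the "larger weight for more important features" behaviour described above.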

And thirdly, inputting the second output characteristic information into a CNN layer in the LACNN model, so that the CNN layer performs convolution operation on the second output characteristic information and outputs third output characteristic information.

In this embodiment, the CNN layer is used to further extract deep features from the result of the previous layer. The second output characteristic information output by the Attention layer is input into the CNN layer in the LACNN model, and the CNN layer performs a convolution operation on the second output characteristic information to output third output characteristic information. Specifically, the feature map of the CNN layer convolution operation can be formulated as:

ci = f(wi ⊙ O + b)

where O represents the second output characteristic information, ⊙ represents the dot product, wi represents the window vector of the ith filter, b ∈ R represents a bias term, and f represents a nonlinear transformation function, which can be sigmoid, hyperbolic tangent, etc.; in this embodiment, the ReLU function is preferably used as the nonlinear function.

In this embodiment, n filters are used to generate the following feature maps:

W = [c1, c2, c3, …, cn] (8)

where ci is the feature map generated by the ith filter, and W represents the third output feature information. The convolutional layer may have multiple filters of the same size to learn complementary features, or multiple filters of different sizes.

And fourthly, inputting the second output characteristic information and the third output characteristic information into a merge layer in the LACNN model, so that the merge layer integrates the second output characteristic information and the third output characteristic information and outputs fourth output characteristic information.

In this embodiment, the merge layer is used to integrate the temporal and spatial information to obtain the complete features contained in the data. The second output characteristic information and the third output characteristic information obtained in the above steps are input into the merge layer in the LACNN model, and the merge layer concatenates the second output characteristic information and the third output characteristic information to obtain and output fourth output characteristic information, that is, M = [O, W], where M is the fourth output characteristic information.
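The integration M = [O, W] is a concatenation along the feature axis, as in this small sketch (shapes assumed for illustration):

```python
import numpy as np

# Concatenate the attention output O (temporal features) with the CNN
# output W (spatial features) along the feature axis: M = [O, W].
O = np.ones((6, 4))    # second output characteristic information (assumed shape)
W = np.zeros((6, 8))   # third output characteristic information (assumed shape)
M = np.concatenate([O, W], axis=1)
print(M.shape)  # (6, 12)
```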

And fifthly, inputting the fourth output characteristic information into a GRU layer in the LACNN model, so that the GRU layer processes the fourth output characteristic information and outputs a training prediction result.

In this embodiment, the GRU layer functions as a fully connected layer and replaces one of the original two fully connected layers, so as to further enhance the nonlinear expression capability of the model. The fourth output characteristic information is input into the GRU layer in the LACNN model, and the GRU layer, acting as a fully connected layer, processes the fourth output characteristic information and outputs the training prediction result.
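For completeness, one GRU time step can be sketched as below. This is the standard GRU cell (update gate, reset gate, candidate state), not the patent's exact layer; all shapes and initializations are assumed:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_cell_step(x_t, h_prev, Wz, Wr, Wh, bz, br, bh):
    """One GRU time step: update gate z, reset gate r, candidate state."""
    xh = np.concatenate([h_prev, x_t])
    z = sigmoid(Wz @ xh + bz)                              # update gate
    r = sigmoid(Wr @ xh + br)                              # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([r * h_prev, x_t]) + bh)
    return (1 - z) * h_prev + z * h_tilde                  # new hidden state

rng = np.random.default_rng(3)
n_in, n_hidden = 12, 5
shape = (n_hidden, n_hidden + n_in)
h = np.zeros(n_hidden)
h = gru_cell_step(rng.standard_normal(n_in), h,
                  rng.standard_normal(shape), rng.standard_normal(shape),
                  rng.standard_normal(shape),
                  np.zeros(n_hidden), np.zeros(n_hidden), np.zeros(n_hidden))
print(h.shape)  # (5,)
```

The GRU merges the forget and input roles into a single update gate, so it has fewer parameters than an LSTM cell of the same width.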

And S204, calculating a training error between the training prediction result and training actual output data corresponding to the model training parameters.

Through the steps, after the training prediction result corresponding to the model training parameter is obtained, the training actual output data corresponding to the model training parameter needs to be extracted, and the training error between the training prediction result and the training actual output data is calculated.

S205, adjusting and updating the LACNN model according to the training error to obtain an updated LACNN model.

The LACNN model is adjusted and updated according to the calculated training error to obtain an updated LACNN model. Adjusting and updating the LACNN model specifically comprises adjusting the weights of each layer in the LACNN model via back propagation using the Adam optimization algorithm.
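A single Adam weight update can be sketched as follows (the standard Adam rule with bias-corrected moment estimates; the weights and gradients are toy values for illustration):

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (m)
    and squared gradient (v), bias-corrected by the step count t."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat = m / (1 - b1**t)
    v_hat = v / (1 - b2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

w = np.array([1.0, -2.0])
m = np.zeros_like(w)
v = np.zeros_like(w)
grad = np.array([0.5, -0.5])   # e.g. from back propagation of the MSE loss
w, m, v = adam_step(w, grad, m, v, t=1)
print(w)
```

On the first step the bias correction makes m_hat equal the raw gradient, so each weight moves by almost exactly the learning rate against its gradient sign.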

S206, judging whether the current training iteration number reaches the preset training number and whether the training error meets the preset training error range.

In this embodiment, after the training of the LACNN model by the set of model training parameters is completed, it is necessary to determine whether the current training iteration number reaches the preset training number, and determine whether the training error satisfies the preset training error range.

And S207, if the current training iteration number reaches the preset training number or the training error meets the preset training error range, taking the updated LACNN model as a fault prediction model.

And if the current training iteration number reaches the preset training number or the training error meets the preset training error range, the training of the LACNN model is finished, and the finally updated LACNN model is used as a fault prediction model.

And S208, if the current training iteration number does not reach the preset training number and the training error does not meet the preset training error range, continuing to extract model training parameters from the training parameter set, and training the updated LACNN model.

If the current training iteration number does not reach the preset training number and the training error does not meet the preset training error range, the LACNN model does not meet the application requirement, and the updated LACNN model needs to be trained continuously, so that a group of model training parameters need to be extracted again from the training parameter set continuously, and the updated LACNN model is trained by using the group of model training parameters.
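The control flow of steps S206 to S208 can be sketched as a training loop with the two stopping criteria: stop when the iteration count reaches the preset number of training times, or when the training error falls within the preset error range. The `model_step` callable and the toy convergence below are assumptions for illustration, not the patent's implementation:

```python
import random

def train(model_step, training_set, max_iters=1000, err_lo=0.0, err_hi=0.01):
    """Train until the preset iteration count is reached OR the training
    error enters the preset error range [err_lo, err_hi]. `model_step`
    trains on one group of parameters and returns the training error."""
    iters = 0
    while True:
        params, actual = random.choice(training_set)  # extract one group
        error = model_step(params, actual)            # train + update model
        iters += 1                                    # count the iteration
        if iters >= max_iters or err_lo <= error <= err_hi:
            return iters, error                       # training finished

# Toy stand-in: the "error" halves on every call until it enters the range.
state = {"err": 1.0}
def fake_step(params, actual):
    state["err"] *= 0.5
    return state["err"]

iters, err = train(fake_step, [((0,), 0)], max_iters=100)
print(iters, err)
```

With the toy error sequence 0.5, 0.25, …, the loop stops at the seventh iteration, the first time the error drops to or below 0.01.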

Specifically, in this embodiment, Table 1 is a structural parameter table of the LACNN model. As shown in Table 1, the number of hidden units in the LSTM layer is 256, and the hidden state values of all time steps are returned (return_sequences=True). The Attention layer is applied after the LSTM layer, and weights are not shared when calculating scores. The CNN layer uses filters=64 and kernel_size=1; a convolution kernel of 1 enables cross-channel interaction and information integration while reducing the number of convolution kernel parameters (simplifying the model). Edge padding is used, with padding='same' and activation='relu'. Maximum pooling is used, with a pooling kernel size of 1. The number of hidden nodes of the GRU layer after the merge layer is set to 128, and the hidden state values of all time steps are returned in order to prevent the loss of information. The Adam optimization algorithm is used, with the learning rate set to 0.001. MSE is used as the loss function.

TABLE 1

Layer / setting     Parameters
LSTM                hidden units: 256; return_sequences=True
Attention           weights not shared when calculating scores
CNN                 filters=64; kernel_size=1; padding='same'; activation='relu'
Max pooling         pooling kernel size: 1
GRU (after merge)   hidden nodes: 128; returns all time steps
Optimizer           Adam; learning rate 0.001
Loss function       MSE

Fig. 3 is a schematic structural diagram provided by an embodiment of the IGBT fault prediction system of the present invention. As shown in fig. 3, the IGBT failure prediction system of the present embodiment includes: a processor 21 and a memory 22, the processor 21 being connected to the memory 22. The memory 22 is configured to store a computer program, and the computer program is at least configured to execute the IGBT failure prediction method according to the foregoing embodiment; the processor 21 is used to invoke and execute the computer programs stored in the memory 22.

The IGBT fault prediction system of this embodiment takes the collector-emitter voltage, the collector-emitter current, the gate-emitter voltage and the gate voltage as the parameters for fault prediction, which avoids the singleness of the prediction parameters. By using the fault prediction model established with the LSTM network, the Attention network, the CNN network and the GRU network, the comprehensiveness of information extraction can be improved compared with the traditional LSTM network model applied in the prior art, so that the precision of the fault prediction model is improved and the accuracy of the IGBT fault prediction is further improved.

It is understood that the same or similar parts in the above embodiments may be mutually referred to, and the same or similar parts in other embodiments may be referred to for the content which is not described in detail in some embodiments.

It should be noted that the terms "first," "second," and the like in the description of the present invention are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present invention, the meaning of "a plurality" means at least two unless otherwise specified.

Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.

It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.

It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.

In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.

The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.

In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.

Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.
