Intelligent mutual inductor


Note: this technology, an intelligent transformer (intelligent mutual inductor), was designed and created by 丁飞, 石颉, 杜国庆, 苏新雅, 胡倩, 黄佳悦, 朱家坤 and 申海锋 on 2020-10-23. Abstract: The invention discloses an intelligent transformer comprising a current detection module, a temperature detection module, a data storage module, a remote communication module, an intelligent diagnosis module and an intelligent terminal. The current detection module detects the current in the conductor; the temperature detection module measures the temperature at several parts of the intelligent transformer; the data storage module stores the information collected by the current detection module and the temperature detection module and exchanges information with the intelligent diagnosis module and with the intelligent terminal. The intelligent transformer of this application greatly improves diagnosis efficiency and accuracy: it detects the transformer current and temperature in real time, transmits and stores the detected data in real time, and carries an on-board intelligent diagnosis module that adopts an ART neural network fused with the BP algorithm to diagnose the transformer state from real-time data.

1. An intelligent transformer, comprising: a current detection module, a temperature detection module, a data storage module, a remote communication module, an intelligent diagnosis module and an intelligent terminal;

the current detection module is used for detecting the current of the lead;

the temperature detection module is used for measuring the temperature at a plurality of parts of the intelligent transformer;

the data storage module is used for storing the information acquired by the current detection module and the temperature detection module, and for exchanging information with the intelligent diagnosis module and with the intelligent terminal;

the intelligent diagnosis module reads the data of the data storage module, diagnoses the state of the transformer, and can write the diagnosis result into the data storage module;

the intelligent terminal is used for inquiring and displaying the information stored in the data storage module;

the output end of the current detection module is connected with the input end of the data storage module;

the output end of the temperature detection module is connected with the input end of the data storage module;

wherein the data storage module is connected with the intelligent diagnosis module;

the data storage module is connected with the intelligent terminal, and the data storage module is in communication connection with the intelligent terminal through the remote communication module.

2. The intelligent transformer according to claim 1, wherein the current detection module adopts patch-type thermistor sensors, with 8 patch-type thermistor sensors evenly distributed over the inner and outer rings of the transformer; a high-resolution analog-to-digital conversion chip converts the current signal into a digital signal, which is then transmitted to the data storage module.

3. The intelligent transformer according to claim 1, wherein the temperature detection module adopts thermistor sensors mounted at a plurality of positions on the transformer to detect the transformer temperature; the 8 patch-type thermistor sensors evenly distributed over the inner and outer rings of the transformer also form a Wheatstone bridge, which detects the temperature at each point and, at the same time, the difference between the cable temperature and the ambient temperature; a high-resolution analog-to-digital conversion chip converts the current signal into a digital signal, which is then transmitted to the data storage module.

4. The intelligent transformer of claim 1, wherein the data storage module is a removable data storage device.

5. The intelligent transformer according to claim 1, wherein the intelligent diagnosis module adopts a digital processing chip that can acquire, store, remotely communicate and intelligently diagnose the collected current and temperature information; a metal mesh shielding enclosure protects the digital chip from electromagnetic interference under strong magnetic fields.

6. The intelligent transformer according to claim 1, wherein the intelligent diagnosis module stores a transformer calculation model, and the transformer calculation model adopts an ART (Adaptive Resonance Theory) calculation model;

the ART computational model consists of two layers of neurons including two subsystems: a comparison layer C and a recognition layer R; further comprising: three control signal RESET signals, logic control signals G1 and G2;

wherein the comparison layer C has n nodes, and each node receives signals from three sources: an external input signal x_i; the feedback signal t_ij carried by the top-down vector T_j of the R-layer winning neuron; and the control signal from G1; the output of each C-layer node is generated according to the 2/3 "majority vote" principle, i.e. the output value c_i takes the value shared by the majority of the three signals x_i, t_ij and G1; when the network starts to operate, G1 = 1 and the recognition layer has not yet produced a competition-winning neuron, so the feedback signal is 0, and by the 2/3 rule the output of the C layer is determined by the input signal, i.e. C = X; once the recognition layer returns a feedback signal, if x_i = t_ij then c_i = x_i, otherwise c_i = 0; that is, the control signal G1 lets the comparison layer distinguish the stages of network operation: at the start of operation G1 makes the C layer output the input signal directly, and afterwards G1 makes the C layer perform its comparison function, in which c_i = 1 only when x_i and t_ij are both 1 and c_i = 0 otherwise; in this way the signal t_ij returned from the R layer regulates the output of the C layer;

the recognition layer R is composed of a multilayer feedforward neural network and has m nodes representing m input pattern classes, where m can be dynamically increased to establish new pattern classes; the bottom-up (inner) weight vector connecting the C layer to the jth node of R is denoted B_j = (b_1j, b_2j, ..., b_nj); the output vector C of the C layer is sent forward along the m inner weight vectors B_j (j = 1, 2, ..., m), and after it reaches the neuron nodes of the R layer, competition produces a winning node j that indicates the class of the input pattern; the winning node outputs r_j = 1 and the other nodes output 0;

each neuron of the R layer corresponds to two weight vectors: one is the inner weight vector B_j that converges the C-layer feedforward signal onto the R layer; the other is the outer weight vector T_j that distributes the R-layer feedback signal back to the C layer, and T_j is the typical vector of the corresponding R-layer pattern-class node;

the control signals G1, G2 and Reset function as follows: with X0 denoting the logical OR of the elements of the input pattern X and R0 the logical OR of the elements of the R-layer output, G1 = X0 AND (NOT R0), i.e. G1 = 1 only when the R-layer output vector R is all 0 and the input X is not all 0, and G1 = 0 otherwise; the signal G2 detects whether the input pattern X is all 0: G2 equals the logical OR of the components of X, so if all x_i (i = 1, 2, ..., n) are 0 then G2 = 0, otherwise G2 = 1; the Reset signal invalidates the winning neuron of the R-layer competition: if, according to a preset measurement standard, the similarity between T_j and X does not reach the preset similarity ρ, the two are not sufficiently close, and the system issues a Reset signal to invalidate the winning neuron;

the input layer is responsible for receiving external information and passing the input sample to the competition layer, playing an observation role; the competition layer is responsible for analysing and comparing, classifying correctly according to the known trained model, and automatically creating a new class if the analysis result does not exist in the known model; the control signal is responsible for monitoring the similarity ρ of each layer's analysis result, and if the result does not reach the preset similarity ρ, the analysis is performed again.

7. The intelligent transformer according to claim 6, wherein the ART calculation model operates as follows:

when the network runs, it receives an input sample from the environment and checks the degree of matching between the input sample and all classes of the R layer; for the class with the highest matching degree, the network goes on to examine the similarity between that class's typical vector and the current input pattern; the similarity is judged against a pre-designed reference threshold, and no more than two cases can occur:

firstly, if the similarity exceeds the reference threshold, the pattern class is selected as the representative class of the current input pattern; the weight adjustment rule is that only the pattern class whose similarity exceeds the reference threshold adjusts its corresponding inner and outer weight vectors, so that a larger similarity is obtained when a sample close to the current input pattern is met in the future, while no change is made to the other weight vectors;

secondly, if the similarity does not exceed the threshold, the similarity of the R-layer pattern class with the next-highest matching degree is examined; if it exceeds the reference threshold, the operation of the network returns to case 1, otherwise it falls to case 2 again; if the operation returns to case 2 repeatedly, it means that in the end no pattern class has a similarity with the current input pattern exceeding the reference threshold; at this point a node representing a new pattern class must be established at the network output to represent and store this pattern so that it can participate in subsequent matching;

the network follows the above process for each new input sample it receives; for each input, the operation of the network can be summarized into three stages, namely an identification stage, a comparison stage and a search stage:

(1) identification phase

before the network receives an input pattern it is in a waiting state; at this time the input X = 0 and the control signal G2 = 0; the outputs of the R-layer units are therefore all 0, and all nodes have the same chance of winning the competition; when the network input is not all 0, G2 is set to 1; information flows bottom-up and G1 = 1 (the input is not all 0 while the R-layer output is still all 0), so by the 2/3 rule the C-layer output is C = X; C is fed upward and acts on the bottom-up weight vectors B_j to produce a vector T, which is fed into the R layer and starts the internal competition; assuming the winning node is j, the R-layer output is r_j = 1 and the other node outputs are 0;

(2) comparison phase

the output information of the R layer returns top-down to the C layer; since r_j = 1, the top-down weight vector T_j connected to node j of the R layer is activated and sent back down to the C layer;

at this time the R-layer outputs are not all 0, so G1 = 0, and the next output C' of the C layer depends on the top-down weight vector T_j of the R layer and the input pattern X of the network;

the similarity is tested against a pre-specified threshold: if C' carries sufficiently similar information, the competition result is correct; otherwise the competition result does not meet the requirement, a Reset signal is issued to invalidate the last winning node, and that node is barred from winning again during the matching of this pattern; the network then enters the search stage;

(3) search phase

the search stage begins when the Reset signal invalidates the winning node: R becomes all 0, G1 = 1, and the current input pattern X is obtained again at the C-layer output; the network therefore re-enters the identification and comparison stages to obtain a new winning node; this repeats until some winning node K is found whose match with the input vector X is sufficient, whereupon the pattern X is assigned to the pattern class connected to R-layer node K, i.e. the bottom-up and top-down weight vectors of that node are modified according to a given method; if all R-layer output nodes have been searched without finding a pattern sufficiently close to X, for example when the network meets X for the first time, an R-layer node is added to represent X or patterns close to X;

if the similarity is greater than ρ, j is accepted as the winning node, and the bottom-up and top-down weight vectors of that R-layer node are modified so that inputs similar to X are matched more easily later; the R-layer nodes suppressed by Reset signals are restored, and the network turns to the comparison stage to await the next input; otherwise a Reset signal is issued, r_j is set to 0, and the search stage begins.

8. The intelligent transformer according to claim 6, wherein the recognition layer R is a feedforward neural network model, namely a multilayer feedforward neural network built from two layers of neurons using the BP (back-propagation) neural network algorithm;

the feedforward neural network comprises an input layer of 10 neurons, a hidden layer of 10 neurons and an output layer of 2 outputs; the input layer corresponds to: temperature (T), voltage (V), load (VA), load ratio (%), current phase difference, composite error and deviation; the output layer corresponds to: operating state and remaining life.

9. The intelligent transformer according to claim 8, wherein the training method of the feedforward neural network model comprises the following steps:

firstly, data acquisition: operation experiments are carried out on ordinary transformers to obtain m groups of parameters under different operating states, recorded as a data set D; each group of parameters consists of x_1 ~ x_i and is recorded as a vector X, and the operating state consists of y_1 ~ y_j (the operating states may be divided into j states manually, or j-class learning may be performed through unsupervised learning) and is recorded as a vector Y; judging the operating state of the transformer is thereby converted into a j-class classification task with i characteristic parameters;

secondly, the data obtained from the experiments are sampled with the bootstrap ("self-service") method to divide the training set and the test set: specifically, given a data set D containing m samples, sampling it produces a data set D'; each time a sample is picked at random from D and copied into D', then put back into the initial data set D so that it can still be picked at the next draw; after this process has been repeated m times, a data set D' containing m samples is obtained, which is the bootstrap sampling result;

obviously, some samples of D appear several times in D' while others never appear; a simple estimate shows that the probability that a sample is never picked in m draws is (1 - 1/m)^m, which tends to 1/e ≈ 0.368 as m tends to infinity; D' is used as the training set of the machine-learning model, and D \ D' (the samples that never appear in D') is used as the test set;

thirdly, training is carried out: for each training sample, the BP algorithm performs the following operations: the input sample is presented to the input-layer neurons, and the signal is forwarded layer by layer until the output-layer result is produced; the output-layer error is then calculated and propagated back to the hidden-layer neurons, and finally the connection weights and thresholds are adjusted according to the hidden-layer neurons' errors; this iterative process loops until a stopping condition is reached.

Technical Field

The invention belongs to the field of intelligent diagnosis of power equipment, and relates to an intelligent diagnosis system of a power transformer and a diagnosis method based on decision tree classification.

Background

Instrument transformers (mutual inductors) are widely used in modern power grids, and their operating reliability and performance stability have a great influence on the stable and reliable operation of the grid. Academia has therefore proposed many solutions to health-management problems of power transformers such as fault diagnosis, ageing and service life, but a large number of problems remain unsolved at present, and most of the existing solutions have not produced commercial technical products.

For example, in patent CN103531340A the temperature detection of the transformer collects only one point (in that solution only one temperature detection point, item 5, is provided), but in actual operation the measured wire cannot expose the transformer to a uniform magnetic field (see Fig. 2, the magnetic field distribution of the wire in different operating states), so the temperature produced by the current effect differs from point to point on the transformer.

In addition, owing to the production process, the specific heat capacity and thermal conductivity of the materials of each layer of the transformer differ, so a single-point temperature cannot reflect the overall condition of the transformer.

At present, transformer fault diagnosis obtains a mathematical model by analysing existing data; the accuracy of this method is high in the initial stage of transformer operation, but as the operating time of the equipment increases, the mathematical model drifts owing to material ageing, mechanical vibration, electromagnetic interference and the like, causing misjudgment or missed judgment, and the problem requires continued research.

Disclosure of Invention

The invention aims to provide an intelligent transformer that remedies the defects of the prior art.

An intelligent transformer, comprising: a current detection module, a temperature detection module, a data storage module, a remote communication module, an intelligent diagnosis module and an intelligent terminal;

the current detection module is used for detecting the current of the lead;

the temperature detection module is used for measuring the temperature at a plurality of parts of the intelligent transformer;

the data storage module is used for storing the information acquired by the current detection module and the temperature detection module, and for exchanging information with the intelligent diagnosis module and with the intelligent terminal;

the intelligent diagnosis module reads the data of the data storage module, diagnoses the state of the transformer, and can write the diagnosis result into the data storage module;

the intelligent terminal is used for inquiring and displaying the information stored in the data storage module;

the output end of the current detection module is connected with the input end of the data storage module;

the output end of the temperature detection module is connected with the input end of the data storage module;

wherein the data storage module is connected with the intelligent diagnosis module;

the data storage module is connected with the intelligent terminal, and the data storage module is in communication connection with the intelligent terminal through the remote communication module.

Furthermore, the current detection module adopts patch-type thermistor sensors, with 8 patch-type thermistor sensors evenly distributed over the inner and outer rings of the transformer; a high-resolution analog-to-digital conversion chip converts the current signal into a digital signal, which is then transmitted to the data storage module 3.

Furthermore, the temperature detection module adopts thermistor sensors mounted at a plurality of positions on the transformer to detect the transformer temperature; in addition, the 8 patch-type thermistor sensors evenly distributed over the inner and outer rings of the transformer form a Wheatstone bridge, which detects the temperature at each point and, at the same time, the difference between the cable temperature and the ambient temperature; a high-resolution analog-to-digital conversion chip converts the current signal into a digital signal, which is then transmitted to the data storage module.

Further, the data storage module is a removable data storage device.

Furthermore, the intelligent diagnosis module adopts a digital processing chip that can acquire, store, remotely communicate and intelligently diagnose the collected current and temperature information; a metal mesh shielding enclosure protects the digital chip from electromagnetic interference under strong magnetic fields.

Further, the intelligent diagnosis module stores a transformer calculation model, and the transformer calculation model adopts an ART (Adaptive Resonance Theory) calculation model;

the ART computational model consists of two layers of neurons forming two subsystems, a comparison layer C and a recognition layer R, together with three control signals: a RESET signal and the logic control signals G1 and G2;

wherein the comparison layer C has n nodes, and each node receives signals from three sources: an external input signal x_i; the feedback signal t_ij carried by the top-down vector T_j of the R-layer winning neuron; and the control signal from G1; the output of each C-layer node is generated according to the 2/3 "majority vote" principle, i.e. the output value c_i takes the value shared by the majority of the three signals x_i, t_ij and G1; when the network starts to operate, G1 = 1 and the recognition layer has not yet produced a competition-winning neuron, so the feedback signal is 0, and by the 2/3 rule the output of the C layer is determined by the input signal, i.e. C = X; once the recognition layer returns a feedback signal, if x_i = t_ij then c_i = x_i, otherwise c_i = 0; that is, the control signal G1 lets the comparison layer distinguish the stages of network operation: at the start of operation G1 makes the C layer output the input signal directly, and afterwards G1 makes the C layer perform its comparison function, in which c_i = 1 only when x_i and t_ij are both 1 and c_i = 0 otherwise; in this way the signal t_ij returned from the R layer regulates the output of the C layer;

the recognition layer R is composed of a multilayer feedforward neural network and has m nodes representing m input pattern classes, where m can be dynamically increased to establish new pattern classes; the bottom-up (inner) weight vector connecting the C layer to the jth node of R is denoted B_j = (b_1j, b_2j, ..., b_nj); the output vector C of the C layer is sent forward along the m inner weight vectors B_j (j = 1, 2, ..., m), and after it reaches the neuron nodes of the R layer, competition produces a winning node j that indicates the class of the input pattern; the winning node outputs r_j = 1 and the other nodes output 0;

each neuron of the R layer corresponds to two weight vectors: one is the inner weight vector B_j that converges the C-layer feedforward signal onto the R layer; the other is the outer weight vector T_j that distributes the R-layer feedback signal back to the C layer, and T_j is the typical vector of the corresponding R-layer pattern-class node;

the control signals G1, G2 and Reset function as follows: with X0 denoting the logical OR of the elements of the input pattern X and R0 the logical OR of the elements of the R-layer output, G1 = X0 AND (NOT R0), i.e. G1 = 1 only when the R-layer output vector R is all 0 and the input X is not all 0, and G1 = 0 otherwise; the signal G2 detects whether the input pattern X is all 0: G2 equals the logical OR of the components of X, so if all x_i (i = 1, 2, ..., n) are 0 then G2 = 0, otherwise G2 = 1; the Reset signal invalidates the winning neuron of the R-layer competition: if, according to a preset measurement standard, the similarity between T_j and X does not reach the preset similarity ρ, the two are not sufficiently close, and the system issues a Reset signal to invalidate the winning neuron;

the input layer is responsible for receiving external information and passing the input sample to the competition layer, playing an observation role; the competition layer is responsible for analysing and comparing, classifying correctly according to the known trained model, and automatically creating a new class if the analysis result does not exist in the known model; the control signal is responsible for monitoring the similarity ρ of each layer's analysis result, and if the result does not reach the preset similarity ρ, the analysis is performed again.

Further, the operation flow of the ART calculation model is as follows:

when the network runs, it receives an input sample from the environment and checks the degree of matching between the input sample and all classes of the R layer; for the class with the highest matching degree, the network goes on to examine the similarity between that class's typical vector and the current input pattern; the similarity is judged against a pre-designed reference threshold, and no more than two cases can occur:

firstly, if the similarity exceeds the reference threshold, the pattern class is selected as the representative class of the current input pattern; the weight adjustment rule is that only the pattern class whose similarity exceeds the reference threshold adjusts its corresponding inner and outer weight vectors, so that a larger similarity is obtained when a sample close to the current input pattern is met in the future, while no change is made to the other weight vectors;

secondly, if the similarity does not exceed the threshold, the similarity of the R-layer pattern class with the next-highest matching degree is examined; if it exceeds the reference threshold, the operation of the network returns to case 1, otherwise it falls to case 2 again; if the operation returns to case 2 repeatedly, it means that in the end no pattern class has a similarity with the current input pattern exceeding the reference threshold; at this point a node representing a new pattern class must be established at the network output to represent and store this pattern so that it can participate in subsequent matching;

the network follows the above process for each new input sample it receives; for each input, the operation of the network can be summarized into three stages, namely an identification stage, a comparison stage and a search stage:

(1) identification phase

before the network receives an input pattern it is in a waiting state; at this time the input X = 0 and the control signal G2 = 0; the outputs of the R-layer units are therefore all 0, and all nodes have the same chance of winning the competition; when the network input is not all 0, G2 is set to 1; information flows bottom-up and G1 = 1 (the input is not all 0 while the R-layer output is still all 0), so by the 2/3 rule the C-layer output is C = X; C is fed upward and acts on the bottom-up weight vectors B_j to produce a vector T, which is fed into the R layer and starts the internal competition; assuming the winning node is j, the R-layer output is r_j = 1 and the other node outputs are 0;

(2) comparison phase

the output information of the R layer returns top-down to the C layer; since r_j = 1, the top-down weight vector T_j connected to node j of the R layer is activated and sent back down to the C layer;

at this time the R-layer outputs are not all 0, so G1 = 0, and the next output C' of the C layer depends on the top-down weight vector T_j of the R layer and the input pattern X of the network;

the similarity is tested against a pre-specified threshold: if C' carries sufficiently similar information, the competition result is correct; otherwise the competition result does not meet the requirement, a Reset signal is issued to invalidate the last winning node, and that node is barred from winning again during the matching of this pattern; the network then enters the search stage;

(3) search phase

the search stage begins when the Reset signal invalidates the winning node: R becomes all 0, G1 = 1, and the current input pattern X is obtained again at the C-layer output; the network therefore re-enters the identification and comparison stages to obtain a new winning node; this repeats until some winning node K is found whose match with the input vector X is sufficient, whereupon the pattern X is assigned to the pattern class connected to R-layer node K, i.e. the bottom-up and top-down weight vectors of that node are modified according to a given method; if all R-layer output nodes have been searched without finding a pattern sufficiently close to X, for example when the network meets X for the first time, an R-layer node is added to represent X or patterns close to X;

if the similarity is greater than ρ, j is accepted as the winning node, and the bottom-up and top-down weight vectors of that R-layer node are modified so that inputs similar to X are matched more easily later; the R-layer nodes suppressed by Reset signals are restored, and the network turns to the comparison stage to await the next input; otherwise a Reset signal is issued, r_j is set to 0, and the search stage begins.

Further, the recognition layer R is a feedforward neural network model, namely a multilayer feedforward neural network built from two layers of neurons using the BP (back-propagation) neural network algorithm;

the feedforward neural network comprises an input layer of 10 neurons, a hidden layer of 10 neurons and an output layer of 2 outputs; the input layer corresponds to: temperature (T), voltage (V), load (VA), load ratio (%), current phase difference, composite error and deviation; the output layer corresponds to: operating state and remaining life;

further, the training method of the feedforward neural network model comprises the following steps:

firstly, data acquisition: operation experiments are carried out on ordinary transformers to obtain m groups of parameters under different operating states, recorded as a data set D; each group of parameters consists of x_1 ~ x_i and is recorded as a vector X, and the operating state consists of y_1 ~ y_j (the operating states may be divided into j states manually, or j-class learning may be performed through unsupervised learning) and is recorded as a vector Y; judging the operating state of the transformer is thereby converted into a j-class classification task with i characteristic parameters;

secondly, the data obtained from the experiments are sampled with the bootstrap ("self-service") method to divide the training set and the test set: specifically, given a data set D containing m samples, sampling it produces a data set D'; each time a sample is picked at random from D and copied into D', then put back into the initial data set D so that it can still be picked at the next draw; after this process has been repeated m times, a data set D' containing m samples is obtained, which is the bootstrap sampling result;

obviously, some samples of D appear several times in D' while others never appear; a simple estimate shows that the probability that a sample is never picked in m draws is (1 - 1/m)^m, which tends to 1/e ≈ 0.368 as m tends to infinity; D' is used as the training set of the machine-learning model, and D \ D' (the samples that never appear in D') is used as the test set.

Thirdly, training is carried out: for each training sample, the BP algorithm performs the following operations: the input sample is presented to the input-layer neurons, and the signal is forwarded layer by layer until the output-layer result is produced; the output-layer error is then calculated and propagated back to the hidden-layer neurons, and finally the connection weights and thresholds are adjusted according to the hidden-layer neurons' errors; this iterative process loops until a stopping condition is reached.

The advantages of the application are that the intelligent transformer can detect the transformer current and temperature in real time, transmit and store the detected data in real time, and carry its own intelligent diagnosis module, which adopts an ART neural network fused with the BP algorithm and can diagnose the transformer state from real-time data.

Drawings

The invention will now be described in further detail with reference to the embodiments shown in the drawings, to which, however, the invention is not restricted.

Figure 1 is a layout of prior art CN 103531340A.

Fig. 2 is a diagram of the magnetic field distribution of the wire in different operating states.

Fig. 3 is a configuration diagram of a multilayer feedforward neural network of the present application.

FIG. 4 is a diagram of threshold control functions of the hidden layer and the output layer of the present application.

Fig. 5 is an ART neural network of the present application.

Fig. 6 is a configuration diagram of the intelligent transformer.

The reference numerals in fig. 2-6 are illustrated as follows:

the intelligent diagnosis system comprises a current detection module 1, a temperature detection module 2, a data storage module 3, a remote communication module 4, an intelligent diagnosis module 5 and an intelligent terminal 6.

Detailed Description

Example 1: an intelligent transformer, comprising: a current detection module 1, a temperature detection module 2, a data storage module 3, a remote communication module 4, an intelligent diagnosis module 5 and an intelligent terminal 6;

the current detection module 1 is used for detecting the current of the lead, the current detection module can adopt patch type thermistor sensors, and 8 patch type thermistor sensors are averagely attached to the inner ring and the outer ring of the mutual inductor;

the 8 patch-type thermistor sensors also form a Wheatstone bridge, which detects the temperature at each point and, at the same time, the difference between the cable temperature and the ambient temperature; a high-resolution analog-to-digital conversion chip converts the current signal into a digital signal for data storage, remote communication and intelligent diagnosis.

The temperature detection module 2 adopts thermistor sensors; thermistors are arranged at different positions on the transformer to detect the transformer temperature, and a high-resolution analog-to-digital conversion chip converts the current signal into a digital signal for data storage, remote communication and intelligent diagnosis.
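By way of illustration, the sketch below shows how a bridge reading of this kind could be converted into a temperature. It assumes a beta-model NTC thermistor in one arm of an equal-arm Wheatstone bridge; all component values (V_EXC, R_FIXED, R0, BETA) are hypothetical, since the patent does not specify them:

```python
import math

# Assumed component values, for illustration only.
V_EXC = 3.3         # bridge excitation voltage (V)
R_FIXED = 10_000.0  # the three fixed bridge resistors (ohm)
R0, T0 = 10_000.0, 298.15  # thermistor resistance at 25 degC, in ohm and K
BETA = 3950.0       # beta constant of the assumed NTC thermistor

def bridge_to_resistance(v_out: float) -> float:
    """Recover the thermistor resistance from the bridge output voltage.

    One arm is a fixed divider at V_EXC/2; the other carries the
    thermistor, so v_out = V_EXC * (R_t / (R_t + R_FIXED) - 0.5).
    """
    ratio = v_out / V_EXC + 0.5
    return R_FIXED * ratio / (1.0 - ratio)

def resistance_to_celsius(r_t: float) -> float:
    """Beta-model NTC equation: 1/T = 1/T0 + ln(R/R0) / BETA."""
    t_kelvin = 1.0 / (1.0 / T0 + math.log(r_t / R0) / BETA)
    return t_kelvin - 273.15

# Example: a 0.12 V reading across the bridge diagonal -> ~21.7 degC
print(round(resistance_to_celsius(bridge_to_resistance(0.12)), 2))
```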

The data storage module 3 is a data storage device (for example an SD card, mobile hard disk or USB flash drive) that stores the collected information. A pluggable memory with selectable capacity can be chosen, so that stored data are protected against the loss or distortion of detection data caused by communication interruption or signal interference; the data can be read again after communication is restored, or the copied data can be taken out manually.

The remote communication module 4 employs common devices and modes such as WiFi and CDMA, i.e. common field communication protocols plus a free protocol, with protocol selection via a DIP switch so as to suit common off-the-shelf components; the collected data are transmitted to the intelligent terminal through the communication network. The free protocol supports a user-defined communication protocol to realize secure encryption of data transmission.

The intelligent diagnosis module 5 reads the data of the data storage module 3 to diagnose the state of the transformer. Specifically, the intelligent diagnosis module 5 contains a transformer model: a mathematical model is established from the manufacturing parameters and operating data of existing transformers, and the transformer model is learned and trained with an artificial-intelligence algorithm to build a machine-learning model of the ordinary transformer; at the same time a data-iteration algorithm is adopted, so that during equipment operation the machine-learning model is iteratively optimized while fault diagnosis is performed, continuously improving the accuracy of fault diagnosis.

The intelligent diagnosis module 5 can adopt a digital processing chip that acquires, stores, remotely communicates and intelligently diagnoses the collected current and temperature information; a metal mesh shielding enclosure protects the digital chip from electromagnetic interference under strong magnetic fields.

For the mathematical model of the intelligent diagnosis module 5: a mathematical model is established from the manufacturing parameters and operating data of existing transformers, the transformer model is learned and trained with an artificial-intelligence algorithm to build a machine-learning model of the ordinary transformer, and a data-iteration algorithm is adopted so that, during equipment operation, the machine-learning model is iteratively optimized while fault diagnosis is performed, continuously improving the accuracy of fault diagnosis;

the mathematical model is established by collecting and analysing the material parameters of existing ordinary transformers in the production process, including the thickness, size, permeability, specific heat capacity and interlayer gap of the silicon steel sheet, and the permeability, specific heat capacity, heat-transfer coefficient and expansion rate of the encapsulating resin; the mathematical model relating the transformer material, temperature and current change is then determined by analysing these material parameters together with the current and temperature parameters from operation tests.

For the machine-learning model, a BP neural network algorithm and an ART neural network are mainly adopted; model training is performed on the randomly sampled training set to construct a machine-learning model with strong generalization ability on new data.

Operation experiments are carried out on ordinary transformers to obtain m groups of parameters under different operating states, recorded as a data set D, where each group of parameters consists of x_1 ~ x_i, recorded as a vector X, and the operating state consists of y_1 ~ y_j (the operating states may be divided into j states manually, or j-class learning may be performed through unsupervised learning), recorded as a vector Y; judging the operating state of the transformer is thereby converted into a j-class classification task with i characteristic parameters:

First, the experimental data are sampled with the bootstrap ("self-service") method to divide the training set and the test set. Specifically, given a data set D containing m samples, we sample it to produce a data set D': each time a sample is picked at random from D, copied into D', and then put back into the initial data set D, so that it can still be picked at the next draw; after repeating this process m times we obtain a data set D' containing m samples, which is the bootstrap result. Obviously, some samples of D appear several times in D' while others never appear. A simple estimate shows that the probability that a sample is never picked in m draws is (1 - 1/m)^m, which tends to 1/e ≈ 0.368 as m tends to infinity. D' is used as the training set of the machine-learning model, and D \ D' (the samples that never appear in D') as the test set;
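A minimal sketch of this bootstrap split follows (Python is used purely for illustration; `dataset` stands for the m recorded parameter groups):

```python
import random

def bootstrap_split(dataset):
    """Bootstrap ('self-service') sampling: draw m samples with
    replacement to form the training set D'; the samples never drawn
    form the out-of-bag test set D \\ D'."""
    m = len(dataset)
    drawn = [random.randrange(m) for _ in range(m)]
    chosen = set(drawn)
    train = [dataset[i] for i in drawn]
    test = [dataset[i] for i in range(m) if i not in chosen]
    return train, test

# Numerical check of the estimate used above: the probability that a
# given sample is never drawn is (1 - 1/m)^m -> 1/e ~ 0.368.
m = 10_000
print((1 - 1 / m) ** m)  # ~0.36786
```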

Secondly, a BP neural network algorithm is adopted, and a multilayer feedforward neural network is formed from two layers of neurons. Specifically, given the training set D = {X, Y}, an input sample is described by i feature attributes and a j-dimensional real-valued vector is output. For convenience of discussion, Fig. 3 shows a multilayer feedforward network structure with i input neurons, j output neurons and q hidden neurons, where the threshold of the d-th output-layer neuron is denoted θ_d, the threshold of the h-th hidden-layer neuron is denoted μ_h, the connection weight between the h-th hidden-layer neuron and the d-th output-layer neuron is ω_hd, and the connection weight between the t-th input-layer neuron and the h-th hidden-layer neuron is v_th. The input received by the h-th hidden-layer neuron is α_h = Σ_t v_th x_t (t = 1, ..., i), and the input received by the d-th output-layer neuron is β_d = Σ_h ω_hd b_h (h = 1, ..., q), where b_h is the output of the h-th hidden-layer neuron.

Third, the hidden layer and the output layer both use the Sigmoid function (see Fig. 4), so the hidden-layer neuron output is b_h = Sigmoid(α_h - μ_h) and the output-layer neuron output is y_d = Sigmoid(β_d - θ_d). For a training example (X_K, Y_K), suppose the output of the neural network is Ŷ_K = (ŷ_1, ŷ_2, ..., ŷ_j), with ŷ_d = Sigmoid(β_d - θ_d); then the mean squared error of the network on (X_K, Y_K) is E_K = (1/2) Σ_d (ŷ_d - y_d)^2. There are (i + j + 1) × q + j parameters to be determined in the whole neural network: i × q weights from the input layer to the hidden layer, q × j weights from the hidden layer to the output layer, q hidden-layer neuron thresholds and j output-layer neuron thresholds.
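In code, one forward pass and the per-example error of the i-q-j network just described might look as follows (a sketch; the NumPy array shapes are an assumed convention, not the patent's):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, V, mu, W, theta):
    """Forward pass of the i-q-j network described above.
    x: (i,) input; V: (i, q) input-to-hidden weights v_th;
    mu: (q,) hidden thresholds; W: (q, j) hidden-to-output
    weights w_hd; theta: (j,) output thresholds."""
    b = sigmoid(x @ V - mu)        # b_h = Sigmoid(alpha_h - mu_h)
    y = sigmoid(b @ W - theta)     # y_d = Sigmoid(beta_d - theta_d)
    return b, y

def mse(y_hat, y):
    """E_K = 1/2 * sum_d (y_hat_d - y_d)^2 for one training example."""
    return 0.5 * np.sum((y_hat - y) ** 2)
```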

Fourthly, BP is an iterative learning algorithm that updates the parameter estimates in each iteration with a generalized perceptron learning rule; the update of an arbitrary parameter δ has the form δ ← δ + Δδ. With a specified number of iterations, the BP algorithm determines the (i + j + 1) × q + j parameters of the neural network on the training set, giving a machine-learning model F(x); the performance indexes of the learned model are then verified on the test set; the model is accepted if they reach the standard, and otherwise the number of iterations is increased until the performance reaches the standard.

Fifth, the workflow of the BP algorithm is given below (see Fig. 5). For each training sample, the BP algorithm performs the following operations: the input sample is presented to the input-layer neurons and the signal is forwarded layer by layer until the output-layer result is produced; the output-layer error is then calculated and propagated back to the hidden-layer neurons, and finally the connection weights and thresholds are adjusted according to the hidden-layer neurons' errors. This iterative process loops until some stopping condition is reached, e.g. the training error has fallen to a small value.

Input: training set D = {X, Y}; X = [X_1, X_2, ..., X_K], Y = [Y_1, Y_2, ..., Y_K]

Learning rate: η

The BP algorithm workflow is as follows:

and (3) outputting: multilayer feedforward neural network for determining connection weight and threshold value F (x)

The neural-network machine model F(x) trained on the ordinary-transformer data set is transplanted to the intelligent transformer. F(x) serves as the hidden layer of an ART (Adaptive Resonance Theory) neural network, and while the intelligent transformer operates it performs unsupervised incremental learning or online learning at the same time as state discrimination.

The ART neural network is constructed as follows:

first, ART consists of two layers of neurons forming two subsystems: a comparison layer C and a recognition layer R; there are three control signal RESET, logic control signals G1 and G2 (fig. 5).

Second, the comparison layer C has n nodes, each receiving signals from three sources: an external input signal x_i; the feedback signal t_ij carried by the top-down vector T_j of the R-layer winning neuron; and the control signal from G1. The output of each C-layer node is generated according to the 2/3 "majority vote" principle, i.e. the output value c_i takes the value shared by the majority of the three signals x_i, t_ij and G1. When the network starts to operate, G1 = 1 and the recognition layer has not yet produced a competition-winning neuron, so the feedback signal is 0; by the 2/3 rule the C-layer output is determined by the input signal, i.e. C = X. Once the recognition layer returns a feedback signal, if x_i = t_ij then c_i = x_i, otherwise c_i = 0. It can be seen that the control signal G1 lets the comparison layer distinguish the stages of network operation: at the start of operation G1 makes the C layer output the input signal directly, and afterwards G1 makes the C layer perform its comparison function, in which c_i = 1 only when x_i and t_ij are both 1, and 0 otherwise. Thus the signal t_ij returned from the R layer regulates the output of the C layer.

Thirdly, the recognition layer R is composed of the aforementioned multilayer feedforward neural network. It has m nodes representing m input pattern classes, and m can be dynamically incremented to set up new pattern classes. The bottom-up (inner) weight vector connecting the C layer to the jth node of R is denoted B_j = (b_1j, b_2j, ..., b_nj). The output vector C of the C layer is sent forward along the m inner weight vectors B_j (j = 1, 2, ..., m); after it reaches the neuron nodes of the R layer, competition produces a winning node j that indicates the class of the input pattern. The winning node outputs r_j = 1 and the remaining nodes output 0. Each R-layer neuron corresponds to two weight vectors: one is the inner weight vector B_j that converges the C-layer feedforward signal onto the R layer; the other is the outer weight vector T_j that distributes the R-layer feedback signal back to the C layer, and T_j is the typical vector of the corresponding R-layer pattern-class node.

Fourth, the control signals G1, G2 and Reset function as follows: with X0 denoting the logical OR of the elements of the input pattern X and R0 the logical OR of the elements of the R-layer output, G1 = X0 AND (NOT R0), i.e. G1 = 1 only when the R-layer output vector R is all 0 and the input X is not all 0, and G1 = 0 otherwise. The signal G2 detects whether the input pattern X is all 0: G2 equals the logical OR of the components of X, so if all x_i (i = 1, 2, ..., n) are 0 then G2 = 0, otherwise G2 = 1. The Reset signal invalidates the winning neuron of the R-layer competition: if, according to a preset measurement standard, the similarity between T_j and X does not reach the preset similarity ρ, the two are not sufficiently close, and the system issues a Reset signal to invalidate the winning neuron.

Fifth, the input layer receives external information and passes the input sample to the competition layer, playing an observation role; the competition layer analyses and compares, classifies correctly according to the known trained model, and automatically creates a new class if the analysis result does not exist in the known model. The control signal monitors the similarity ρ of each layer's analysis result, and if the result does not reach the preset similarity ρ, the analysis is performed again.
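A compact sketch of the control logic from points two to five follows, for binary (0/1 integer) pattern vectors; the NumPy encoding is an illustrative convention rather than the patent's:

```python
import numpy as np

def G2(x):
    """G2 = logical OR of the input components: 0 only when X is all 0."""
    return int(x.any())

def G1(x, r):
    """G1 = 1 only when the input X is not all 0 AND the R-layer
    output r is all 0 (X0 AND NOT R0); otherwise 0."""
    return int(x.any() and not r.any())

def compare_layer(x, t_j, g1):
    """2/3 'majority vote' rule: node i outputs 1 when at least two
    of the three signals (x_i, t_ij, G1) are 1."""
    return ((x + t_j + g1) >= 2).astype(int)

x = np.array([1, 0, 1, 1, 0])
r = np.zeros(3, dtype=int)                            # start-up: no winner yet
print(compare_layer(x, np.zeros(5, int), G1(x, r)))   # -> C = X
```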

Sixth, the flow of ART calculation model operation is given below:

Input: an input sample X = [x_1, ..., x_i, ..., x_n]

Reference threshold: similarity ρ

The process is as follows:

and (3) outputting: compete for the correct RjInputting: training set D ═ { X, Y }; x ═ X1,X2,...,XK],Y=[Y1,Y2,...,YK]

Learning rate n

The process is as follows:

and (3) outputting: multilayer feedforward neural network for determining connection weight and threshold value F (x)

Specifically, at run time the network accepts an input sample from the environment and checks the degree of matching between the input sample and all classes of the R layer; for the class with the highest matching degree, the network goes on to examine the similarity between that class's typical vector and the current input pattern. The similarity is judged against a pre-designed reference threshold, and no more than two cases can occur:

(1) If the similarity exceeds the reference threshold, the pattern class is selected as the representative class of the current input pattern. The weight adjustment rule is that only the pattern class whose similarity exceeds the reference threshold adjusts its corresponding inner and outer weight vectors, so that a larger similarity is obtained when a sample close to the current input pattern is met in the future; no change is made to the other weight vectors.

(2) If the similarity does not exceed the threshold, the similarity of the R-layer pattern class with the next-highest matching degree is examined; if it exceeds the reference threshold, the operation returns to case 1, otherwise it falls to case 2 again. If the operation returns to case 2 repeatedly, it means that in the end no pattern class has a similarity with the current input pattern exceeding the reference threshold; at this point a node representing a new pattern class must be established at the network output to represent and store this pattern so that it can participate in subsequent matching.
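For binary (0/1) pattern vectors, the two cases can be sketched as follows; the similarity measure |T_j AND X| / |X| and the bottom-up normalization constant 0.5 are common ART1 choices assumed here for illustration, not values taken from the patent:

```python
import numpy as np

def similarity(t_j, x):
    """Fraction of the input preserved by the class template T_j
    (both arguments are 0/1 integer arrays)."""
    return (t_j & x).sum() / max(x.sum(), 1)

def case1_update(t_j, x):
    """Case 1: the similarity exceeded the threshold, so pull the outer
    (top-down) and inner (bottom-up) weight vectors of the accepted
    class toward the current input; other classes are untouched."""
    new_t = t_j & x                        # fast learning: intersect template with input
    new_b = new_t / (0.5 + new_t.sum())    # normalized bottom-up weights
    return new_t, new_b
```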

The network performs the above operation for each new input sample it receives. For each input, the operation of the network can be summarized into three stages, namely an identification stage, a comparison stage and a search stage.

(1) Identification phase

Before the network receives an input pattern it is in a waiting state; at this time the input X = 0 and the control signal G2 = 0. The outputs of the R-layer units are therefore all 0, and all nodes have the same chance of winning the competition. When the network inputs are not all 0, G2 is set to 1. Information flows bottom-up and G1 = 1 (the input is not all 0 while the R-layer output is still all 0), so by the 2/3 rule the C-layer output is C = X; C is fed upward and acts on the bottom-up weight vectors B_j to produce a vector T, which is fed into the R layer and starts the internal competition. Assuming the winning node is j, the R-layer output is r_j = 1 and the other node outputs are 0.

(2) Comparison phase

The output information of the R layer returns top-down to the C layer; since r_j = 1, the top-down weight vector T_j connected to node j of the R layer is activated and sent back down to the C layer.

At this time, the R-layer outputs are not all 0, so G1 = 0, and the next output C' of the C layer depends on the top-down weight vector T_j of the R layer and the input pattern X of the network.

The similarity is tested against a pre-specified threshold: if C' carries sufficiently similar information, the competition is correct; otherwise the competition result does not meet the requirement, a Reset signal is issued to invalidate the last winning node, and that node is barred from winning again during the matching of this pattern. The network then enters the search stage.

(3) Search phase

Starting from the moment the Reset signal invalidates the winning node, the network enters the search stage: R is now all 0, G1 = 1, and the current input pattern X is obtained again at the C-layer output. The network therefore re-enters the identification and comparison stages to obtain a new winning node (the previous winning node does not participate in the competition). This repeats until some winning node K is found whose match with the input vector X is sufficient, whereupon the pattern X is assigned to the pattern class connected to R-layer node K, i.e. the bottom-up and top-down weight vectors of that node are modified according to a given method; if all R-layer output nodes have been searched without finding a pattern sufficiently close to X, for example when the network meets X for the first time, an R-layer node is added to represent X or patterns close to X.

If the similarity is greater than ρ, j is accepted as the winning node, and the bottom-up and top-down weight vectors of that R-layer node are modified so that inputs similar to X are matched more easily later; the R-layer nodes suppressed by Reset signals are restored, and the network turns to the comparison stage to await the next input. Otherwise a Reset signal is issued, r_j is set to 0 (the node is not allowed to participate in the competition), and the search stage begins.
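Putting the three stages together, one run of the model over the existing classes might look like the following sketch; B and T hold the bottom-up and top-down vectors as NumPy arrays (T as 0/1 integers), and a return value of -1 signals that every node was Reset, so the caller should create a new R-layer node for X:

```python
import numpy as np

def art_classify(x, B, T, rho):
    """Identification -> comparison -> search loop described above."""
    disabled = set()                       # nodes invalidated by Reset
    while len(disabled) < len(B):
        # Identification: competition among the remaining nodes.
        scores = [(-np.inf if j in disabled else float(b @ x))
                  for j, b in enumerate(B)]
        j = int(np.argmax(scores))
        # Comparison: vigilance test of the template T_j against X.
        if (T[j] & x).sum() / max(x.sum(), 1) >= rho:
            return j                       # accepted winning node
        disabled.add(j)                    # Reset: continue searching
    return -1                              # no existing class is close enough
```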

The research flow of the intelligent transformer is given as follows:

1. Taking transformers produced in a certain factory as an example, whose parameters are shown in Table 1 below, transformers from the same production line are sampled and type tests of ordinary transformers are carried out. Comprehensive analysis of the test data yields the transformer mathematical model T_j[t] = f(X_i). Taking the resistance-measurement and error-measurement tests as examples, the statistical data are shown in Table 2; analysing the data of Table 2 gives the relationship between the transformer resistance and the secondary current and temperature as:

R = (0.053592T + 12.5676) × e^(I-1)    (1)

the relationship between phase difference and composite error is:

y = 2.8222x^6 - 34.685x^4 + 164.53x^3 - 385.75x^2 + 446.73x - 204.13    (2)

the relationship between the composite error and the deviation is:

y = 8128.2x^4 - 5171.3x^3 + 1201.2x^2 - 122.62x + 4.5483    (3)

the relationship between the load and the current ratio difference is:

y = 0.0474ln(x) - 0.666    (4)

the relationship between the load and the current phase difference is as follows:

y = 2.3215x - 0.485    (5)

the relationship between the current and the terminal voltage is:

y = 271.53ln(x) + 1223.8    (6)

Formulas (1)-(6) form the mathematical model T_j[t] = f(X_i) of the transformer type data, so a complete parameter model of the transformer can be obtained from the temperature and current values detected in real time.
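As an illustration, expanding a real-time reading into derived parameters via these fits could look like the sketch below; the hypothetical function derived_parameters applies only Eqs. (1) and (6), the coefficients follow the reconstructed formulas above, and treating x in Eq. (6) as the secondary current is an assumption:

```python
import math

def derived_parameters(T, I):
    """Expand a real-time reading (temperature T in degC, secondary
    current I) into derived quantities via the fitted formulas."""
    R = (0.053592 * T + 12.5676) * math.exp(I - 1)                      # Eq. (1)
    V_term = 271.53 * math.log(I) + 1223.8 if I > 0 else float("nan")   # Eq. (6)
    return {"resistance": R, "terminal_voltage": V_term}

print(derived_parameters(21.0559, 0.0133))
```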

TABLE 1

2. Data such as current, temperature and operating-state classification during normal operation of ordinary transformers are collected, and a data set of multi-index parameters is obtained by combining the mathematical model, as shown in Table 4. The data set is divided into a training set and a test set by random sampling (the grey-background data form the training set), and the weights of each layer of the neural network and the connection weights between layers are obtained by the BP algorithm to determine the feedforward neural network. The determined feedforward neural network comprises an input layer of 10 neurons, a hidden layer of 10 neurons and an output layer of 2 outputs; Table 5 gives the percentage error rate of the first 10 training passes and the input-layer weights, and Table 6 gives the output-layer weights.

TABLE 2

TABLE 3

TABLE 4

TABLE 5 input layer weights

0.14311935 0.10318176 -0.03177137 -0.09643330 0.00450989 -0.03802635 0.11351944 -0.07867491 -0.00936122 0.03335282
0.16324853 0.00187474 -0.08726486 0.10232168 0.04734760 -0.09979746 0.16389850 0.19311419 0.12408689 0.16086638
-0.07464349 0.09193270 0.15953532 0.07359357 -0.01114291 -0.15971952 -0.02633127 0.04435479 0.16520442 0.18664255
0.18159131 0.14612397 -0.09580308 0.12201113 0.01947972 -0.19438332 0.08788187 -0.04047058 0.12993799 0.06726128
-0.19952562 -0.00256885 0.14704111 -0.10243565 -0.06991825 0.14818849 -0.12357316 0.02700430 -0.10455363 0.18701610
0.12138683 -0.02081217 -0.16782167 -0.07197816 0.00317626 0.17313353 -0.15637686 0.02050690 0.08262456 0.01897636
0.12573770 0.01611344 0.18553542 0.04127425 0.03504683 -0.02200439 0.03851474 -0.04603954 0.03026041 -0.08386820
-0.12337706 -0.12530819 0.04510927 0.06266376 -0.00938760 -0.16407026 0.10304157 0.15070815 0.16935241 0.13698409
0.15932320 0.16923298 0.01623997 -0.04348158 0.08211336 -0.08974635 0.12465148 0.13979439 0.15801559 0.03592047
0.17986211 0.03187800 -0.01977476 0.06409815 0.19850314 0.16677649 0.11733003 -0.16705080 0.04511324 -0.00542232
0.05231305 0.13803103 -0.10278575 0.09259569 -0.15314628 -0.11181579 0.11783319 -0.06698554 0.12636524 -0.15975699

3. The network is deployed in the intelligent transformer as the R layer of the ART neural network, and the intelligent transformer is applied in an actual circuit: the intelligent transformer analyses the acquired current parameters and its own temperature as samples with the ART neural network to obtain the working state and life/ageing condition of the transformer at that moment. If the analysis result exists in the original sample library, the acquired parameters and samples are stored in the memory after iteration of the weight parameters and are then output through communication; if the analysis result does not exist in the original sample library, it is added to the sample library as a new class, the acquired parameters and samples are stored in the memory after iteration of the weight parameters and output through communication, and the new sample library is used for the intelligent analysis of newly acquired samples. If the analysis result indicates faulty operation or a short life expectancy, warning or alarm information is sent with priority through the communication bus. For example, a measured average temperature and current of [21.0559, 0.0133] gives, through the mathematical model, the full-parameter sample [21.0559, 0.0133, 5.1059, 750.7928, 0.0679, 0.4527, -0.193, 3.4095, 0.2123, -0.1725]; the ART calculation model returns [0, 0.9], indicating that the transformer works normally with a predicted 0.9 of the full life cycle remaining. A measured average temperature and current of [25.3549, 0.9736] gives the full-parameter sample [25.3549, 0.9736, 13.5636, 1916.5353, 13.2055, 88.0366, 0.0547, 0.2646, 0.2123, -0.1725]; the ART calculation model returns [2, 0.2], indicating that the transformer is working under overload with a predicted 0.2 of the full life cycle remaining.

TABLE 6 output layer weights

0.06546346 0.05629297
-0.79611583 0.85110899
0.61711617 -0.41885297
1.74530442 -1.33756798

The above-mentioned embodiments are only for convenience of description and are not intended to limit the invention in any way; those skilled in the art will understand that the technical features of the invention may be modified or replaced with equivalents without departing from the scope of the invention.
