CNN-based self-supervised voltage sag source identification method

Document No.: 1612525    Publication date: 2020-01-10

Reading note: This technology, "CNN-based self-supervised voltage sag source identification method", was designed and created by Zheng Jianyong, Li Danqi, Mei Fei, Sha Haoyuan, and Li Taoran on 2019-09-16. The invention discloses a CNN-based self-supervised voltage sag source identification method, which comprises the following steps: collecting voltage sag data and preprocessing the voltage sag data; constructing a convolutional encoder and a convolutional decoder on the basis of an autoencoder to establish a CNN (convolutional neural network) self-supervised model, so that the model extracts features with convolutional layers and pooling layers and classifies with a BP (back-propagation) classification network; dividing the preprocessed voltage sag data into a training set and a test set, and inputting the training set into the model in batches to train the model's feature extraction and classification capabilities; and inputting the test set into the trained model to identify the voltage sag sources in the test set. With this method, the voltage sag source can be identified accurately.

1. A CNN-based self-supervision voltage sag source identification method is characterized by comprising the following steps:

s10, collecting voltage sag data, and preprocessing the voltage sag data; the voltage sag data comprise short-circuit fault data, transformer switching data and motor operation data caused by starting of a motor;

s20, constructing a convolution encoder and a convolution decoder on the basis of an automatic encoder to establish a CNN (convolutional neural network) self-supervision model, so that the CNN self-supervision model adopts a convolutional layer and a pooling layer to extract features, and adopts a BP (back propagation) classification network for classification;

s30, dividing the preprocessed voltage sag data into a training set and a test set, inputting the training set into the CNN self-supervision model in batches to train the feature extraction capability and classification capability of the CNN self-supervision model;

and S40, inputting the test set into the trained CNN self-supervision model to identify the voltage sag source of the test set.

2. The CNN-based self-supervised voltage sag source identification method according to claim 1, wherein the collecting and preprocessing of the voltage sag data comprises:

acquiring motor operation data caused by motor starting, transformer switching data, and short-circuit fault data; the short-circuit fault data comprise three-phase short-circuit data, single-phase grounding data and two-phase interphase short-circuit data;

acquiring sag waveform data corresponding to the motor operation data, the transformer switching data and the short-circuit fault data respectively;

filtering the sag waveform data; extracting the voltage sag depression domain of the filtered waveform by wavelet transformation; taking each phase of the three-phase voltage in the sag domain as a row, so that each sample forms a 3 × 3s two-dimensional matrix; standardizing the matrix so that every element lies in the interval [0, 1]; and repeating each row of data 3 times to obtain a 9 × 3s sample matrix, wherein s denotes the number of data types included in the voltage sag data.

3. The CNN-based self-supervision voltage sag source identification method according to claim 2, wherein the single-phase ground data comprises a-phase ground data, B-phase ground data, and C-phase ground data; the two-phase interphase short circuit data includes AB interphase short circuit data, BC interphase short circuit data, and CA interphase short circuit data.

4. The CNN-based self-supervised voltage sag source identification method according to claim 3, wherein the constructing a convolutional encoder and a convolutional decoder on the basis of the autoencoder to establish a CNN self-supervised model, so that the model extracts features with convolutional layers and pooling layers and classifies with a BP classification network, comprises:

determining the data dimension of each link in the CNN self-supervision model, and establishing a general framework;

establishing a convolutional encoder based on the basic structures of the CNN and the autoencoder: constructing a convolutional layer whose convolution kernels perform convolution operations on the input samples to extract features, outputting the feature mapping through an activation function, and constructing a pooling layer that scales the feature mapping with an average pooling operator to obtain the final features;

establishing a BP neural network as a classifier; the classification process of the classifier comprises forward propagation of signals and backward propagation of errors;

adopting a gradient descent algorithm, with minimization of the mean squared error between the network's actual output and its expected output as the objective function;

a convolutional decoder is constructed.

5. The CNN-based self-supervision voltage sag source identification method according to any one of claims 2 to 4, wherein the dividing the preprocessed voltage sag data into a training set and a test set, inputting the training set into the CNN self-supervision model in batch to train feature extraction capability and classification capability of the CNN self-supervision model comprises:

dividing all sample matrixes into a training set and a testing set, and inputting the training set into the CNN self-supervision model in batches;

performing convolution operations between the input and the convolution kernels of the convolutional layer, obtaining the feature mapping after activation by the activation function, then compressing the feature mapping through a pooling layer and adding a bias (offset) term to obtain the final features;

performing sample reconstruction by adopting a convolutional decoder of a CNN (convolutional neural network) self-supervision model according to the final characteristics to obtain a first reconstructed sample;

continuously updating a convolution kernel in a convolution encoder and each weight in a convolution decoder in an iteration process by using the error between the first reconstruction sample and the training set;

classifying the features extracted from the pooling layer in the last iteration process through a BP network to obtain a weight label corresponding to each sample;

and calculating the weight label and each sag standard waveform in the information base to obtain a second reconstruction sample, and reversely adjusting the weight among each unit in the BP network in an iterative process by using the error between the second reconstruction sample and the training set to finish the training of the CNN self-supervision model.

6. The CNN-based self-supervision voltage sag source identification method according to claim 5, wherein the inputting the test set into a trained CNN self-supervision model to perform voltage sag source identification on the test set comprises:

inputting the test set into a CNN self-supervision model, performing feature extraction and voltage sag source identification on the test set by using the trained model, and verifying the accuracy of the voltage sag source identification.

Technical Field

The invention relates to the field of electric energy quality disturbance source identification, in particular to a self-supervision voltage sag source identification method based on CNN.

Background

With the increasing automation and intelligence of industrial equipment and building electrical systems, the problem of voltage sag has become more and more significant for the production and operation of large industrial and commercial users. Industries that rely heavily on power electronic equipment, such as semiconductor manufacturing, precision instrument processing and automobile manufacturing, are especially sensitive to voltage sags: power electronic equipment may trip and stop operating when the effective (RMS) voltage drops below 90% for more than 1-2 cycles. Voltage sag is a common power quality problem caused by motor starting, transformer switching, short-circuit faults and the like. Production interruptions and delays caused by voltage sag disturbances show a clear rising trend, the direct and indirect economic losses they cause grow ever more serious, and higher requirements are therefore placed on power supply quality. Because the voltage waveforms caused by different sag sources differ, accurately identifying the sag source makes it possible to analyze, compensate and suppress local voltage sag conditions in a targeted manner; it can also serve as a basis for resolving disputes between power supply departments and users, and is an essential step in managing the voltage sag problem.

Voltage sag source identification is a current research hotspot that has attracted many scholars at home and abroad. Existing identification methods mainly comprise information acquisition, feature extraction, sample training and classification identification. Research has focused on the waveform characteristics of voltage sags: reasonable feature quantities are extracted and large numbers of samples are trained for sag source identification, and many results have been obtained. The basic idea of these algorithms is to convert the time-domain sag characteristics into frequency-domain characteristics using methods such as the dual wavelet transform, the Prony method and the S-transform, extract feature quantities according to manually designed feature items, and then identify the sag source with classification models such as neural networks or support vector machines. These existing methods share the following problem: manually designed features require a certain prior understanding of the data to be extracted, and the target features are selected by expert experience before the sag features are extracted by various means. In actual engineering, a large amount of unknown interference exists, and fixed expert experience degrades the accuracy of feature extraction and identification, so traditional voltage sag source identification schemes suffer from low accuracy.

Disclosure of Invention

Aiming at the problems, the invention provides a self-supervision voltage sag source identification method based on CNN.

In order to achieve the purpose of the invention, the invention provides a CNN-based self-supervision voltage sag source identification method, which comprises the following steps:

s10, collecting voltage sag data, and preprocessing the voltage sag data; the voltage sag data comprise short-circuit fault data, transformer switching data and motor operation data caused by starting of a motor;

s20, constructing a convolution encoder and a convolution decoder on the basis of an automatic encoder to establish a CNN (convolutional neural network) self-supervision model, so that the CNN self-supervision model adopts a convolutional layer and a pooling layer to extract features, and adopts a BP (back propagation) classification network for classification;

s30, dividing the preprocessed voltage sag data into a training set and a test set, inputting the training set into the CNN self-supervision model in batches to train the feature extraction capability and classification capability of the CNN self-supervision model;

and S40, inputting the test set into the trained CNN self-supervision model to identify the voltage sag source of the test set.

In one embodiment, the collecting voltage sag data and the preprocessing the voltage sag data include:

acquiring motor operation data, transformer switching data and short-circuit fault data caused by starting of the motor; the short-circuit fault data comprise three-phase short-circuit data, single-phase grounding data and two-phase interphase short-circuit data;

acquiring sag waveform data corresponding to the motor operation data, the transformer switching data and the short-circuit fault data respectively;

filtering the sag waveform data; extracting the voltage sag depression domain of the filtered waveform by wavelet transformation; taking each phase of the three-phase voltage in the sag domain as a row, so that each sample forms a 3 × 3s two-dimensional matrix; standardizing the matrix so that every element lies in the interval [0, 1]; and repeating each row of data 3 times to obtain a 9 × 3s sample matrix, wherein s denotes the number of data types included in the voltage sag data.

As one embodiment, the single-phase ground data includes a-phase ground data, B-phase ground data, and C-phase ground data; the two-phase interphase short circuit data includes AB interphase short circuit data, BC interphase short circuit data, and CA interphase short circuit data.

As an embodiment, the building a convolutional encoder and a convolutional decoder on the basis of an automatic encoder to build a CNN self-supervision model, so that the CNN self-supervision model adopts convolutional layers and pooling layers to extract features, and the classifying by using a BP classification network includes:

determining the data dimension of each link in the CNN self-supervision model, and establishing a general framework;

establishing a convolutional encoder based on the basic structures of the CNN and the autoencoder: constructing a convolutional layer whose convolution kernels perform convolution operations on the input samples to extract features, outputting the feature mapping through an activation function, and constructing a pooling layer that scales the feature mapping with an average pooling operator to obtain the final features;

establishing a BP neural network as a classifier; the classification process of the classifier comprises forward propagation of signals and backward propagation of errors;

adopting a gradient descent algorithm, with minimization of the mean squared error between the network's actual output and its expected output as the objective function;

a convolutional decoder is constructed.

In one embodiment, the dividing the preprocessed voltage sag data into a training set and a test set, and inputting the training set into the CNN self-supervision model in batch to train the feature extraction capability and classification capability of the CNN self-supervision model includes:

dividing all sample matrixes into a training set and a testing set, and inputting the training set into the CNN self-supervision model in batches;

performing convolution operations between the input and the convolution kernels of the convolutional layer, obtaining the feature mapping after activation by the activation function, then compressing the feature mapping through a pooling layer and adding a bias (offset) term to obtain the final features;

performing sample reconstruction by adopting a convolutional decoder of a CNN (convolutional neural network) self-supervision model according to the final characteristics to obtain a first reconstructed sample;

continuously updating a convolution kernel in a convolution encoder and each weight in a convolution decoder in an iteration process by using the error between the first reconstruction sample and the training set;

classifying the features extracted from the pooling layer in the last iteration process through a BP network to obtain a weight label corresponding to each sample;

and calculating the weight label and each sag standard waveform in the information base to obtain a second reconstruction sample, and reversely adjusting the weight among each unit in the BP network in an iterative process by using the error between the second reconstruction sample and the training set to finish the training of the CNN self-supervision model.

As an embodiment, the inputting the test set into the trained CNN self-supervision model to perform voltage sag source identification on the test set includes:

inputting the test set into the CNN self-supervised model, performing feature extraction and voltage sag source identification on the test set with the trained model, and verifying the accuracy of the voltage sag source identification.

According to the CNN-based self-supervised voltage sag source identification method, voltage sag data are collected and preprocessed, and a convolutional encoder and a convolutional decoder are built on the basis of an autoencoder to establish a CNN self-supervised model that extracts features with convolutional and pooling layers and classifies with a BP classification network. The training set is input into the model in batches so that its feature extraction and classification capabilities can be trained; the test set is then input into the trained model so that voltage sag source identification can be performed on it, and the voltage sag source can be identified accurately. The whole training process requires no large pre-labeled training set, which solves the problem in traditional methods that an unknown sag waveform cannot be correctly identified when monitored, and provides the preconditions for network learning and identification.

Drawings

FIG. 1 is a flowchart of an embodiment of the CNN-based self-supervised voltage sag source identification method;

FIG. 2 is a flowchart of another embodiment of the CNN-based self-supervised voltage sag source identification method;

FIG. 3 is an exemplary waveform diagram of various sag types of one embodiment;

FIG. 4 is a voltage sag category grayscale diagram of an embodiment;

FIG. 5 is a schematic diagram of an autoencoder model of an embodiment;

FIG. 6 is a structural framework of the self-supervised CNN model of an embodiment;

FIG. 7 is a schematic diagram of convolution kernel graying according to one embodiment;

FIG. 8 is a sample diagram of a voltage sag caused by a phase-B ground fault according to one embodiment;

FIG. 9 is a feature mapping diagram of an embodiment;

FIG. 10 is a schematic diagram of the final features after average pooling of one embodiment.

Detailed Description

In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.

Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.

Referring to fig. 1, fig. 1 is a flowchart of an embodiment of the CNN-based self-supervised voltage sag source identification method, comprising the following steps:

s10, collecting voltage sag data, and preprocessing the voltage sag data; the voltage sag data comprise short-circuit fault data, transformer switching data and motor operation data caused by starting of the motor.

The voltage sag data include voltage sag records caused by various short-circuit faults, transformer switching and motor starting. Preprocessing the voltage sag data may include filtering and denoising the sag waveforms, extracting the sag depression domain, and performing standardization to unify the time-series length.

S20, building a convolution encoder and a convolution decoder on the basis of the automatic encoder to build a CNN self-supervision model, so that the CNN self-supervision model adopts a convolution layer and a pooling layer to extract features, and adopts a BP classification network to classify.

In this step, a convolutional encoder and a convolutional decoder are built on the basis of an autoencoder to establish a CNN self-supervised model, which can extract features with convolutional and pooling layers and classify with a BP classification network. The CNN self-supervised model can also correct its weights using the error between the reconstructed samples and the input samples (e.g., the input training set).

And S30, dividing the preprocessed voltage sag data into a training set and a test set, and inputting the training set into the CNN self-supervision model in batches to train the feature extraction capability and classification capability of the CNN self-supervision model.

This step trains the CNN self-supervised model so that the trained model can be used directly to identify voltage sag sources.

And S40, inputting the test set into the trained CNN self-supervision model to identify the voltage sag source of the test set.

This step quickly and accurately identifies the voltage sag sources in the test set; the identification accuracy can also be verified in order to further validate and optimize the CNN self-supervised model.

According to the CNN-based self-supervised voltage sag source identification method, voltage sag data are collected and preprocessed, and a convolutional encoder and a convolutional decoder are built on the basis of an autoencoder to establish a CNN self-supervised model that extracts features with convolutional and pooling layers and classifies with a BP classification network. The training set is input into the model in batches so that its feature extraction and classification capabilities can be trained; the test set is then input into the trained model so that voltage sag source identification can be performed on it, and the voltage sag source can be identified accurately. The whole training process requires no large pre-labeled training set, which solves the problem in traditional methods that an unknown sag waveform cannot be correctly identified when monitored, and provides the preconditions for network learning and identification.

In a specific example, the flow of the CNN-based self-supervised voltage sag source identification method may also refer to fig. 2, where the method includes:

step one, data preprocessing: voltage sag data caused by various short circuit faults, transformer switching and motor starting are collected. And filtering and denoising the sag waveform to be matched, extracting a notch domain, unifying the time sequence length, and performing standardized calculation.

Step two, establishing a CNN self-supervision model: a convolutional encoder and a convolutional decoder are constructed on the basis of an automatic encoder, features are extracted by adopting a convolutional layer and a pooling layer, and classification is carried out by adopting a BP classification network. The corresponding model corrects the weight by the error of the reconstructed sample and the input sample.

Step three, training a CNN self-supervision model: dividing all sag data into a training set and a test set, inputting the training set data into a model in batch, and training the feature extraction capability and classification capability of the CNN.

Step four, identifying the type of the sag source: inputting the test set data into a CNN self-supervision model, and performing feature extraction and voltage sag source identification on the test set data by using the trained model.

In an embodiment, the collecting voltage sag data and preprocessing the voltage sag data includes:

acquiring motor operation data caused by motor starting, transformer switching data, and short-circuit fault data; the short-circuit fault data comprise three-phase short-circuit data, single-phase grounding data and two-phase interphase short-circuit data;

acquiring sag waveform data corresponding to the motor operation data, the transformer switching data and the short-circuit fault data respectively;

filtering the sag waveform data; extracting the voltage sag depression domain of the filtered waveform by wavelet transformation; taking each phase of the three-phase voltage in the sag domain as a row, so that each sample forms a 3 × 3s two-dimensional matrix; standardizing the matrix so that every element lies in the interval [0, 1]; and repeating each row of data 3 times to obtain a 9 × 3s sample matrix, wherein s denotes the number of data types included in the voltage sag data; the value of s may be 9.

As an embodiment, the single-phase ground data includes a-phase ground data, B-phase ground data, and C-phase ground data; the two-phase interphase short circuit data includes AB interphase short circuit data, BC interphase short circuit data, and CA interphase short circuit data.

In this embodiment, time-domain voltage sag monitoring signals caused by motor starting, transformer switching, and different types of short-circuit faults (such as single-phase grounding, two-phase short circuit and three-phase short circuit) can be obtained as the voltage sag data. According to the sag source, the voltage sag data can be divided into 9 types in total: motor starting data, transformer switching data, three-phase short-circuit data, single-phase grounding data (A-phase, B-phase and C-phase grounding), and two-phase interphase short-circuit data (AB, BC and CA interphase short circuits).

Further, after the corresponding sag waveform data are obtained, they are filtered and denoised; the voltage sag depression domain is extracted by wavelet transformation; each phase of the three-phase voltage is taken as a row and each sample is normalized into a 3 × 27 two-dimensional matrix; each element is standardized into [0, 1]; and each row of data is repeated 3 times to obtain a 9 × 27 sample matrix, completing the preprocessing of the voltage sag data. The sample matrix is the result of preprocessing the voltage sag data.

In one example, the 9 types of voltage sag data may be denoted S1 through S9, with typical waveforms as shown in fig. 3. The obtained sag waveform data are filtered and denoised, and the voltage sag depression domain is extracted by wavelet transformation. Each phase of the three-phase voltage is taken as a row, and each sample is normalized into a 3 × 27 two-dimensional matrix with every element in [0, 1]. Each row of data is repeated 3 times, resulting in a 9 × 27 sample matrix. The purpose of the 9 × 27 sample matrix is to allow more convolution operations between the convolution kernel and the input samples (such as the training or test set), so that the sample features are extracted more fully.
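The normalization and row-replication described above can be sketched as follows (an illustrative NumPy snippet, not the patent's implementation; min-max scaling is assumed as the standardization, since the exact formula is not given):

```python
import numpy as np

def build_sample_matrix(three_phase: np.ndarray) -> np.ndarray:
    """Turn a 3 x 27 three-phase sag segment into a 9 x 27 sample matrix.

    three_phase: array of shape (3, 27), one row per phase (A, B, C).
    """
    assert three_phase.shape == (3, 27)
    # Min-max scale every element into [0, 1] (assumed standardization).
    lo, hi = three_phase.min(), three_phase.max()
    scaled = (three_phase - lo) / (hi - lo)
    # Repeat each row 3 times -> row order A A A B B B C C C.
    return np.repeat(scaled, 3, axis=0)

sample = build_sample_matrix(np.random.rand(3, 27))
print(sample.shape)  # (9, 27)
```

Replicating each row triples the number of vertical positions a small convolution kernel can occupy, which is the stated purpose of the 9 × 27 layout.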

Further, the corresponding samples may be converted to grayscale images for observation; FIG. 4 shows the typical voltage sag types after graying.

As an embodiment, the above building a convolutional encoder and a convolutional decoder on the basis of an automatic encoder to build a CNN self-supervision model, so that the CNN self-supervision model adopts convolutional layers and pooling layers to extract features, and the classifying by using a BP classification network includes:

determining the data dimension of each link in the CNN self-supervision model, and establishing a general framework;

establishing a convolutional encoder based on the basic structures of the CNN and the autoencoder: constructing a convolutional layer whose convolution kernels perform convolution operations on the input samples to extract features, outputting the feature mapping through an activation function, and constructing a pooling layer that scales the feature mapping with an average pooling operator to obtain the final features;

establishing a BP neural network as a classifier; the classification process of the classifier comprises forward propagation of signals and backward propagation of errors;

adopting a gradient descent algorithm, with minimization of the mean squared error between the network's actual output and its expected output as the objective function;

a convolutional decoder is constructed.

The embodiment can determine the data dimension of each link in the CNN self-supervision model according to the actual requirements (such as sample size, convolution kernel structure, selected pooling algorithm, classification network structure, category total number, effect of practical application examples, and the like) to establish the overall framework of the CNN self-supervision model. The convolutional decoder of the CNN auto-supervised model may perform sample reconstruction using the final features obtained by scaling the feature mapping to obtain corresponding reconstructed samples, and then guide the weight correction process of the convolutional kernel in the convolutional encoder and the convolutional decoder using the error between the reconstructed samples and the original samples (e.g., the input training set).

In one example, the convolution kernel is a two-dimensional matrix that is continuously updated over the iterations of training, in a process similar to the weight update of an ordinary neural network. If the input sample size and the convolution kernel size are known, the feature size can be expressed by formula (1).

$$p = \frac{i - n}{h} + 1,\qquad q = \frac{j - n}{h} + 1 \tag{1}$$

wherein $S_{i \times j}$ denotes an input sample matrix of size $i \times j$; $K_{n \times n}$ denotes a convolution kernel of size $n \times n$; the resulting feature map is of size $p \times q$; and $h$ denotes the step size with which the kernel slides over the sample. If the sliding step is fixed at 1, the convolution operation can be expressed by formula (2):

$$F(u, v) = \sum_{a=1}^{n} \sum_{b=1}^{n} S(u + a - 1,\; v + b - 1)\, K(a, b) \tag{2}$$

wherein $S$ is the input sample; $K_{n \times n}$ is the convolution kernel; and $F(u, v)$ is the element in row $u$, column $v$ of the feature map.
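Formulas (1)–(3) can be illustrated with a minimal NumPy sketch (not the patent's implementation; explicit loops are used for clarity rather than speed):

```python
import numpy as np

def sigmoid(x):
    """Activation function of formula (3)."""
    return 1.0 / (1.0 + np.exp(-x))

def conv2d_valid(S, K, h=1):
    """Slide the n x n kernel K over sample S with step h.

    The output size follows formula (1); each element follows formula (2).
    """
    i, j = S.shape
    n = K.shape[0]
    p, q = (i - n) // h + 1, (j - n) // h + 1  # feature size, formula (1)
    F = np.empty((p, q))
    for u in range(p):
        for v in range(q):
            # Element-wise product of the kernel and the window, then sum.
            F[u, v] = np.sum(S[u*h:u*h + n, v*h:v*h + n] * K)
    return F

# A 9 x 27 sample and a 3 x 3 kernel with stride 1 give a 7 x 25 feature map.
feature = sigmoid(conv2d_valid(np.random.rand(9, 27), np.random.rand(3, 3)))
print(feature.shape)  # (7, 25)
```

With the 9 × 27 sample matrix of the preprocessing step and a 3 × 3 kernel, formula (1) gives a 7 × 25 feature map, matching the shape printed above.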

The present example employs the sigmoid function as the activation function of the CNN; the functional expression is shown in equation (3).

f(x) = 1 / (1 + e^(−x))    (3)

A pooling layer is then constructed, and an average pooling operator is adopted to scale the feature mapping. Average pooling outputs the mean of the elements in the perception domain and is expressed by formula (4):

P(i, j) = (1/W²) Σ_{u=(i−1)W+1}^{iW} Σ_{v=(j−1)W+1}^{jW} a(u, v)    (4)

wherein a(u, v) represents the value in the uth row and vth column of the input matrix of the pooling layer (the input matrix is the feature mapping after activation by the activation function); P(i, j) represents the value in the ith row and jth column of the output matrix of the pooling layer; and W represents the side length of the participating pooling region.
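The average pooling of formula (4) can be sketched as follows. This is a minimal NumPy sketch; the 7 × 25 input map and the 1 × 5 window mirror the dimensions used later in the example, and the map contents are illustrative.

```python
import numpy as np

def avg_pool(A, wr, wc):
    """Non-overlapping average pooling with a wr x wc perception domain (formula (4))."""
    r, c = A.shape
    trimmed = A[:r - r % wr, :c - c % wc]              # drop rows/columns that do not fill a window
    return trimmed.reshape(r // wr, wr, c // wc, wc).mean(axis=(1, 3))

A = np.arange(7 * 25, dtype=float).reshape(7, 25)  # a 7 x 25 feature map, as in the example
P = avg_pool(A, 1, 5)                              # 1 x 5 pooling -> 7 x 5 output
```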

Further, this example may also adjust the neuron weights by training the AE with the error between the original input and the reconstructed input over multiple iterations, with the loss-function minimization criterion characterized by equation (5):

min J(W, U) = ‖X − U · φ(W · X)‖² + λ · R(W, U)    (5)

where X is the input signal (the final feature); W is the encoding weight; U is the decoding weight; φ is a nonlinear activation function; J is the loss function for neuron weight adjustment; R is a regularization function; and λ is the regularization term coefficient.

According to this example, the convolution kernels in the encoder and the weights in the decoder are updated in each iteration according to the error magnitude and the error trend; when the error falls below a preset threshold, a good solution has been reached, the encoder and decoder are optimal, and the iteration ends.
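The iterative update of the encoder and decoder weights under the loss of formula (5) can be sketched with a simple fully connected autoencoder. This is an illustrative stand-in: tanh is used in place of sigmoid, and the data, dimensions, learning rate, and regularization coefficient are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 10))            # 20 hypothetical input signals of dimension 10

phi = np.tanh                             # stand-in nonlinear activation
W = rng.normal(scale=0.1, size=(10, 4))   # encoding weights (10 -> 4 bottleneck)
U = rng.normal(scale=0.1, size=(4, 10))   # decoding weights (4 -> 10)
lam, lr = 1e-4, 0.5

def loss(W, U):
    R = phi(X @ W) @ U                    # reconstruction U * phi(W X)
    return np.mean((X - R) ** 2) + lam * (np.sum(W ** 2) + np.sum(U ** 2))

loss_before = loss(W, U)
for _ in range(500):
    H = phi(X @ W)
    R = H @ U
    G = 2.0 * (R - X) / X.size            # gradient of the mean squared error w.r.t. R
    dU = H.T @ G + 2 * lam * U
    dH = (G @ U.T) * (1 - H ** 2)         # tanh derivative
    dW = X.T @ dH + 2 * lam * W
    W -= lr * dW                          # weight correction guided by reconstruction error
    U -= lr * dU
loss_after = loss(W, U)                   # should fall below the initial loss
```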

Specifically, the above-described automatic encoder model may be described with reference to fig. 5.

In an embodiment, dividing the preprocessed voltage sag data into a training set and a test set, and inputting the training set into the CNN self-supervision model in batches to train the feature extraction capability and classification capability of the CNN self-supervision model, includes:

dividing all sample matrixes into a training set and a testing set, and inputting the training set into the CNN self-supervision model in batches;

performing a convolution operation in the convolutional layer between the training set and the convolution kernels of the convolutional layer, activating with an activation function (such as the sigmoid function) to obtain the feature mapping, compressing the size of the feature mapping through the pooling layer, and compensating the offset center, to obtain the final features;

performing sample reconstruction with the convolutional decoder of the CNN self-supervision model according to the final features, to obtain a first reconstructed sample;

continuously updating a convolution kernel in a convolution encoder and each weight in a convolution decoder in an iteration process by using the error between the first reconstruction sample and the training set;

classifying the features extracted from the pooling layer in the last iteration process through a BP network to obtain a weight label corresponding to each sample;

and weighting each sag standard waveform in the information base by the corresponding element of the weight label and accumulating them to obtain a second reconstructed sample, and reversely adjusting the weights between the units in the BP network in an iterative process using the error between the second reconstructed sample and the training set, to complete the training of the CNN self-supervision model.

In one example, in the process of training the CNN self-supervision model, the feature mapping is obtained after the convolution operation between a sample (such as the training set) and the convolution kernels of the convolutional layer, activated by the sigmoid activation function; the feature mapping then passes through the pooling layer, which reduces the feature size and compensates the offset of the sag region from the center of the voltage sag data, to obtain the corresponding final features; the convolutional decoder reconstructs a sample from the final features, and the error between the reconstructed sample and the original input (such as the training set) is used to continuously update the convolution kernels in the convolutional encoder and the weights in the convolutional decoder during iteration, finally yielding good features that accurately reflect the intrinsic characteristics of the voltage sag waveform.

Further, the final features can be input into the classification network to obtain an initial classification label of size 1 × N, where N is the number of output-layer units of the BP neural network; this number depends on the total number of classes expected from the classification (before the supervised iterations, the label has no practical significance).

Standard sample waveforms S_n of each sag type are fitted from actual voltage sag waveforms (i.e., the sag standard waveforms; for example, typical sag waveforms in the real-time voltage sag monitoring data of a certain city can be classified, each sag type fitted according to its class, and the fitted waveform taken as the standard sample waveform of that voltage sag type). Each element W_n in the label is regarded as the weight corresponding to standard sample waveform S_n, and the reconstructed sample is constructed by accumulating the standard samples under their corresponding weights, as shown in formula (6):

RS = Σ_{n′=1}^{s} W_{n′} · S_{n′}    (6)

wherein RS is the reconstructed sample waveform; W_{n′} is the n′-th element in the label matrix; S_{n′} is the standard sample waveform of class n′; s is the total number of voltage sag source classes; and n′ is the current sag class.

After the corresponding reconstructed sample is obtained, the classification network is trained in the back-propagation process according to the reconstruction error between the original input sample (such as the training set) and the corresponding reconstructed sample (such as the second reconstructed sample); after multiple iterations, the mean square error between the reconstructed sample and the original sample is minimized. This converts the voltage sag source identification problem into: finding the optimal weights under the condition that the waveform to be identified is represented by a fixed set of standard waveforms. The reconstruction error supervises the back-propagation training of the network, and the reconstructed sample is updated in each iteration so as to gradually approximate the original input sample.
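The reconstruction of formula (6), and the judgment rule by which the largest label element determines the sag source, can be sketched as follows. The three standard sag waveforms, their depths and durations, and the label weights are purely hypothetical.

```python
import numpy as np

t = np.linspace(0.0, 0.2, 200)          # 0.2 s window at a 50 Hz fundamental (illustrative)
base = np.sin(2 * np.pi * 50 * t)

# Three hypothetical standard sag waveforms S_n (sag depth and duration differ per class).
S = np.stack([
    np.where((t > 0.04) & (t < 0.12), 0.5, 1.0) * base,
    np.where((t > 0.06) & (t < 0.10), 0.3, 1.0) * base,
    np.where((t > 0.02) & (t < 0.16), 0.7, 1.0) * base,
])

W_label = np.array([0.9, 0.05, 0.05])   # hypothetical classification-label weights, one per class

RS = W_label @ S                         # reconstructed sample per formula (6)
predicted_class = int(np.argmax(W_label))  # largest element -> identified sag source
```

Here the first weight dominates, so the sample would be judged as belonging to the first sag class.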

As an embodiment, the inputting the test set into the trained CNN self-supervision model to perform voltage sag source identification on the test set includes:

inputting the test set into a CNN self-supervision model, performing feature extraction and voltage sag source identification on the test set by using the trained model, and verifying the accuracy of the voltage sag source identification.

In this embodiment, the test set is input into the CNN self-supervision model, and feature extraction and voltage sag source identification are performed on the test set by using the trained CNN self-supervision model. The sag type corresponding to the largest element in the classification label matrix is the classification network judgment result, so that the identification of the voltage sag source and the verification of the corresponding accuracy are realized.

In the CNN-based self-supervision voltage sag source identification method, a self-supervision CNN voltage sag source identification model is constructed on the basic structures of a convolutional neural network (CNN) and an automatic encoder; voltage sag features are self-extracted by the convolutional layer and pooling layer in the CNN, and manually designed features are replaced by features that reflect the intrinsic characteristics of the data. The three-phase asymmetric sag sources are divided into finer classes according to the fault phase, so that the voltage sag source can be identified and the fault phase accurately judged. The network training process is self-supervised on the principle of the automatic encoder, and neither a large number of training sets nor correct labels need to be input in advance, which makes the method more suitable for actual engineering. Finally, the optimal parameters of the model are selected in the calculation example, a self-supervision CNN model suited to the measured voltage sag data is established, feature extraction and sag source identification are performed on the measured sag data, and the superiority of the method in sag source identification is verified by comparison.

In one embodiment, simulation analysis is performed on the CNN-based self-supervision voltage sag source identification method. The parameters of the CNN self-supervision model in the embodiment are selected according to the optimal parameter simulation experiment result, and the specific parameters are as follows:

the present embodiment builds an auto-supervised CNN model based on the structure of the classical 3-layer CNN model, because the input is a 9 × 27 two-dimensional matrix, and thus the input layer contains 9 × 27 cells. The feature extraction CNN network comprises 1 convolutional layer and 1 pooling layer: the convolutional layer consists of 16 convolutional kernels with the length of 3 multiplied by 3, and the output of the convolutional layer enters the pooling layer after being activated by a sigmod function; the pooling layer uses an average pooling of 1 × 5 sub-matrices to output 16 final 7 × 5 feature matrices. The output of the feature extraction network is converted into a 1 x 560 one-dimensional matrix which is input into a transition layer as an input layer of the classification network, and the transition layer does not contain any other operation. The classification network of the model comprises a hidden layer and an output layer. The hidden layer is provided with 100 nodes, and a sigmod activating function is adopted; the number of output layer units is the number of voltage sag sources, and the present example divides the voltage sag sources into 9 types according to the fault phase, so that the output layer includes 9 units. The self-supervision CNN model established in the present example is a 6-layer model, the structural framework is shown in FIG. 6, and the momentum parameters of the convolution feature extraction network and the classification network are both 1. The calculation example is as follows: the maximum iteration number is 75, and the CNN feature extraction network learning rate and the BP classification network learning rate are obtained.

The 360 measured voltage sag waveform samples (including 40 motor-starting voltage sags, 40 transformer-switching voltage sags, 40 three-phase short-circuit voltage sags, 40 A-phase ground voltage sags, 40 B-phase ground voltage sags, 40 C-phase ground voltage sags, 40 AB-phase short-circuit voltage sags, 40 BC-phase short-circuit voltage sags, and 40 CA-phase short-circuit voltage sags) are preprocessed and input into the CNN model for training, as shown in the flow chart of fig. 6. After the self-supervised training of the convolution kernels is finished, features that reflect the essence of the sample waveforms can be extracted. In this embodiment, the output of the CNN feature extraction network is used as the final feature for voltage sag source identification, and the 16 convolution kernels in layer 2 (the convolutional layer) of the model are shown in grayscale in fig. 7.

Taking as an example an input sample of the voltage sag caused by a B-phase ground fault, the state and output of each layer can be inspected, as shown in fig. 8. The sample is convolved with the convolution kernels of the convolutional layer and activated by the sigmoid activation function to obtain the feature mapping, that is, the 16 features extracted from the input sample by the self-supervision CNN model of this embodiment, shown in fig. 9. The feature mapping then passes through the pooling layer to compress its size and compensate the offset center; the model uses average pooling, and the final output features are shown in fig. 10. It can be seen that the pooling layer compresses the features of the B-phase ground sample from 7 × 25 to 7 × 5, and the key information in the feature is shifted from the left toward the image center.

After training of the classification network on the 360 measured voltage sag waveforms is completed, the 360 corresponding weight labels are obtained; the average weight labels of each sag type are shown in table 1.

Table 1: average classification label for various types of voltage sag sources

(Table 1 is provided as an image in the original document.)

As can be seen from equation (6), the sag type corresponding to the largest element in the classification label matrix is the judgment result of the classification network; for example, the motor-starting voltage sag corresponds to standard sample S1, so W1 is the maximum value in its weight label, which the calculation results in table 1 also verify. The final identification result is judged on the basis of the largest element in the classification label.

The 100 test samples in the test set (including 15 motor-starting voltage sags, 15 transformer-switching voltage sags, 10 three-phase short-circuit voltage sags, 10 A-phase ground voltage sags, 10 B-phase ground voltage sags, 10 C-phase ground voltage sags, 10 AB-phase short-circuit voltage sags, 10 BC-phase short-circuit voltage sags, and 10 CA-phase short-circuit voltage sags) are input into the model, and the accuracy of the voltage sag source identification method based on the self-supervision CNN model is shown in table 2. According to the experimental results, the accuracy of the sag source identification method that extracts voltage sag waveform features with the convolutional layer and classifies with the BP neural network is 97%; the identification accuracy for three-phase short-circuit faults, single-phase ground faults, and two-phase short-circuit faults reaches 100%, and the fault-phase judgment accuracy for single-phase ground faults and two-phase short-circuit faults also reaches 100%.

Table 2: CNN-based self-supervision voltage sag source identification method accuracy

(Table 2 is provided as images in the original document.)

To verify the superiority of the method in feature extraction and classification, this example is compared with the traditional method based on the S-transform and a support vector machine. The traditional method cannot distinguish the fault phase, so the comparison only concerns the accuracy of sag type identification; the results are shown in table 3. The method in the document [Support Vector Machine for Classification of Voltage Disturbances (Axelberg, P.G.V.; Gu, I.Y.; Bollen, M.H.J.)] uses manually extracted features, and the accuracy of that SVM-based sag source identification method is 83%, while the classification accuracy of the self-supervision CNN model proposed herein reaches 97% for each class, far higher than the former, and the fault phase can be identified accurately. The most time-consuming step of the SVM method is feature extraction: target features are set according to expert experience, and the extracted features are then input into the SVM for training. Comparing the two methods, the method proposed herein requires neither labels for the training samples nor a separate feature extraction step, since feature extraction is performed simultaneously with the training of the CNN, so the training step itself takes longer than training the SVM alone. From the results, the method provided by this embodiment has higher accuracy.

Table 3: method for identifying different voltage sag sources with correct rate

(Table 3 is provided as an image in the original document.)

In summary, through analysis of the principle of the self-supervision CNN model and the results of practical examples, it can be known that the superiority of the self-supervision voltage sag source identification method based on CNN is mainly reflected in:

1) On the basis of the convolutional neural network structure, voltage sag features are self-extracted through the convolution operation of the convolutional layer and the pooling action of the pooling layer; manually designed feature extraction is replaced by automatically generated features, which solves the problems that manual feature extraction depends excessively on expert experience, is strongly affected by unknown features, and lacks generality. The method can automatically extract features from a large amount of data, reducing the recognition error rate caused by improper feature extraction in traditional methods.

2) In the traditional method, three-phase waveform samples are converted into one-dimensional matrices before being input into the classification model; here, two-dimensional matrices are input directly, so the information of the fault phase is retained, the sag types are expanded to 9, and fault-phase identification in three-phase asymmetric faults is achieved.

3) The network training process is self-supervised based on the AE principle, a large number of training sets and correct labels do not need to be input in advance in the whole training process, the problem that the unknown sag waveforms cannot be correctly identified in the traditional method is solved, and the method is more suitable for sag source identification requirements of timeliness, practicability, diversity and universality under the modern big data background.

4) Compared with the SVM method and the DBN method, the CNN-based self-supervision voltage sag source identification method reaches an accuracy of up to 97% in identifying measured sag data, achieving higher accuracy.

The technical features of the above embodiments can be combined arbitrarily; for brevity, not all possible combinations of the technical features in the above embodiments are described, but as long as there is no contradiction between them, such combinations should be considered within the scope of this specification.

Therefore, the CNN-based self-supervision voltage sag source identification method provides a new voltage sag source identification method on the basic structures of the CNN and the automatic encoder. Voltage sag features are self-extracted by the convolutional layer and pooling layer in the CNN, manually designed features are replaced by features reflecting the intrinsic characteristics of the data, and the excessive dependence of the original method on expert experience is resolved. The three-phase asymmetric sag sources are divided into finer categories according to the fault phase, and the fault-phase information in the input samples is retained, so that the voltage sag source can be identified and the fault phase accurately judged. The network training process is self-supervised on the AE principle, neither a large number of training sets nor correct labels need to be input in advance, the problem that traditional methods cannot correctly identify unknown monitored sag waveforms is solved, preconditions are provided for network learning and identification, and the method is more suitable for actual engineering.

It should be noted that the terms "first \ second \ third" referred to in the embodiments of the present application merely distinguish similar objects, and do not represent a specific ordering for the objects, and it should be understood that "first \ second \ third" may exchange a specific order or sequence when allowed. It should be understood that "first \ second \ third" distinct objects may be interchanged under appropriate circumstances such that the embodiments of the application described herein may be implemented in an order other than those illustrated or described herein.

The terms "comprising" and "having" and any variations thereof in the embodiments of the present application are intended to cover non-exclusive inclusions. For example, a process, method, apparatus, product, or device that comprises a list of steps or modules is not limited to the listed steps or modules but may alternatively include other steps or modules not listed or inherent to such process, method, product, or device.

The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.
