Abnormal electroencephalogram signal detection method based on Rényi phase transfer entropy and lightweight convolutional neural network

Document No.: 1896201    Publication date: 2021-11-30

Note: this technology, an abnormal electroencephalogram (EEG) signal detection method based on Rényi phase transfer entropy and a lightweight convolutional neural network, was designed and created by Zhang Xiaofeng, Zhang Guangbin, Fang Chong and Mu Yanping on 2021-06-18. Its main content comprises: step 1, acquiring EEG data; step 2, performing Rényi phase transfer entropy processing on the acquired EEG data to obtain the correlation matrix C of the brain function network; step 3, selecting a threshold and binarizing the correlation matrix C to obtain the adjacency matrix A of the brain function network; and step 4, inputting the adjacency matrix A into a lightweight convolutional neural network model for feature extraction on the brain function network and classification in the epilepsy recognition task. The method achieves better results in terms of accuracy, sensitivity and specificity, and can ensure the accuracy of the model in identifying encephalopathy while greatly reducing the number of parameters and the computational cost of the model.

1. An abnormal electroencephalogram signal detection method based on Rényi phase transfer entropy and a lightweight convolutional neural network, characterized by comprising the following steps:

step 1, obtaining EEG data;

step 2, performing Rényi phase transfer entropy processing on the acquired EEG data to obtain the correlation matrix C of the brain function network;

step 3, selecting a threshold and binarizing the correlation matrix C to obtain the adjacency matrix A of the brain function network;

step 4, inputting the adjacency matrix A into a lightweight convolutional neural network model to perform feature extraction on the brain function network and classification in the epilepsy recognition task.

2. The abnormal electroencephalogram signal detection method based on Rényi phase transfer entropy and a lightweight convolutional neural network according to claim 1, characterized in that the Rényi phase transfer entropy in step 2 is defined as:

RPTE_X→Y = H_q(θ_y(t), θ_y(t')) + H_q(θ_y(t'), θ_x(t')) - H_q(θ_y(t')) - H_q(θ_y(t), θ_y(t'), θ_x(t'))   (4)

where q denotes the Rényi entropy parameter and θ(t) is the phase sequence of the analytic signal S(t).

3. The method for detecting an abnormal electroencephalogram signal based on Rényi phase transfer entropy and a lightweight convolutional neural network according to claim 1, characterized in that the threshold is selected to satisfy the following conditions:

(1) all nodes in the brain function network are connected and no isolated node exists;

(2) the average degree K of the brain network satisfies K > 2lnN, where N is the number of nodes in the network;

(3) the brain function network represented by the adjacency matrix A has the small-world property.

4. The method for detecting an abnormal electroencephalogram signal based on Rényi phase transfer entropy and a lightweight convolutional neural network according to claim 1, characterized in that the lightweight convolutional neural network model comprises a first classification layer, a second classification layer, a third classification layer and a fourth classification layer.

5. The method for detecting an abnormal electroencephalogram signal based on Rényi phase transfer entropy and a lightweight convolutional neural network according to claim 1, characterized in that the first classification layer comprises two convolution layers and a pooling layer.

6. The method for detecting an abnormal electroencephalogram signal based on Rényi phase transfer entropy and a lightweight convolutional neural network according to claim 5, characterized in that the convolution operation is given by:

F_j^(l+1) = f( Σ_i F_i^(l) * k_ij + b_j )

where F_i^(l) denotes the i-th feature map of the l-th layer of the CNN and F_j^(l+1) is the j-th feature map of the (l+1)-th layer, obtained by convolving the feature maps of the l-th layer, summing the convolutions and adding the offset; b_j is the bias term, k_ij denotes the j-th convolution kernel of the i-th channel, "*" denotes the convolution operation, and f is the activation function of the CNN.

7. The method for detecting an abnormal electroencephalogram signal based on Rényi phase transfer entropy and a lightweight convolutional neural network according to claim 1, characterized in that the second classification layer and the third classification layer each comprise a depthwise separable convolution layer and a pooling layer.

8. The method for detecting an abnormal electroencephalogram signal based on Rényi phase transfer entropy and a lightweight convolutional neural network according to claim 7, characterized in that in the depthwise separable convolution layer a channel-wise (depthwise) convolution is performed first, as in equation (13) (where ⊙ denotes multiplication of corresponding elements), a point-wise convolution is then performed as shown in equation (14), and finally substituting equation (13) into equation (14) yields the depthwise separable convolution shown in equation (15):

SepConv(W_p, W_d, y)(i,j) = PointwiseConv(i,j)(W_p, DepthwiseConv(i,j)(W_d, y))   (15)

where W is the convolution kernel, y is the input feature map, i, j denote the input feature map resolution, k, l denote the output feature map resolution, and m is the number of channels.

9. The method for detecting an abnormal electroencephalogram signal based on Rényi phase transfer entropy and a lightweight convolutional neural network according to claim 1, characterized in that the fourth classification layer comprises three fully connected layers.

10. The method for detecting an abnormal electroencephalogram signal based on Rényi phase transfer entropy and a lightweight convolutional neural network according to claim 1, characterized in that the fully connected layer is defined as:

a_i = W_i1*x_1 + W_i2*x_2 + … + W_in*x_n + b_i   (22)

where x_1, …, x_n are the inputs of the fully connected layer, a_i is the output, W_ij are the weight parameters, and b_i is the bias parameter.

Technical Field

The invention belongs to the technical field of electroencephalogram signal detection, and particularly relates to an abnormal electroencephalogram signal detection method based on Rényi phase transfer entropy and a lightweight convolutional neural network.

Background

A seizure is a transient abnormality of electrical activity in the brain. People with epilepsy, i.e. with central nervous system disorders, will have recurrent seizures at unpredictable times and often without any warning. Seizures can result in inattention or generalized convulsions. Frequent seizures increase the risk of physical injury to the individual and may even lead to death.

Because seizures are related to electroencephalogram activity, EEG signals play an important role in the diagnosis of epilepsy and the assessment of pre-operative epileptogenic zones. Electroencephalographs measure the electrical activity of the brain by means of electrodes which are arranged uniformly on the scalp. The electroencephalogram channel is formed by the difference in potentials measured by two electrodes and captures the total potential of millions of neurons. To study the coupling between two brain regions, a common approach is to analyze the EEG signal characteristics acquired by electrodes located in the brain regions.

Research shows that the brain is a complex network formed by different brain regions; the functions of the brain require the interaction and coordination of multiple brain regions, and epileptic activity can spread among different brain regions in a network fashion. Therefore, it is highly desirable to study epileptic seizures based on EEG signals from the perspective of the brain function network.

In order to analyze the effectiveness of the functional brain network (FBN) in seizure analysis, a method for constructing the functional brain network by means of Rényi phase transfer entropy is proposed, which investigates the correlation of EEG signals from different brain regions.

CNNs were originally invented and improved by Yann LeCun's research team and were designed primarily to solve image classification problems. The outputs of successive CNN layers represent progressively higher-order features of the image, and it is the convolutional layers that play the key role. Traditional filters are composed of one-dimensional or multi-dimensional operators whose parameters are set empirically by researchers; in contrast, the filters of a convolutional layer learn their parameters from the labels of the samples. With the development of CNNs, in addition to the conventional convolution kernel, various types of convolution such as transposed convolution, dilated convolution and grouped convolution have been proposed successively. The theory of convolution continues to be enriched, and the range of CNN applications continues to expand.

Disclosure of Invention

Aiming at the problems in the prior art, the invention provides an abnormal electroencephalogram signal detection method based on Rényi phase transfer entropy and a lightweight convolutional neural network, which comprises the following steps:

step 1, obtaining EEG data;

step 2, performing Rényi phase transfer entropy processing on the acquired EEG data to obtain the correlation matrix C of the brain function network;

step 3, selecting a threshold and binarizing the correlation matrix C to obtain the adjacency matrix A of the brain function network;

step 4, inputting the adjacency matrix A into a lightweight convolutional neural network model to perform feature extraction on the brain function network and classification in the epilepsy recognition task.

Further, the Rényi phase transfer entropy in step 2 is defined as:

RPTE_X→Y = H_q(θ_y(t), θ_y(t')) + H_q(θ_y(t'), θ_x(t')) - H_q(θ_y(t')) - H_q(θ_y(t), θ_y(t'), θ_x(t'))   (4)

where q denotes the Rényi entropy parameter and θ(t) is the phase sequence of the analytic signal S(t).

Further, the threshold is selected to satisfy the following conditions:

(1) all nodes in the brain function network are connected and no isolated node exists;

(2) the average degree K of the brain network satisfies K > 2lnN, where N is the number of nodes in the network;

(3) the brain function network represented by the adjacency matrix A has the small-world property.

Further, the lightweight convolutional neural network model comprises a first classification layer, a second classification layer, a third classification layer and a fourth classification layer.

Further, the first classification layer comprises two convolution layers and a pooling layer.

Further, the formula of the convolution operation is as follows:

F_j^(l+1) = f( Σ_i F_i^(l) * k_ij + b_j )

where F_i^(l) denotes the i-th feature map of the l-th layer of the CNN and F_j^(l+1) is the j-th feature map of the (l+1)-th layer, obtained by convolving the feature maps of the l-th layer, summing the convolutions and adding the offset; b_j is the bias term, k_ij denotes the j-th convolution kernel of the i-th channel, "*" denotes the convolution operation, and f is the activation function of the CNN.

Further, the second classification layer and the third classification layer both comprise a depthwise separable convolution layer and a pooling layer.

Further, in the depthwise separable convolution layer a channel-wise (depthwise) convolution is performed first, as in equation (13) (where ⊙ denotes multiplication of corresponding elements), a point-wise convolution is then performed as shown in equation (14), and finally substituting equation (13) into equation (14) yields the depthwise separable convolution shown in equation (15);

SepConv(W_p, W_d, y)(i,j) = PointwiseConv(i,j)(W_p, DepthwiseConv(i,j)(W_d, y))   (15)

where W is the convolution kernel, y is the input feature map, i, j denote the input feature map resolution, k, l denote the output feature map resolution, and m is the number of channels.

Further, the fourth classification layer comprises three fully connected layers.

Further, the fully connected layer is defined as:

a_i = W_i1*x_1 + W_i2*x_2 + … + W_in*x_n + b_i   (22)

where x_1, …, x_n are the inputs of the fully connected layer, a_i is the output, W_ij are the weight parameters, and b_i is the bias parameter.

The invention has the following advantages: the abnormal electroencephalogram signal detection method based on Rényi phase transfer entropy and a lightweight convolutional neural network achieves better results in three aspects, accuracy, sensitivity and specificity, and can ensure the accuracy of the model in identifying encephalopathy while greatly reducing the number of parameters and the computational cost of the model.

The present invention will be described in detail below with reference to the accompanying drawings and examples.

Drawings

FIG. 1 is a schematic diagram of data set partitioning.

Fig. 2 is a schematic diagram of a LightlyNet structure.

Fig. 3 is a diagram of electrode placement according to the 10-20 rule.

Fig. 4 is a parameter performance diagram of each CNN.

Detailed Description

To further explain the technical means and effects of the present invention adopted to achieve the intended purpose, the following detailed description of the embodiments, structural features and effects of the present invention will be made with reference to the accompanying drawings and examples.

The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

In the description of the present invention, it is to be understood that the terms "center", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "aligned", "overlapping", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are used only for convenience of description and simplicity of description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed in a particular orientation, and be operated, and thus, are not to be construed as limiting the present invention.

The terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature; in the description of the present invention, "a plurality" means two or more unless otherwise specified.

Example 1

In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.

Aiming at the problems in the prior art, the invention provides an abnormal electroencephalogram signal detection method based on Rényi phase transfer entropy and a lightweight convolutional neural network, which comprises the following steps:

step 1, obtaining EEG data;

step 2, performing Rényi phase transfer entropy processing on the acquired EEG data to obtain the correlation matrix C of the brain function network;

step 3, selecting a threshold and binarizing the correlation matrix C to obtain the adjacency matrix A of the brain function network;

step 4, inputting the adjacency matrix A into a lightweight convolutional neural network model to perform feature extraction on the brain function network and classification in the epilepsy recognition task.

Further, the acquired EEG data is derived from the public electroencephalogram physiological database of Boston Children's Hospital and can be downloaded from the PhysioNet website:

http://physionet.org/physiobank/database/chbmit/. The database collected electroencephalographic recordings of children with refractory epilepsy. Electroencephalograms were recorded unipolarly using a 19-channel Neurocom electroencephalograph system (XAI-medical, Ukraine). Ag/AgCl electrodes are placed on the scalp according to the international 10-20 system, with positions Fp1-F7(1), Fp2-F8(2), F3-C3(3), F4-C4(4), F7-T7(5), F8-T8(6), C3-P3(7), C4-P4(8), T7-P7(9), T8-P8(10), P3-O1(11), P4-O2(12), P7-O1(13), P8-O2(14), Fz-Cz(15), Cz-Pz(16), FT9-FT10(17), FT10-T8(18) and Pz-Oz(19). A schematic of the electrode placement is shown in fig. 3.

The data set consists of 1440 19-channel seizure samples and 1440 inter-seizure samples, each 5 seconds long. The positive and negative samples are in a 1:1 balanced configuration, 2880 samples in total; the training set accounts for 63% of the total sample set, and the validation set and the test set account for 7% and 30% of the total number of samples, respectively. The numbers of samples in the training, validation and test sets used in the experiment are 1816, 200 and 864, respectively. The brain function network is constructed from the acquired data as described above, and its correlation matrix is then used as the input of the network model.
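A minimal sketch of the 63%/7%/30% split described above, assuming the samples and labels are held in NumPy arrays and using scikit-learn; the function and variable names are illustrative, not part of the original implementation.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# X: adjacency matrices of shape (n_samples, 19, 19); y: 1 = seizure, 0 = inter-seizure.
def split_dataset(X, y, seed=0):
    # Hold out 30% of the samples as the test set (stratified, so classes stay balanced).
    X_rest, X_test, y_rest, y_test = train_test_split(
        X, y, test_size=0.30, stratify=y, random_state=seed)
    # 7% of the full set for validation = 0.07 / 0.70 = 10% of the remaining samples.
    X_train, X_val, y_train, y_val = train_test_split(
        X_rest, y_rest, test_size=0.10, stratify=y_rest, random_state=seed)
    return (X_train, y_train), (X_val, y_val), (X_test, y_test)
```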

Further, phase transfer entropy can detect variations in the intensity and direction of information flow. However, in the quantitative analysis of information-flow intensity, Shannon entropy cannot correctly characterize the signal when the intensity change is small. Rényi entropy improves the ability to analyze such signals by adjusting the entropy parameter q. Introducing the Rényi entropy formula into the phase transfer entropy yields the Rényi phase transfer entropy (RPTE).

For a given time sequence X(t), its analytic signal S(t) can be obtained by the Hilbert transform, i.e.:

S(t) = X(t) + jH[X(t)] = A(t)e^(jθ(t))   (1)

where H[X(t)] is the Hilbert transform of X(t), A(t) is the amplitude sequence of the analytic signal S(t), and θ(t) is the phase sequence of the analytic signal S(t).
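As an illustration of equation (1), the phase sequence θ(t) can be obtained from the analytic signal computed with the Hilbert transform; a minimal sketch using SciPy, with illustrative names:

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_phase(x):
    """Phase sequence theta(t) of the analytic signal S(t) = X(t) + jH[X(t)]."""
    s = hilbert(x)        # analytic signal A(t) * exp(j * theta(t))
    return np.angle(s)    # theta(t) in radians, in (-pi, pi]
```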

the definition of the renyi phase transfer entropy in the step 2 is as follows:

where q denotes the Rényi entropy parameter and θ(t) is the phase sequence of the analytic signal S(t);

in the above formula, q represents the renyi entropy parameter. The value of q is different, the influence on the result is different, and the influence of an abnormal value or an extreme value on the result can be avoided to a certain extent by adjusting the value of q. In the renyi entropy, different values of the renyi entropy can be calculated by taking different values of q in the same data model. Where, when q → 0, the corresponding RPTE is maximum. When q → 1, the renyi entropy will degenerate into shannon entropy formula according to the law of lubda. q → ∞, corresponding to the minimum RPTE entropy value. I.e. a variation interval of the entropy value is given. As q increases, the Rnyi entropy appears to decrease monotonically. p () represents a probability.

After the correlation matrix is obtained, a proper threshold needs to be determined to convert it into an adjacency matrix so that features can be extracted from the brain function network it represents. If the strength of the association between two nodes is greater than the threshold, the corresponding entry is set to "1" and the nodes are considered connected; otherwise the two nodes are not connected and the entry is set to "0".

From the network perspective, a reasonable brain network should possess several characteristics: first, the integrity of the brain network should be guaranteed; second, its small-world nature should be preserved; finally, a certain network density should be maintained.

Further, the threshold is selected to satisfy the following conditions:

(1) all nodes in the brain function network are connected and no isolated node exists;

(2) the average degree K of the brain network satisfies K > 2lnN, where N is the number of nodes in the network;

(3) the brain function network represented by the adjacency matrix A has the small-world property.

The average degree K, one of the complex-network attributes, is used as the index for choosing the threshold. A set of candidate thresholds T = {t_1, t_2, ..., t_n} is used to binarize the correlation matrix of the brain function network. Specifically, a threshold range T ∈ [t_1, t_n] is selected, and the change of the mean K value of the brain function networks during seizures and inter-seizure periods is analyzed under different thresholds. The value of T at which K satisfies the above conditions and the difference between K for seizure and inter-seizure networks is largest is then chosen as the threshold for binarizing the correlation matrix.

After selecting the appropriate threshold, the correlation matrix is binarized: each element of the correlation matrix is set to 1 when it is greater than the threshold and to 0 otherwise. This results in an adjacency matrix containing only 0s and 1s.
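A minimal NumPy sketch of this thresholding step and of the average-degree condition K > 2lnN; the names are illustrative.

```python
import numpy as np

def binarize(C, t):
    """Adjacency matrix A: 1 where the correlation exceeds the threshold t, else 0."""
    A = (C > t).astype(int)
    np.fill_diagonal(A, 0)        # no self-loops on the main diagonal
    return A

def mean_degree(A):
    """Average degree K of the network represented by adjacency matrix A."""
    return A.sum(axis=1).mean()

def degree_condition_ok(A):
    """Check condition (2): K > 2 ln N, where N is the number of nodes."""
    N = A.shape[0]
    return mean_degree(A) > 2.0 * np.log(N)
```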

In complex network theory, the brain function network is represented by an adjacency matrix A. The adjacency matrix directly displays the sparsity of the network, and the complex-network features of the brain function network can then be derived from it. The adjacency matrix has the following properties:

First, in an undirected graph the adjacency matrix is symmetric about the main diagonal, and the elements on the main diagonal are all zero.

Second, in a directed graph the degree of node i is the sum of its out-degree and in-degree: the out-degree is the number of non-zero elements in the i-th row, and the in-degree is the number of non-zero elements in the i-th column.

Third, when the network structure is expressed by the adjacency matrix, storage proportional to the square of the number of nodes N, i.e., N^2, is required. When the network is an undirected graph, the symmetry of the matrix allows it to be compressed: only the upper or lower triangle of the matrix needs to be stored, reducing the storage from N^2 to N(N-1)/2.

Further, the lightweight convolutional neural network model comprises a first classification layer, a second classification layer, a third classification layer and a fourth classification layer.

Further, the first classification layer comprises two convolution layers and a pooling layer.

In order to identify electroencephalogram signals, a lightweight CNN model with depthwise separable convolution as its core, named LightlyNet, is designed. In the feature extraction stage, stacks of 3 × 3 convolutional layers are used to expand the receptive field, and depthwise separable convolution is introduced into the design to reduce the number of model parameters. In the classification stage, three fully connected layers are designed for feature combination and classification; at the same time, to avoid overfitting, dropout (random neuron deactivation) is used in the fully connected layers. The structure of the LightlyNet model is shown in fig. 2. The network model mainly comprises four modules, as follows:
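A minimal PyTorch sketch of a LightlyNet-style model following the four modules described above (two 3 × 3 convolutions with pooling, two depthwise separable convolution blocks with pooling, and three fully connected layers with dropout). The channel widths, the 5 × 5 separable kernels, the use of max pooling throughout, and the 19 × 19 single-channel input are assumptions for illustration, not the exact patented configuration.

```python
import torch
import torch.nn as nn

class SeparableConv2d(nn.Module):
    """Depthwise convolution followed by a 1x1 pointwise convolution (eq. (15))."""
    def __init__(self, c_in, c_out, k=5):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, k, padding=k // 2, groups=c_in)
        self.pointwise = nn.Conv2d(c_in, c_out, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class LightlyNetSketch(nn.Module):
    """Illustrative LightlyNet-style model with assumed widths and 19x19 adjacency input."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.block1 = nn.Sequential(                 # module 1: two 3x3 convolutions + pooling
            nn.Conv2d(1, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(2))
        self.block2 = nn.Sequential(                 # module 2: separable convolution + pooling
            SeparableConv2d(16, 32), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2))
        self.block3 = nn.Sequential(                 # module 3: separable convolution + pooling
            SeparableConv2d(32, 64), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2))
        self.classifier = nn.Sequential(             # module 4: three fully connected layers
            nn.Flatten(),
            nn.Linear(64 * 2 * 2, 128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, 64), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(64, n_classes))

    def forward(self, x):                            # x: (batch, 1, 19, 19)
        return self.classifier(self.block3(self.block2(self.block1(x))))
```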

and (3) rolling layers: in CNN, the convolution kernel is essentially an updatable weight matrix. The convolution operation is mainly to carry out convolution operation on a convolution kernel with learnable parameters and the feature map of the previous layer so as to obtain an output feature map. The convolutional layer is mainly used for extracting the characteristics of input signals, and different convolutional kernels extract different characteristics.

Further, the formula of the convolution operation is as follows:

F_j^(l+1) = f( Σ_i F_i^(l) * k_ij + b_j )

where F_i^(l) denotes the i-th feature map of the l-th layer of the CNN and F_j^(l+1) is the j-th feature map of the (l+1)-th layer, obtained by convolving the feature maps of the l-th layer, summing the convolutions and adding the offset; b_j is the bias term, k_ij denotes the j-th convolution kernel of the i-th channel, "*" denotes the convolution operation, and f is the activation function of the CNN.

Further, the second classification layer and the third classification layer both comprise a depthwise separable convolution layer and a pooling layer.

Batch normalization layer and pooling layer: during network training, the distribution of the hidden-layer parameters of the model changes frequently, which produces large differences in the parameter distributions of different network layers, i.e., covariate shift. To solve this problem, batch normalization (BN) has gradually been applied in deep neural network design and optimization. First, during training of a deep neural network it alleviates covariate shift, making training more stable; second, BN accelerates the convergence of the network; finally, BN can also act as a regularizer. The specific BN operation transforms the activation value of each neuron in the hidden layer as follows:

x̂^(k) = (x^(k) - E[x^(k)]) / √(Var[x^(k)])

where x^(k) is the k-th dimension of the input signal of the hidden layer.

The pooling layer, also known as the downsampling layer, generally follows the convolutional layer and contains no parameters. It is mainly used to reduce the resolution of the feature map produced by the convolutional layer and to obtain spatially invariant features, thereby reducing the amount of data to be processed and accelerating the training of the neural network. In the LightlyNet network design, max pooling and stochastic pooling are employed for different layers of the network structure.

Depthwise separable convolution: depthwise separable convolution (DSC) [85] splits the standard convolution by dividing feature extraction and feature aggregation into two steps. In the DSC structure, each channel of the input data undergoes a depthwise convolution operation, and the outputs of the depthwise convolution are then linearly combined by a pointwise convolution. The calculation formula is as follows:

the formula for the calculation of the common convolution is:

further, the depth-separable convolution of the depth-separable convolution layer is preceded by a channel convolution, as in equation (13) (whereRepresenting multiplication of corresponding elements), performing point-by-point convolution as shown in formula (14), and finally bringing formula (13) into formula (14) to obtain depth separable convolution as shown in formula (15);

SepConv(Wp,Wd,y)(i,j)=PointwiseConv(i,j)(Wp,DepthwiseConv(i,j)(Wd,y)) (15)

wherein W is a convolution kernel, y is an input feature map, i, j is an input feature map resolution, k, l is an output feature map resolution, and m is the number of channels.
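A minimal NumPy sketch of the two steps behind equations (13)-(15): a per-channel (depthwise) convolution followed by a 1 × 1 point-wise convolution. The "valid" padding and the array layouts are illustrative assumptions.

```python
import numpy as np

def depthwise_conv(Wd, y):
    """Per-channel convolution: Wd has shape (K, K, M) for M channels, y is (H, W, M)."""
    K, _, M = Wd.shape
    H, W, _ = y.shape
    out = np.zeros((H - K + 1, W - K + 1, M))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # element-wise product of the kernel with the local patch, summed per channel
            out[i, j, :] = np.sum(Wd * y[i:i + K, j:j + K, :], axis=(0, 1))
    return out

def pointwise_conv(Wp, y):
    """1x1 convolution across channels: Wp has shape (M, P), y is (H, W, M)."""
    return np.tensordot(y, Wp, axes=([2], [0]))      # result: (H, W, P)

def separable_conv(Wp, Wd, y):
    """SepConv(Wp, Wd, y) = PointwiseConv(Wp, DepthwiseConv(Wd, y)), eq. (15)."""
    return pointwise_conv(Wp, depthwise_conv(Wd, y))
```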

Further, the fourth classification layer comprises three fully connected layers.

This network structure greatly reduces the number of parameters and the amount of computation of the model, thereby improving the detection rate without significantly changing the detection accuracy. Assuming an M × M input feature map with N channels, a K × K convolution kernel, P output channels and a stride of 1, the number of standard convolution parameters is:

W_SC = K × K × N × P   (16)

and the corresponding calculated amount is:

O_SC = M × M × K × K × N × P   (17)

the parameters of DSC are:

W_DSC = K × K × N + N × P   (18)

and the corresponding calculated amount is:

O_DSC = M × M × K × K × N + M × M × N × P   (19)

Thus, the ratios of the parameter counts and of the computation amounts of the two structures are:

W_DSC / W_SC = (K × K × N + N × P) / (K × K × N × P) = 1/P + 1/K²   (20)

O_DSC / O_SC = (M × M × K × K × N + M × M × N × P) / (M × M × K × K × N × P) = 1/P + 1/K²   (21)

In the LightlyNet network, a 5 × 5 convolution kernel is used, i.e., K = 5, so the number of DSC parameters is reduced to roughly 1/25 of that of the conventional convolution.

Classification layer: the LightlyNet network model uses three fully connected layers as the classification stage. A fully connected layer is equivalent to the hidden layer of a multilayer perceptron: each neuron of one layer is connected to every neuron of the previous layer, while neurons within the same layer are independent and unconnected. In the LightlyNet classification stage, the input is a one-dimensional feature vector obtained by flattening the features learned by the CNN, and the output is a weighted summation of the feature vector.

Further, the fully connected layer is defined as:

a_i = W_i1*x_1 + W_i2*x_2 + … + W_in*x_n + b_i   (22)

where x_1, …, x_n are the inputs of the fully connected layer, a_i is the output, W_ij are the weight parameters, and b_i is the bias parameter.

In summary, the abnormal electroencephalogram signal detection method based on Rényi phase transfer entropy and a lightweight convolutional neural network achieves better results in terms of accuracy, sensitivity and specificity, and can ensure the accuracy of the model in identifying encephalopathy while greatly reducing the number of parameters and the computational cost of the model.

Example 2

Model scale analysis

To evaluate the whole model, the parameters of the convolutional layers, bias terms, fully connected layers and so on of the LightlyNet model are counted layer by layer and then summed.
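A minimal PyTorch sketch of such a layer-by-layer parameter count; the model variable is assumed to be an nn.Module such as the sketch given earlier.

```python
import torch
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    """Sum the parameter counts (weights and biases) over all layers of the model."""
    return sum(p.numel() for p in model.parameters())

# Usage with the illustrative model sketched earlier:
# print(count_parameters(LightlyNetSketch()))
```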

Three classic CNNs are adopted for comparison with LightlyNet. A 3-channel 32 × 32 tensor is uniformly used as input, the summary toolkit is used for the analysis, and the model-scale results are shown in FIG. 4.

The parameters and computations of the four convolutional neural network models are analyzed in fig. 4. From the results shown in fig. 4(a), the LightlyNet model designed here has 408842 parameters, the classical LeNet model has 121182 parameters, while the lightweight models MobileNetV2 and ShuffleNetV2 have 3504872 and 1255654 parameters, respectively. Compared with the MobileNetV2 model, the LightlyNet model reduces the number of parameters by 88.35%, and compared with the popular lightweight deep neural network model ShuffleNetV2, it reduces the number of parameters by 67.44%. From the perspective of memory footprint, the RAM overhead of LightlyNet is greatly reduced compared with the two popular classical lightweight models.

In fig. 4(b), with the 3-channel 32 × 32 tensor as input, the computation of LightlyNet is 1.22M, while the computations of LeNet, MobileNetV2 and ShuffleNetV2 are 0.36M, 7.79M and 3.06M, respectively. The computation of LightlyNet is reduced by 84.34% and 60.13% compared with the MobileNetV2 and ShuffleNetV2 models, respectively. Therefore, the LightlyNet model still greatly reduces computational complexity compared with the popular lightweight models.

Example 3

RPTE brain function network recognition result

The epilepsy recognition task is tested on brain function networks constructed under different entropy parameters q, using the classical LeNet model, the ShuffleNetV2 model, the MobileNetV2 model and the LightlyNet model proposed here.

Table 1 shows the results when the Rényi parameter q is 0.1 and the brain function network constructed by RPTE is used as the input of the CNN models. On the same test set, four convolutional neural networks, LeNet, LightlyNet, ShuffleNetV2 and MobileNetV2, are used for feature extraction and classification, and the classification results are shown in Table 1. In terms of sensitivity, LightlyNet performs best at 99.77%. In specificity, LightlyNet is only 0.46% lower than ShuffleNetV2. Combined with the parameter and computation results of the four CNN models above, LightlyNet attains the same accuracy as the mainstream lightweight models with only slightly lower specificity, which shows that the model design is effective.

Table 1. Recognition results of brain function networks constructed by RPTE with q = 0.1

Table 2 shows the results when the Rényi parameter q is 0.5 and the brain function network constructed by RPTE is used as the input of the convolutional neural network models. Using the same test set samples, LightlyNet is optimal in both accuracy and sensitivity, 99.65% and 99.77%, respectively. In specificity, LightlyNet is 0.23% lower than MobileNetV2. Combined with the conclusion above that the parameters and computation of MobileNetV2 are 8.57 times and 6.38 times those of LightlyNet, respectively, LightlyNet shows strong classification performance despite its small model size.

Table 2. Recognition results of brain function networks constructed by RPTE with q = 0.5

When the Rényi parameter q → 1, RPTE degenerates into PTE, which is a special case of RPTE, as shown in Table 3. RPTE in this limiting case is applied to construct the brain function network, which is then used as the input of the CNNs. It can be seen that ShuffleNetV2 and MobileNetV2 are stronger than LightlyNet in the accuracy, sensitivity and specificity indices, while LeNet has weaker recognition performance than LightlyNet.

Table 3. Recognition results of brain function networks constructed by PTE (q → 1)

When the Rényi parameter q is 1.5, as shown in Table 4, LightlyNet has the highest sensitivity, 96.53%, i.e., in terms of missed diagnosis in epilepsy detection it is the least likely to miss seizures. Its accuracy is 0.58% and 0.81% lower than that of ShuffleNetV2 and MobileNetV2, respectively.

Table 4. Recognition results of brain function networks constructed by RPTE with q = 1.5

When the Rényi parameter q is 2.5, as shown in Table 5, LightlyNet has the highest accuracy and sensitivity, 94.79% and 95.83%, respectively, while its specificity is 1.16% and 1.39% lower than that of ShuffleNetV2 and MobileNetV2.

Table 5. Recognition results of brain function networks constructed by RPTE with q = 2.5

In summary, four convolutional neural network models are used to study the influence on epilepsy recognition performance when RPTEs with different Rényi parameters are used as inputs. The results show that the LightlyNet model designed here can ensure the accuracy of epilepsy recognition while greatly reducing the number of parameters and the computation of the model.

Example 4

Comparison of recognition results of different brain function network construction methods

1) For epileptic signals, experimental studies are carried out with the LightlyNet method on brain function networks constructed with different Rényi parameters and on the brain function network constructed with the phase-locking value (PLV); the results are shown in Table 6.

First, the brain function networks constructed by the different "edge" analysis methods are used as input; LightlyNet is then trained for 200 iterations and exported; finally, the trained LightlyNet model is applied to the test set for evaluation. The classification results on the test set are shown in Table 6. Accuracy is the index for correctly identifying seizures and inter-seizure periods. Using the brain function network constructed by RPTE as input, the LightlyNet recognition accuracy is as high as 99.65% when q is 0.5 and as low as 94.79% when q is 2.5. When the brain function network constructed by PLV is used as input, the LightlyNet model shows 92.96% accuracy. The phase-locking value method is thus below 94.79%, which shows that even the worst case of RPTE performance exceeds the phase-locking value method and demonstrates the effectiveness of RPTE in constructing the brain function network.

Table 6. Classification performance on the test set

Sensitivity is the proportion of seizure samples that the classifier correctly judges as seizures, i.e., it reflects whether seizures are missed. The RPTE method achieves up to 99.77% sensitivity when q is 0.5; the worst RPTE sensitivity, 95.83%, occurs when q is 2.5, and the 96.30% sensitivity of the phase-locking value method is higher only than the q = 2.5 case.

Specificity, in the context of medical statistics, is the proportion of inter-seizure samples that the classifier correctly judges as inter-seizure, i.e., from the misdiagnosis perspective it indicates how accurately the model detects inter-seizure periods. Applying LightlyNet to the RPTE brain function network, the specificity reaches 99.54% when q is 0.5; when q is 1.5 and 2.5 it is still 93.75%, clearly stronger than the 90.05% of the phase-locking value. In conclusion, RPTE is a more general phase transfer entropy based on Rényi entropy, and its accuracy, specificity and sensitivity are higher than those of the method that constructs the brain function network with PTE. The comparison with the phase-locking value method likewise shows that the brain function network constructed by RPTE is more effective for epilepsy recognition than the PLV method.
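A minimal sketch of the three indices used above, computed from binary predictions with seizure as the positive class; the names are illustrative.

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Accuracy, sensitivity and specificity with seizure labeled 1 (positive class)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))   # seizures detected as seizures
    tn = np.sum((y_true == 0) & (y_pred == 0))   # inter-seizure detected as inter-seizure
    fp = np.sum((y_true == 0) & (y_pred == 1))   # false alarms
    fn = np.sum((y_true == 1) & (y_pred == 0))   # missed seizures
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)                 # high value means few missed diagnoses
    specificity = tn / (tn + fp)                 # high value means few misdiagnoses
    return accuracy, sensitivity, specificity
```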

Results of the experiment

The present study proposes the LightlyNet method based on depthwise separable convolution. First, from the perspective of model size, a comparative analysis of parameters and computation is performed between LightlyNet and classical CNNs. The computation of LightlyNet is reduced by 84.34% and 60.13% compared with MobileNetV2 and ShuffleNetV2, respectively, and the number of parameters is reduced by 88.35% and 67.44%, respectively.

The identification of epileptic EEG signals is performed using LightlyNet. The research shows that the brain function network constructed using the Rényi phase transfer entropy with q = 0.5 gives the best results, with accuracy, sensitivity and specificity of 99.65%, 99.77% and 99.54%, respectively. The brain function network constructed with the phase-locking value yields 82.96%, 96.30% and 90.05% on the same three performance indices, which shows that the LightlyNet model designed by this method has practical application value.

It should be noted that the drawings of the present embodiment are in a very simplified form and all use non-precise ratios, which are only used for convenience and clarity to aid in the description of the embodiments of the present invention.

It should be understood that the above-mentioned embodiments are merely illustrative of the technical concepts and features of the present invention, which are intended to enable those skilled in the art to understand the contents of the present invention and implement the present invention, and therefore, the protection scope of the present invention is not limited thereby. All equivalent changes and modifications made according to the spirit of the present invention should be covered within the protection scope of the present invention.

The foregoing is a more detailed description of the invention in connection with specific preferred embodiments and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.
