End-to-end non-contact atrial fibrillation automatic detection system and method based on vPPG signal

Document No.: 836932    Publication date: 2021-04-02

Note: This technique, "End-to-end non-contact atrial fibrillation automatic detection system and method based on vPPG signal", was designed and created by 杨学志, 张姁, 刘雪南, and 王定良 on 2020-12-08. Abstract: The invention discloses an end-to-end non-contact automatic atrial fibrillation detection system based on the vPPG signal, comprising: a data preprocessing module that records a facial video of the user under test, removes the heavily noise-corrupted portions at the beginning and end of the recording, downsamples it, and cuts it into clips of uniform length and size; a pulse wave extraction module that extracts the vPPG signal from the facial video with a P3D convolutional neural network; a data denoising module that removes noise from the vPPG data with an FCN-DN neural network; and an atrial fibrillation detection module that first trains a model to distinguish atrial fibrillation segments from non-atrial-fibrillation segments, then feeds the vPPG signal segment under test into the trained detection model to decide whether the signal contains an atrial fibrillation segment. The proposed parallel network, which combines an attention-based long short-term memory network with a convolutional network, makes the model's detection comprehensive, accurate, and effective.

1. An end-to-end non-contact automatic atrial fibrillation detection system based on a vPPG signal, comprising: a data preprocessing module for recording a facial video of the user under test, removing the heavily noise-corrupted portions at the beginning and end of the recording, and cutting it into clips of uniform length; a pulse wave extraction module for extracting the vPPG signal from the facial video with a P3D convolutional neural network; a data denoising module for removing noise from the vPPG data with an FCN-DN neural network; and an atrial fibrillation detection module for training a model to distinguish atrial fibrillation segments from non-atrial-fibrillation segments;

wherein the pulse wave extraction module is a residual network built from P3D convolutional blocks; a P3D block convolves first in image space and then along the time dimension, and the convolution kernels of the network blocks are of sizes 1 × 1, 1 × 3, and 3 × 1, respectively;

wherein the data denoising module comprises an encoder and a decoder; the encoder consists of cascaded convolution layers and batch normalization layers and compresses and denoises the vPPG signal, while the decoder, the antisymmetric counterpart of the encoder, consists of deconvolution layers and activation layers and reconstructs the vPPG signal;

wherein the atrial fibrillation detection module divides a number of vPPG signals labeled as atrial fibrillation or non-atrial fibrillation into a training set and a test set, feeds the training set into the atrial fibrillation detection model, verifies accuracy on the test set, and fine-tunes the parameters to obtain a trained atrial fibrillation detection model;

and wherein the atrial fibrillation detection module feeds the preprocessed facial video under test through the pulse wave extraction module and the data denoising module to obtain a vPPG signal segment, then feeds that segment into the trained atrial fibrillation detection model to decide whether the vPPG signal under test contains an atrial fibrillation segment.

2. The system of claim 1, wherein the atrial fibrillation detection module comprises an input layer, two jointly trained networks, a fusion layer, and a classification layer; one of the two jointly trained networks is a long short-term memory (LSTM) block, and the other is a convolutional network block.

3. The system of claim 2, wherein the LSTM block comprises a basic long short-term memory layer, an attention-based long short-term memory layer, and a downsampling layer.

4. The system of claim 3, wherein the convolutional network block comprises one-dimensional convolution layers, batch normalization layers, and max pooling layers.

5. The system of claim 4, wherein the one-dimensional convolution layers are highly parallel, extract intrinsic features of the input data, and are formed by cascading three one-dimensional convolution layers of sizes 128 × 8, 256 × 5, and 128 × 3 with batch normalization layers.

6. The system of claim 5, wherein the fusion layer is a concatenation of the outputs generated by the convolutional network block and the LSTM block.

7. The system of claim 5, wherein the classification layer is a fully connected layer that classifies the data into atrial fibrillation segments and non-atrial-fibrillation segments via a softmax function.

8. The system of claim 7, wherein an Adam optimizer updates the training parameters of the model, the batch size is 128, and the loss function is the cross-entropy loss.

9. An end-to-end non-contact automatic atrial fibrillation detection method based on a vPPG signal, the method being based on the system of any one of claims 1 to 8, characterized in that the method comprises the following steps:

S1, acquiring a facial video: recording a facial video of the user under test with the data acquisition module;

S2, preprocessing the recorded facial video with the data preprocessing module;

S3, judging whether the atrial fibrillation detection model has been trained; if not, going to step S41, and if so, going to step S51;

S41, dividing a number of data samples labeled as atrial fibrillation or non-atrial fibrillation into a training set and a test set;

S42, building the pulse wave extraction module and the data denoising module, extracting vPPG signals from the training data with the pulse wave extraction module, and removing noise from the vPPG data with the data denoising module;

S43, feeding the processed vPPG signals of the training and test sets into the model training module, training the model and adjusting its parameters to finally obtain a trained atrial fibrillation detection model, saving the model parameters, and returning to step S1;

S51, loading the trained atrial fibrillation detection model and its corresponding parameters;

S52, processing the facial video data obtained in S2 with the pulse wave extraction module and the data denoising module, feeding the processed vPPG signal into the loaded atrial fibrillation detection model, which automatically produces a detection result, outputting the result, and returning to step S1.

Technical Field

The invention belongs to the technical field of medical data analysis, and particularly relates to an automatic atrial fibrillation detection system and method for facial video, based on P3D convolution and an attention-based long short-term memory network.

Background

Atrial fibrillation is an arrhythmia resulting from abnormal activity of the heart. In elderly patients over 80 years of age, its incidence is typically between 10% and 17%. By 2020, the number of atrial fibrillation patients worldwide had reached 33 million. Late-stage atrial fibrillation is usually accompanied by cardiovascular complications such as thrombosis and heart failure, so early detection and prevention of atrial fibrillation signals is important. Clinically, atrial fibrillation is confirmed by a physician through analysis of the 12-lead ECG (electrocardiogram): it is diagnosed when the heart rhythm remains irregular for more than 30 seconds and the P wave disappears. Diagnosing paroxysmal atrial fibrillation, in particular, requires 24-hour ambulatory ECG monitoring with electrode pads attached to the subject's skin, which makes the procedure complex, limits the applicable scenarios, and raises the detection cost.

PPG (photoplethysmography), widely used in fitness wristbands, records through the band's photoelectric receiver the tiny skin-color changes caused by blood-volume variations in the arteries and capillaries under the skin and extracts the pulse wave with a dedicated algorithm; it requires unobstructed blood flow at the measurement site and a light-tight fit between the skin and the receiver. vPPG (video photoplethysmography) is the video-based counterpart: a mobile-phone camera or an ordinary consumer-grade camera records the tiny skin-color changes caused by blood-volume variations in the facial capillaries. It is low-cost, non-contact, and simple, and can be used, for example, to monitor the heart rate of the elderly or of frail infants during sleep.

Traditional atrial fibrillation detection algorithms extract time-domain, frequency-domain, or time-frequency features of the RR intervals (peak-to-peak intervals) of the ECG signal, e.g. via the fast Fourier transform or the wavelet transform, screen out the more important features, and classify them with threshold-based methods, nearest neighbors, support vector machines, decision trees, and the like. Such pipelines therefore suffer from cumbersome feature-extraction steps, high detection cost, and low efficiency. In addition, existing deep-learning detection methods generally train convolutional neural networks on large ECG data sets; for vPPG data, however, no large data set of atrial fibrillation patients exists, and learning efficiency is low. No method or technique that solves or improves upon the above problems has been found.

Disclosure of Invention

In view of the defects of, or the need to improve upon, existing methods, the invention provides an end-to-end non-contact automatic atrial fibrillation detection system and method based on the vPPG signal. By remotely extracting the pulse wave from a facial video without contact, it monitors the occurrence of atrial fibrillation. It addresses the problems of the prior art: the temporal and spatial correlations of the pulse signal are not considered jointly; the larger weight that irregular segments should occupy in model training is not accounted for, lowering detection accuracy; and 12-lead ECG is costly, complex to operate, and unsuitable for daily monitoring.

To achieve the above object, the invention provides an end-to-end non-contact automatic atrial fibrillation detection system based on the vPPG signal, comprising: a data acquisition module for recording a facial video of the user under test; a data preprocessing module for removing the heavily noise-corrupted portions at the beginning and end of the recording, downsampling it, cutting it into clips of uniform length and size, and producing the corresponding one-hot labels for atrial fibrillation and non-atrial-fibrillation data; a facial-video pulse wave extraction module for extracting the pulse wave signal from the processed facial video with a P3D convolutional neural network; a data denoising module for removing noise from the vPPG data with an FCN-DN neural network; and an atrial fibrillation detection module for training a model to distinguish atrial fibrillation segments from non-atrial-fibrillation segments.

The pulse wave extraction module extracts the pulse wave by training a P3D convolutional network. Compared with 2D convolution layers, the P3D network extracts the spatial features and the temporal features of adjacent video frames simultaneously, and, borrowing the structure of a residual network, it is computationally efficient.

The data denoising module comprises an encoder and a decoder; the encoder consists of cascaded convolution layers and batch normalization layers and compresses and denoises the vPPG signal, while the decoder, the antisymmetric counterpart of the encoder, consists of deconvolution layers and activation layers and reconstructs the vPPG signal.

The atrial fibrillation detection module divides a number of vPPG signals labeled as atrial fibrillation or non-atrial fibrillation into a training set and a test set, feeds the training set into the atrial fibrillation detection model, verifies accuracy on the test set, and fine-tunes the parameters to obtain a trained model. The detection model combines an attention-based long short-term memory network with a convolutional network.

The atrial fibrillation detection module feeds the facial video under test through the data preprocessing module, the pulse wave extraction module, and the data denoising module to obtain a vPPG signal segment, then feeds that segment into the trained atrial fibrillation detection model to decide whether the vPPG signal under test contains an atrial fibrillation segment.

Further, the atrial fibrillation detection module comprises a shape-transformation layer, two jointly trained networks, a fusion layer, and a classification layer, connected in sequence; one of the two networks is a long short-term memory (LSTM) block, and the other is a convolutional network block.

Further, the size of the input layer is 1024 × 1. The LSTM block comprises a basic long short-term memory layer, an attention-based long short-term memory layer, and a downsampling layer.

Further, the convolutional network block comprises one-dimensional convolution layers, batch normalization layers, and max pooling layers. The one-dimensional convolution layers are highly parallel, extract intrinsic features of the input data, and are formed by cascading three one-dimensional convolution layers of sizes 128 × 8, 256 × 5, and 128 × 3 with batch normalization layers.

Further, the fusion layer concatenates the outputs generated by the convolutional network block and the LSTM block.

Further, the classification layer is a fully connected layer that classifies the data into atrial fibrillation segments and non-atrial-fibrillation segments via a softmax function.

Further, an Adam optimizer updates the training parameters of the model, the batch size is 128, and the loss function is the cross-entropy loss.

The invention provides an end-to-end non-contact atrial fibrillation detection system based on the vPPG signal for the field of medical data analysis, as well as a corresponding end-to-end non-contact automatic atrial fibrillation detection method.

Compared with the prior art, the scheme of the invention has the following beneficial effects:

1) The invention provides a remote non-contact automatic atrial fibrillation detection system based on the vPPG signal of a facial video, comprising a facial-video preprocessing module, a pulse wave extraction module, a data denoising module, and an atrial fibrillation detection module. The preprocessing module downsamples the data, cuts out clips of uniform size, and produces the corresponding labels; the pulse wave extraction module learns the facial pulse wave with a P3D convolutional network; the denoising module removes noise from the acquired vPPG signal with an encoder and a decoder; and the detection module is a parallel network combining an attention-based long short-term memory network with a convolutional network. Trained on atrial fibrillation patients recruited at Anhui Provincial Hospital, the method can accurately identify atrial fibrillation segments.

2) Compared with existing automatic atrial fibrillation detection systems based on machine-learning and deep-learning algorithms, the detection model of the invention is trained on data of atrial fibrillation patients collected at a hospital; it extracts the pulse wave from the facial video with a P3D convolutional network and removes noise interference with the FCN-DN denoising network, eliminating the cumbersome steps of manually setting noise-filtering thresholds and manually selecting features for atrial fibrillation data. Whether atrial fibrillation occurs can thus be detected directly from the facial video, remotely and without contact, accurately, and efficiently.

3) The proposed parallel network, which combines an attention-based long short-term memory network with a convolutional network, lets the model extract the temporal features of each single variable of the time-series data through the attention-based LSTM and the global features of multiple variables through the convolutional network; the two networks complement each other, making the detection comprehensive, accurate, and effective.

Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.

Drawings

The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification; they illustrate embodiments of the invention and together with the description serve to explain the principles of the invention, not to limit it.

Fig. 1 is a schematic structural diagram of an end-to-end non-contact atrial fibrillation automatic detection system based on vPPG signals according to an embodiment of the present invention;

fig. 2 is a flowchart of an algorithm of an end-to-end non-contact atrial fibrillation automatic detection system based on vPPG signals according to an embodiment of the present invention;

FIG. 3 is a block diagram of a P3D convolutional network model according to an embodiment of the present invention;

fig. 4 is a block diagram of a model of a FCN-DN noise reduction network according to an embodiment of the present invention;

fig. 5 is a diagram of the parallel network model combining the attention-based long short-term memory network and the convolutional network according to an embodiment of the present invention.

Detailed Description

In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. It should be noted that the embodiments and features of the embodiments in the present application may be arbitrarily combined with each other without conflict.

The invention provides an end-to-end remote non-contact automatic atrial fibrillation detection system based on the vPPG signal, as shown in fig. 1, comprising: a facial-video preprocessing module, a pulse wave extraction module, a data denoising module, and an atrial fibrillation detection module.

The facial-video preprocessing module records a 2-minute facial video of the subject. Preferably, an ordinary mobile-phone camera or a consumer-grade camera is used, and during recording the subject's head is kept as free of violent shaking as possible. The module then removes the heavily noise-corrupted portions at the beginning and end of the video (collected, e.g., indoors or in a hospital), downsamples it, and cuts it into clips of uniform length and size; it produces the corresponding one-hot labels for atrial fibrillation and non-atrial-fibrillation data according to the simultaneously collected fingertip pulse wave, reduces the video resolution, and crops each frame of the video.
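The preprocessing steps above (trimming the noisy head and tail, temporal downsampling, and cutting into uniform clips) can be sketched as follows. This is a minimal NumPy illustration: the frame rate, trim duration, and target rate are assumed values not given in the text, and only the clip length of 1024 echoes the 1024 × 1 input-layer size stated later.

```python
import numpy as np

FPS = 30         # assumed camera frame rate (not specified in the text)
TRIM_S = 5       # seconds discarded at each end (illustrative value)
CLIP_LEN = 1024  # samples per clip; echoes the 1024 x 1 input layer

def preprocess(frames, fps=FPS, target_fps=15):
    """Trim the noisy head/tail, downsample in time, cut into uniform clips."""
    trim = TRIM_S * fps
    frames = frames[trim:len(frames) - trim]  # drop noisy beginning and end
    frames = frames[::fps // target_fps]      # temporal downsampling
    n = len(frames) // CLIP_LEN               # keep whole clips only
    return frames[:n * CLIP_LEN].reshape(n, CLIP_LEN, *frames.shape[1:])
```

Label creation and per-frame cropping are omitted here, since they depend on the fingertip reference recording.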

The pulse wave extraction module extracts the vPPG signal from the processed facial video recorded by the camera with a P3D convolutional neural network. Specifically, the facial videos serve as the training set and are fed into the P3D convolutional network, with the real fingertip pulse wave as the label, stochastic gradient descent as the optimization algorithm, and the mean squared error as the loss function. The network is trained to learn to generate the corresponding facial-video pulse wave (vPPG).

The data denoising module removes noise from the vPPG data with an FCN-DN (fully connected denoising autoencoder) neural network. Autoencoders are commonly used in fields such as speech-signal denoising and data compression. The module comprises an encoder and a decoder: the encoder consists of cascaded convolution layers and batch normalization layers and compresses and denoises the vPPG signal; the decoder, the antisymmetric counterpart of the encoder, consists of deconvolution layers and activation layers and reconstructs the vPPG signal.

The atrial fibrillation detection module trains the model to distinguish atrial fibrillation segments from non-atrial-fibrillation segments. First, a number of vPPG signals labeled as atrial fibrillation or non-atrial fibrillation are divided into a training set and a test set; the training set is fed into the atrial fibrillation detection model, accuracy is verified on the test set, and the parameters are fine-tuned to obtain the trained model. The detection model combines an attention-based long short-term memory network with a convolutional network. Once the model is trained, the preprocessed facial video under test is fed through the pulse wave extraction module and the data denoising module to obtain a vPPG signal segment, which is then fed into the trained model to decide whether the vPPG signal under test contains an atrial fibrillation segment.

Fig. 2 shows the algorithm flow chart of the end-to-end non-contact automatic atrial fibrillation detection system based on the vPPG signal. As shown in fig. 2, the detection method comprises the following steps:

S1, acquiring a facial video: recording a 2-minute facial video of the user under test with the facial-video acquisition module.

S2, preprocessing the recorded facial video with the facial-video preprocessing module.

S3, judging whether the atrial fibrillation detection model has been trained; if not, going to step S41, and if so, going to step S51.

S41, dividing a number of data samples labeled as atrial fibrillation or non-atrial fibrillation into a training set and a test set.

S42, building the pulse wave extraction module and the data denoising module, extracting vPPG signals from the training data with the pulse wave extraction module, and removing noise from the vPPG data with the data denoising module.

S43, feeding the processed vPPG signals of the training and test sets into the atrial fibrillation detection module for training, obtaining a trained atrial fibrillation detection model, saving the model parameters, and returning to step S1.

S51, loading the trained atrial fibrillation detection model and its corresponding parameters.

S52, processing the facial video data obtained in S2 with the pulse wave extraction module and the data denoising module, feeding the processed vPPG signal into the loaded atrial fibrillation detection model, which automatically produces a detection result, outputting the result, and returning to step S1.

Further, the pulse wave extraction module extracts the pulse wave from the facial-video clips with the P3D convolutional network. The P3D convolutional network may be placed before or after the FCN denoising stage as needed. The convolution kernels of the network blocks in the P3D network are of sizes 1 × 1 (I), 1 × 3 (S), and 3 × 1 (T), respectively; the block structure is shown in fig. 3, where the residual units of a residual network are replaced with P3D blocks. The P3D network convolves first in image space and then along the time dimension, as in equation (1):

(I + T·S)·X_t := X_t + T(S(X_t)) = X_{t+1}   (1)

where S(·) denotes the convolution in image space, T(·) denotes the convolution along the time dimension, I is the identity mapping, and X_t denotes the output at time t.
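The factorized residual update of equation (1) can be sketched with plain NumPy: a spatial filter S applied over the rows and columns of each frame, a temporal filter T applied across frames, and a residual connection. The kernels and the single-channel layout are illustrative assumptions; a real P3D block uses learned multi-channel convolutions.

```python
import numpy as np

def conv1d_same(x, k, axis):
    # 'same'-padded 1-D convolution applied along one axis of a 3-D array
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), axis, x)

def p3d_block(X, k_s, k_t):
    """X: (time, height, width). Spatial conv S, then temporal conv T,
    with a residual connection: X_{t+1} = X_t + T(S(X_t)), as in eq. (1)."""
    S = conv1d_same(conv1d_same(X, k_s, 1), k_s, 2)  # spatial: rows then cols
    return X + conv1d_same(S, k_t, 0)                # temporal conv + residual
```

With identity kernels the block reduces to X + X, which makes the residual structure easy to check.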

Further, as shown in fig. 4, the data denoising module denoises the vPPG data segment with an encoder-decoder network. The encoder is a cascade of convolution blocks, each consisting of a convolution layer and a batch normalization layer whose activation function is the ReLU function. Encoders of this kind are commonly used in data compression and speech-signal denoising: the input data x is compressed and mapped to a latent spatial representation z, realizing compression and denoising. The decoder is the antisymmetric counterpart of the encoder and is a cascade of deconvolution blocks, each consisting of a deconvolution layer and a batch normalization layer. The decoder reconstructs the output x̂ from the latent representation z, as in equations (2) and (3), where f(·) and g(·) are nonlinear activation functions, commonly the sigmoid or ReLU function; W and b denote the encoder's weight matrix and bias matrix, and W′ and b′ the decoder's. The objective loss function is (4). The module takes vPPG signal samples with added random noise as input and the noise-free signals as reference, and removes the noise by training the network to learn the characteristics of the noise.

z = f(Wx + b)   (2)

x̂ = g(W′z + b′)   (3)

L(x, x̂) = ‖x − x̂‖²   (4)
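A minimal forward-pass sketch of the denoising autoencoder in equations (2)-(4), in NumPy. The fully connected layers and the ReLU/linear activation split are illustrative assumptions; the network in the text uses convolutional and deconvolutional blocks.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def encode(x, W, b):
    return relu(W @ x + b)            # z = f(Wx + b), eq. (2)

def decode(z, W2, b2):
    return W2 @ z + b2                # x_hat = g(W'z + b'), eq. (3); g linear here

def reconstruction_loss(x, x_hat):
    return np.mean((x - x_hat) ** 2)  # squared reconstruction error, eq. (4)
```

Training would minimize the reconstruction loss between a noisy input and its clean reference, which is how the module learns to strip the noise.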

Fig. 5 is a schematic diagram of the atrial fibrillation detection module, whose training network is a parallel network combining an attention-based long short-term memory network with a convolutional network. The attention mechanism identifies the information important for atrial fibrillation detection by taking a weighted average of the hidden-layer states of the LSTM network.

The atrial fibrillation detection module comprises a shape-transformation layer, two jointly trained networks (one a long short-term memory (LSTM) block, the other a convolutional network block), a fusion layer, and a classification layer. The size of the input layer (the input-transform block in fig. 5) is 1024 × 1.

The LSTM block comprises a basic long short-term memory layer (LSTM), an attention-based long short-term memory layer (Attention + LSTM), and a downsampling layer. The LSTM network extracts the intrinsic features of the time-series signal without forgetting long-range temporal features. Attention mechanisms come in many variants; the earliest arose in computer vision, and they were later widely applied to natural language processing. By weighting the network parameters to improve classification, attention lets the model focus on the regions of interest, so that it learns faster and performs better.

The convolutional network block comprises one-dimensional convolution layers, batch normalization layers, and max pooling layers. The one-dimensional convolution layers are highly parallel, extract intrinsic features of the input data, and are formed by cascading three one-dimensional convolution layers of sizes 128 × 8, 256 × 5, and 128 × 3 with batch normalization layers, all with ReLU activations. The batch normalization layers normalize the distributions of the network's intermediate layers, alleviating the vanishing-gradient problem and allowing a higher learning rate.

The fusion layer (Concat) concatenates the outputs generated by the convolutional network block and the LSTM block. The classification layer is a fully connected layer that classifies the data into atrial fibrillation segments and non-atrial-fibrillation segments via a softmax function.
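The fusion and classification stage can be sketched in a few lines of NumPy: concatenate the two branch feature vectors, apply a fully connected layer, and normalize with a softmax over the two classes {AF, non-AF}. The feature dimensions and the linear layer here are illustrative, not the trained network's actual shapes.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def classify(feat_lstm, feat_cnn, W, b):
    """Fusion layer: concatenate the branch outputs; classification layer:
    fully connected + softmax over {AF, non-AF}."""
    fused = np.concatenate([feat_lstm, feat_cnn])
    return softmax(W @ fused + b)
```

With untrained (all-zero) weights both class probabilities are 0.5, and they always sum to 1.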

During atrial fibrillation detection, an Adam optimizer updates the training parameters, the batch size is 128, and the loss function is the cross-entropy loss.

The attention-based long short-term memory network consists of a basic LSTM layer, an attention-based LSTM layer, and a Dropout layer; the Dropout layer prevents the model from overfitting.

The LSTM is a type of RNN (recurrent neural network) that solves the RNN's long-range vanishing-gradient problem. The LSTM comprises three gates: a forget gate f_t, an input gate i_t, and an output gate o_t; the forgetting, input, and output of information are each determined by a sigmoid function. The computation at time t is as follows:

f_t = δ(W_f · [h_{t−1}, x_t] + b_f)   (5)

i_t = δ(W_i · [h_{t−1}, x_t] + b_i)   (6)

o_t = δ(W_o · [h_{t−1}, x_t] + b_o)   (7)

C̃_t = tanh(W_c · [h_{t−1}, x_t] + b_c)   (8)

C_t = f_t * C_{t−1} + i_t * C̃_t   (9)

h_t = o_t * tanh(C_t)   (10)

where h_t denotes the hidden layer and C_t the cell-state layer that carries the information; W_f, W_i, W_o, W_c are weight matrices and b_f, b_i, b_o, b_c are bias matrices; δ is the sigmoid function, and the tanh function normalizes the data to [−1, 1].
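A single LSTM time step following the gate equations above can be written directly in NumPy. The dictionary layout of the weight and bias matrices is an illustrative convenience; real frameworks pack the four gates into one matrix.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step implementing eqs. (5)-(10).
    W and b are dicts keyed 'f', 'i', 'o', 'c' holding W_f..W_c, b_f..b_c."""
    z = np.concatenate([h_prev, x_t])       # [h_{t-1}, x_t]
    f = sigmoid(W["f"] @ z + b["f"])        # forget gate, eq. (5)
    i = sigmoid(W["i"] @ z + b["i"])        # input gate, eq. (6)
    o = sigmoid(W["o"] @ z + b["o"])        # output gate, eq. (7)
    c_tilde = np.tanh(W["c"] @ z + b["c"])  # candidate cell state
    c_t = f * c_prev + i * c_tilde          # cell state update
    h_t = o * np.tanh(c_t)                  # hidden state, eq. (10)
    return h_t, c_t
```

With all-zero weights, states, and biases, the candidate state is zero, so both the cell and hidden states remain zero.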

The attention mechanism assigns larger weights to the hidden states associated with atrial fibrillation and computes the context vector c_i by weighted summation:

c_i = Σ_{j=1}^{T_m} a_{ij} h_j   (11)

a_{ij} = exp(e_{ij}) / Σ_{k=1}^{T_m} exp(e_{ik})   (12)

where h_j is a hidden state of the LSTM layer, a_{ij} is the weight of each hidden state, c_i is the context vector at time i, and T_m is the total time. The quantity e_{ij} measures the degree of matching, i.e. the similarity:

e_{ij} = score(h_{i−1}, h_j)   (13)

score = h_{i−1}^T h_j   (14)
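The attention computation, i.e. dot-product scores against a query state, a softmax over the scores, and the weighted sum giving the context vector, can be sketched in NumPy. Using the last hidden state as the query is an illustrative choice.

```python
import numpy as np

def attention_context(H):
    """H: (T, d) matrix of LSTM hidden states.
    Dot-product score against the last state, softmax to weights a_j,
    then weighted sum to the context vector c."""
    scores = H @ H[-1]                 # e_j = score(h_query, h_j)
    a = np.exp(scores - scores.max())
    a /= a.sum()                       # attention weights a_j (softmax)
    return a @ H                       # c = sum_j a_j * h_j
```

When all hidden states are identical, the weights are uniform and the context vector equals that common state, a quick sanity check on the weighted average.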

Finally, the attention-based long short-term memory network and the convolutional network are joined through the fusion layer and then pass through the classification layer, where a fully connected layer predicts the probability of atrial fibrillation via a softmax function; the training loss is the cross-entropy loss.

The cross entropy loss function is:

L = −(1/N) Σ_{i=1}^{N} [ y^(i) log ŷ^(i) + (1 − y^(i)) log(1 − ŷ^(i)) ]   (15)

where L represents the loss, N the total number of vPPG signal segments, y^(i) the label of sample i, and ŷ^(i) the model's predicted probability for sample i.
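The binary cross-entropy loss over the N segments can be computed directly; the clipping constant is a standard numerical-stability guard, not part of the definition.

```python
import numpy as np

def cross_entropy(y_true, y_prob, eps=1e-12):
    """Binary cross-entropy over N segments:
    L = -(1/N) * sum_i [y_i log p_i + (1 - y_i) log(1 - p_i)]."""
    p = np.clip(y_prob, eps, 1.0 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(p) + (1.0 - y_true) * np.log(1.0 - p))
```

Perfect predictions drive the loss toward zero, while a 0.5/0.5 prediction on every sample gives log 2.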

In one embodiment, a data set is built from vPPG signals of atrial fibrillation patients and of healthy subjects acquired in cooperation with a hospital; with this data set as input, the weight matrices and bias terms of the atrial fibrillation detection model are trained iteratively, finally yielding the trained model.

In cooperation with Anhui Provincial Hospital, a database of atrial fibrillation patients' vPPG signals was established: facial videos and the corresponding fingertip pulse waves of 80 atrial fibrillation patients were collected for atrial fibrillation detection. Verification shows that the automatic detection results are accurate and the detection process is efficient and convenient.

With the proposed end-to-end automatic atrial fibrillation detection method for facial video based on the vPPG signal, the neural network focuses on learning the irregular segments of atrial fibrillation and learns faster; the long short-term memory network learns the intrinsic features of the time series, while the convolutional network learns the global features of the sequence in parallel, making atrial fibrillation detection suitable for more scenarios, more accurate, and cheaper.

Although the embodiments of the present invention have been described above, the above description is only for the convenience of understanding the present invention, and is not intended to limit the present invention. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the scope of the present invention should be determined by the following claims.
