Underwater acoustic FBMC communication signal detection method based on deep learning

Document No.: 1130713    Publication date: 2020-10-02

Reading note: This technology, "Underwater acoustic FBMC communication signal detection method based on deep learning" (一种基于深度学习的水声FBMC通信信号检测方法), was designed and created by 朱雨男, 王彪, 聂星阳, 葛慧林 and 刘雨佶 on 2020-05-21. Its main content is as follows: the invention discloses an underwater acoustic FBMC communication signal detection method based on deep learning. A trained deep neural network (DNN) model replaces the channel estimation, equalization and related modules at the receiving end of a conventional underwater acoustic FBMC communication system, breaking the modular constraints of the system, adaptively learning the underwater acoustic channel state information, avoiding the imaginary-part interference inherent in the original system, and improving the bit error rate performance of the system. The beneficial effects of the invention are: a well-trained DNN replaces the original channel estimation, equalization and related processes at the receiving end of the conventional underwater acoustic FBMC communication system; the underwater acoustic channel state information is acquired during the DNN training stage, and demodulation and recovery of the signal are achieved during the testing stage. On this basis, the invention further introduces an Adam weight update strategy and an L2 regularization method to optimize the DNN model, further improving the convergence efficiency and estimation accuracy of the DNN. Compared with existing channel-estimation-based methods, the invention offers advantages in both accuracy and complexity.

1. An underwater acoustic FBMC communication signal detection method based on deep learning, characterized by comprising the following steps:

Step 1: repeatedly testing the conventional underwater acoustic FBMC communication system to obtain the data set required for DNN training, dividing the data set into a training set and a test set, and performing data preprocessing;

Step 2: determining the hyper-parameters of the DNN-FBMC system according to the requirements, and initializing the neuron parameters of each layer of the DNN;

Step 3: inputting the training set data, and calculating the forward-propagation predicted value of the current DNN output layer;

Step 4: calculating the cost function of the DNN, performing DNN back propagation according to the cost function, and updating the neuron parameters of each layer;

Step 5: executing steps 3 and 4 cyclically until the DNN reaches the preset signal detection bit error rate requirement, finishing the DNN training when the cost function no longer shows a significant downward trend as the number of iterations increases, and stopping all parameter updates to obtain the trained DNN model;

Step 6: connecting the DNN model obtained in step 5 to the system receiving end for transmitted signal recovery, inputting the test set data, taking the resulting DNN forward-propagation output value as the final predicted value of the transmitted signal, comparing it with the true transmitted signal value, and calculating the bit error rate.

2. The underwater acoustic FBMC communication signal detection method based on deep learning of claim 1, wherein: the data set in step 1 comprises the original transmitted sequence x(n) at the FBMC system transmitting end and the original complex sequence y0(n) at the FBMC system receiving end; data preprocessing is performed on the complex sequence y0(n) at the FBMC receiving end by extracting the real part and the imaginary part of each complex symbol, placing the imaginary part of a symbol immediately after its real part, and recombining them into a real-valued sequence y(n).

3. The underwater acoustic FBMC communication signal detection method based on deep learning of claim 1, wherein: the DNN-FBMC system training hyper-parameters in step 2 are set as follows:

the learning rate is set to 0.01, the training set mini-batch size is 512, the test set mini-batch size is 512, the hidden layer activation function is the ReLU activation function, the output layer activation function is the Sigmoid activation function, the weight initialization method is He initialization, the weight update strategy is Adam, the L2 regularization parameter is 1.2, and the Dropout regularization parameter is 0.8.

4. The underwater acoustic FBMC communication signal detection method based on deep learning of claim 3, wherein the He initialization keeps the variance of the input and output unchanged by multiplying the randomly initialized values by a scaling factor

√(2/n[l-1])

where n[l-1] is the number of neurons in the previous layer.

5. The underwater acoustic FBMC communication signal detection method based on deep learning of claim 4, wherein the L2 regularization adds an L2 regularization term with respect to the weights ω after the original cost function J(ω, b), so that the weights are attenuated and the generalization capability is improved; the specific expression is:

J_L2(ω, b) = J(ω, b) + (λ/2m) Σ_l ||ω[l]||²

where λ is the L2 regularization parameter and m is the number of training samples.

6. The deep learning based underwater acoustic FBMC communication signal detection method according to claim 1, wherein the DNN forward propagation in step 3 is calculated by the following formulas:

z_i[l] = ω_i[l] a[l-1] + b_i[l]  (2)

a_i[l] = f[l](z_i[l])  (3)

fReLU(z) = max(0, z)  (4)

fSigmoid(z) = 1/(1 + e^(-z))  (5)

In formulas (2) to (5), z_i[l] represents the input of the i-th neuron of layer l; a_i[l] represents the output of the i-th neuron of layer l; ω_i[l] represents the weights between the i-th neuron of layer l and all neurons of the previous layer, with dimension 1 × n[l-1]; b_i[l] represents the bias of the i-th neuron of layer l; n[l] is the number of neurons in layer l; f[l](·), the activation function of layer l, is a nonlinear transformation between input and output, common activation functions being the ReLU function and the Sigmoid function.

7. The deep learning based underwater acoustic FBMC communication signal detection method according to claim 1, wherein the error value between the predicted value of the current DNN output and the actual sample supervision value in step 4 is calculated by the following formula:

J(ω, b) = -(1/m) Σ_(i=1..m) [ a(i) ln â(i) + (1 - a(i)) ln(1 - â(i)) ]  (6)

In formula (6), a(i) represents the supervision value, â(i) represents the predicted output value, and m represents the number of predicted symbols.

8. The method as claimed in claim 1, wherein in step 5 the number of DNN training iterations is 2000, and the weights ω_i[l] and biases b_i[l] of the DNN neurons obtained when training stops are denoted ω* and b*, respectively.

9. The deep learning based underwater acoustic FBMC communication signal detection method according to claim 8, wherein in step 6 the obtained DNN parameters are used for signal recovery according to the following formula:

a* = f[l]( ω[l]* f[l-1]( ··· f[1]( ω[1]* y(n) + b[1]* ) ··· ) + b[l]* )  (8)

In formula (8), a* is the final predicted value of the transmitted signal, which is combined with the actual transmitted signal value a[0] to calculate the bit error rate of the system.

10. The underwater acoustic FBMC communication signal detection method based on deep learning as claimed in claim 1, wherein the cost function in step 4 is a cross entropy function.

Technical Field

The invention relates to the technical field of underwater acoustic communication, in particular to an underwater acoustic Filter Bank Multi-Carrier (FBMC) communication signal detection method based on deep learning.

Background

Unlike terrestrial wireless channels, the underwater acoustic channel exhibits strong time-, frequency- and space-varying behaviour, pronounced multipath effects, narrow available bandwidth and severe signal attenuation; it is easily disturbed by many factors, which limits the development of underwater acoustic communication.

At present, research on underwater acoustic communication at home and abroad focuses mainly on multi-carrier modulation techniques. Compared with traditional OFDM, FBMC needs no cyclic prefix, has low out-of-band leakage, high spectral efficiency and better time-frequency localization; meanwhile, the introduction of Offset Quadrature Amplitude Modulation (OQAM) greatly improves the anti-interference performance of the system. However, the FBMC communication system satisfies strict orthogonality only in the real domain, so inherent imaginary-part interference exists; the channel estimation schemes used in OFDM systems therefore cannot be adopted directly, and the channel estimation performance is severely affected. To ensure system reliability, channel estimation methods based on training sequences, pilots and the like have been proposed in recent years, but the imaginary-part interference problem has not been fundamentally solved.

Disclosure of Invention

The invention provides an underwater acoustic signal detection method, which aims to solve the problems of the prior art that the receiving end of a traditional underwater acoustic filter bank multi-carrier (FBMC) communication system can recover the transmitted symbols only through channel estimation and equalization, so that the system complexity is high and the channel estimation accuracy is poor.

The invention provides an underwater acoustic signal detection method, which comprises the following steps:

Step 1: repeatedly testing the conventional underwater acoustic FBMC communication system to obtain the data set required for DNN training, dividing the data set into a training set and a test set, and performing data preprocessing;

Step 2: determining the hyper-parameters of the DNN-FBMC system according to the requirements, and initializing the neuron parameters of each layer of the DNN;

Step 3: inputting the training set data, and calculating the forward-propagation predicted value of the current DNN output layer;

Step 4: calculating the cost function of the DNN, and performing DNN back propagation according to the cost function to update the neuron parameters of each layer, the cost function being a cross-entropy function;

Step 5: executing steps 3 and 4 cyclically until the DNN reaches the preset signal detection bit error rate requirement, finishing the DNN training when the cost function no longer decreases significantly or reaches its minimum, and stopping all parameter updates to obtain the trained DNN model;

Step 6: connecting the DNN model obtained in step 5 to the system receiving end for transmitted signal recovery, inputting the test set data, taking the resulting DNN forward-propagation output value as the final predicted value of the transmitted signal, comparing it with the true transmitted signal value, and calculating the bit error rate.

Further, the data set in step 1 comprises: the original transmitted sequence x(n) at the FBMC system transmitting end and the original complex sequence y0(n) at the FBMC system receiving end. Since DNNs process real-valued data more easily, the complex sequence y0(n) at the FBMC receiving end must be preprocessed for the network to work effectively: the real part and the imaginary part of each complex symbol are extracted separately, the imaginary part of a symbol is placed immediately after its real part, and the result is recombined into a real-valued sequence y(n). 330000 groups of x(n), y(n) are recorded to form the data set.
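For illustration, a minimal NumPy sketch of this preprocessing step is given below; the function name and array shapes are illustrative assumptions rather than part of the patent.

```python
import numpy as np

def complex_to_real_sequence(y0):
    """Interleave real and imaginary parts of a complex received sequence.

    y0 : 1-D complex ndarray, the received FBMC sequence y0(n).
    Returns a real-valued ndarray y(n) of twice the length, with the
    imaginary part of each symbol placed directly after its real part.
    """
    y = np.empty(2 * y0.size, dtype=np.float64)
    y[0::2] = y0.real   # real part of each complex symbol
    y[1::2] = y0.imag   # imaginary part of the same symbol follows it
    return y

# Example: one received sequence (x(n) would be the known transmitted bits)
y0 = np.array([0.3 + 0.1j, -0.7 + 0.4j, 0.2 - 0.9j])
y = complex_to_real_sequence(y0)   # -> [0.3, 0.1, -0.7, 0.4, 0.2, -0.9]
```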

Further, the DNN-FBMC system training hyper-parameter setting in step 2 is as follows:

the learning rate is set to be 0.01, the training set mini-batch is set to be 512, the test set mini-batch is set to be 512, the hidden layer activation function adopts a ReLU activation function, the output layer activation function adopts a Sigmoid activation function, the weight initialization method adopts a Heinization, the weight updating strategy is Adam, the L2 regularization parameter is 1.2, and the Dropout regularization parameter is 0.8.

Wherein He initialization is to keep the variance of input and output unchanged, and multiply the value of random initialization by a scaling factor(layersdims[l-1]Indicating the size of the previous layer) so that the ReLU output probability distribution works better. The Adam optimization algorithm can be regarded as a combination of Momentum and RMSProp algorithms, can be converged quickly and learned correctly, and minimizes the loss function to the greatest extent. The L2 regularization adds an L2 regularization term related to the weight omega after the original cost function J (omega, b), so that the weight is attenuated, and the generalization capability is improved. The concrete expression is as follows:

J_L2(ω, b) = J(ω, b) + (λ/2m) Σ_l ||ω[l]||²

Dropout regularization eliminates some nodes by setting a retention probability for each neuron node, resulting in a smaller-scale network.
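As an illustration, the sketch below shows He initialization of the layer weights and the L2-regularized cross-entropy cost in NumPy; the layer sizes and helper names are assumptions made for the example, not values fixed by the patent.

```python
import numpy as np

def he_initialize(layer_dims, seed=0):
    """He initialization: scale random weights by sqrt(2 / size of previous layer)."""
    rng = np.random.default_rng(seed)
    params = {}
    for l in range(1, len(layer_dims)):
        params[f"W{l}"] = rng.standard_normal((layer_dims[l], layer_dims[l - 1])) \
                          * np.sqrt(2.0 / layer_dims[l - 1])
        params[f"b{l}"] = np.zeros((layer_dims[l], 1))
    return params

def l2_cross_entropy_cost(a_hat, a, params, lam):
    """Cross-entropy cost with an L2 penalty on all weight matrices."""
    m = a.shape[1]
    eps = 1e-12                                    # avoid log(0)
    ce = -np.mean(a * np.log(a_hat + eps) + (1 - a) * np.log(1 - a_hat + eps))
    l2 = (lam / (2 * m)) * sum(np.sum(W ** 2) for k, W in params.items() if k.startswith("W"))
    return ce + l2

# Hypothetical fully connected network (layer sizes are illustrative only)
params = he_initialize([256, 512, 512, 512, 128])
```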

Further, the DNN forward propagation in step 3 is calculated by the following formulas:

z_i[l] = ω_i[l] a[l-1] + b_i[l]  (2)

a_i[l] = f[l](z_i[l])  (3)

fReLU(z) = max(0, z)  (4)

fSigmoid(z) = 1/(1 + e^(-z))  (5)

In formulas (2) to (5), z_i[l] represents the input of the i-th neuron of layer l; a_i[l] represents the output of the i-th neuron of layer l; ω_i[l] represents the weights between the i-th neuron of layer l and all neurons of the previous layer, with dimension 1 × n[l-1]; b_i[l] represents the bias of the i-th neuron of layer l; n[l] is the number of neurons in layer l; f[l](·), the activation function of layer l, is a nonlinear transformation between input and output, common activation functions being the ReLU function and the Sigmoid function.

It can be seen from formulas (2) to (5) that the output value of the output-layer (l-th layer) neurons is the final predicted value of the DNN, which can be regarded as a nonlinear transformation of the input data a[0]. The whole forward propagation process can be expressed as:

a[l] = f[l]( ω[l] f[l-1]( ··· f[1]( ω[1] a[0] + b[1] ) ··· ) + b[l] )  (6)

In formula (6), a[0] represents the neuron values of the input layer; ω and b represent the weights and biases between all neurons in the network, and it is easy to see that they are the main parameters affecting the performance of the whole network. Therefore, by continuously optimizing the weights and biases with a large training set, the network can output the desired predicted values.
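A minimal NumPy sketch of this forward propagation, assuming a fully connected network with ReLU hidden layers and a Sigmoid output layer (the parameter dictionary layout follows the He-initialization sketch above and is an illustrative assumption):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_propagation(a0, params, num_layers):
    """Compute a[l] = f[l](W[l] a[l-1] + b[l]) layer by layer.

    a0         : input-layer values, shape (n0, batch)
    params     : dict with "W1".."W{L}", "b1".."b{L}"
    num_layers : total number of weight layers L
    Returns the output-layer activations and a cache of intermediate values.
    """
    a = a0
    cache = {"a0": a0}
    for l in range(1, num_layers + 1):
        z = params[f"W{l}"] @ a + params[f"b{l}"]        # z[l] = W[l] a[l-1] + b[l]
        a = sigmoid(z) if l == num_layers else relu(z)   # ReLU hidden layers, Sigmoid output
        cache[f"z{l}"], cache[f"a{l}"] = z, a
    return a, cache
```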

Further, the error value between the predicted value of the current DNN output and the actual sample supervision value in step 4 is calculated by the following formula:

J(ω, b) = -(1/m) Σ_(i=1..m) [ a(i) ln â(i) + (1 - a(i)) ln(1 - â(i)) ]  (7)

In formula (7), a(i) represents the supervision value, â(i) represents the predicted output value, and m represents the number of predicted symbols. By constraining the cost function, the neuron weights and biases of each layer are continuously updated during back propagation, so that the predicted values keep approaching the supervision values and the transmitted symbols are recovered.
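To illustrate how this cost drives the parameter updates, the sketch below computes the output-layer error and back-propagates it through one ReLU hidden layer. It assumes the Sigmoid-output / cross-entropy pairing described above (so the output-layer error simplifies to â − a); the gradient of the L2 penalty is omitted for brevity, and all names are illustrative.

```python
import numpy as np

def backprop_two_layer(a0, a_true, params, cache):
    """Gradients for a 2-layer network: ReLU hidden layer + Sigmoid output.

    With the cross-entropy cost, the output-layer error is simply
    dZ2 = a_hat - a_true, which is then propagated back through the ReLU.
    `cache` holds z1, a1, a2 from the forward pass.
    """
    m = a0.shape[1]
    z1, a1, a2 = cache["z1"], cache["a1"], cache["a2"]

    dZ2 = a2 - a_true                                 # Sigmoid + cross-entropy
    dW2 = dZ2 @ a1.T / m
    db2 = np.sum(dZ2, axis=1, keepdims=True) / m

    dZ1 = (params["W2"].T @ dZ2) * (z1 > 0)           # ReLU derivative
    dW1 = dZ1 @ a0.T / m
    db1 = np.sum(dZ1, axis=1, keepdims=True) / m
    return {"dW1": dW1, "db1": db1, "dW2": dW2, "db2": db2}
```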

Further, in step 5, when the number of training iterations reaches 2000, the cost function no longer decreases significantly (it tends to be stable), so the iteration criterion for completing the DNN training in the experiment is set to 2000 iterations; the weights ω_i[l] and biases b_i[l] of the DNN neurons obtained when training stops are denoted ω* and b*, respectively.

Further, the formula for performing signal recovery in step 6 using the DNN parameters obtained in step 5 is as follows:

a* = f[l]( ω[l]* f[l-1]( ··· f[1]( ω[1]* y(n) + b[1]* ) ··· ) + b[l]* )  (9)

In formula (9), a* is the final predicted value of the transmitted signal. It is combined with the actual transmitted signal value a[0] to calculate the bit error rate of the system.
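For illustration, a bit error rate calculation under the assumption that the Sigmoid outputs are hard-decided against a 0.5 threshold and compared bit-by-bit with the known transmitted bits (the threshold and function names are assumptions, not specified in the patent):

```python
import numpy as np

def bit_error_rate(a_star, a_true, threshold=0.5):
    """Hard-decide the DNN output a* and compare with the true transmitted bits.

    a_star : ndarray of Sigmoid outputs in [0, 1] (predicted transmitted signal)
    a_true : ndarray of the actual transmitted bits (0/1), same shape
    """
    bits_hat = (a_star >= threshold).astype(int)
    return np.mean(bits_hat != a_true)

# Example
a_star = np.array([0.91, 0.08, 0.62, 0.30])
a_true = np.array([1, 0, 0, 0])
print(bit_error_rate(a_star, a_true))   # 0.25
```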

The invention has the beneficial effects that:

the underwater sound signal detection method based on deep learning replaces the original processes of channel estimation, equalization and the like with a DNN which is well trained at the receiving end of the traditional underwater sound FBMC communication system. And acquiring underwater acoustic channel state information by utilizing a DNN training stage, and realizing demodulation and recovery of signals in a testing stage. On the basis, the invention also introduces an Adam weight updating strategy and an L2 regularization method to optimize the DNN model, further improves the convergence efficiency and the estimation precision of the DNN, and has certain superiority in the aspects of precision and complexity compared with the existing method based on channel estimation.

Drawings

The features and advantages of the present invention will be more clearly understood by reference to the accompanying drawings, in which:

FIG. 1 is a flow chart of an underwater acoustic signal detection method according to the present invention;

FIG. 2 is a block diagram of an underwater acoustic FBMC communication system based on deep learning according to the present invention;

FIG. 3 is a table of the system computation complexity comparison in an embodiment of the present invention;

FIG. 4 is a graph illustrating the impact of the number of iterations on the bit error rate performance of the system according to an embodiment of the present invention;

FIG. 5 is a diagram illustrating the comparison of system bit error rate performance with a small number of training samples in accordance with an embodiment of the present invention;

FIG. 6 is a diagram illustrating the comparison of system bit error rate performance with a large number of training samples according to an embodiment of the present invention.

Detailed Description

In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

As shown in fig. 1, an embodiment of the present invention provides a method for detecting an underwater acoustic signal. The method replaces modules such as channel estimation and equalization at the receiving end of the conventional underwater acoustic FBMC communication system with the trained DNN, breaks the modular constraints of the system, adaptively learns the underwater acoustic channel state information, avoids the imaginary-part interference inherent in the original system, and improves the bit error rate performance of the system. The specific steps are as follows:

step 1: repeatedly testing the traditional underwater sound FBMC communication system to obtain a sufficient data set required by DNN training, and specifically comprising the following steps: original sending sequence x (n) of FBMC system sending end and original complex sequence y of FBMC system receiving end0(n) of (a). Since DNN is easier to process real data, for effective operation, the receiving end of FBMC needs to be complex orderColumn y0(n) performing data preprocessing. And respectively extracting a real part and an imaginary part of the complex symbol, placing the imaginary part of the same symbol behind the real part of the complex symbol, and recombining the real part and the imaginary part into a real sequence y (n). Record 330000 groups x (n), y (n) form a data set.

Step 2: the DNN-FBMC system training hyper-parameters are set as follows: the learning rate is set to be 0.01, the training set mini-batch is set to be 512, the test set mini-batch is set to be 512, the hidden layer activation function adopts a ReLU activation function, the output layer activation function adopts a Sigmoid activation function, the weight initialization method adopts He initialization, the weight updating strategy is Adam, the L2 regularization parameter is 1.2, and the Dropout regularization parameter is 0.8.

He initialization keeps the variance of the input and output unchanged by multiplying the randomly initialized weights by a scaling factor √(2/layersdims[l-1]) (layersdims[l-1] indicates the size of the previous layer), so that the ReLU output distribution behaves well. The Adam optimization algorithm can be regarded as a combination of the Momentum and RMSProp algorithms; it converges quickly, learns reliably, and minimizes the loss function effectively. The L2 regularization adds an L2 regularization term with respect to the weights ω after the original cost function J(ω, b), so that the weights are attenuated and the generalization capability is improved. The specific expression is:

J_L2(ω, b) = J(ω, b) + (λ/2m) Σ_l ||ω[l]||²

Dropout regularization eliminates some nodes by setting a retention probability for each neuron node, resulting in a smaller-scale network.
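Since the patent describes Adam only at the level of "a combination of Momentum and RMSProp", the following is a generic single-parameter Adam update step for reference; the hyper-parameter values β1, β2 and ε are common defaults and are assumptions, not values specified by the patent.

```python
import numpy as np

def adam_update(w, grad, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step: Momentum-style first moment + RMSProp-style second moment.

    w, grad : parameter and its gradient (ndarrays of the same shape)
    m, v    : running first and second moment estimates
    t       : iteration counter starting at 1 (for bias correction)
    Returns the updated (w, m, v).
    """
    m = beta1 * m + (1 - beta1) * grad          # Momentum: first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # RMSProp: second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```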

Step 3: input the training set data and calculate the forward-propagation predicted value of the current DNN output layer using the following formulas:

z_i[l] = ω_i[l] a[l-1] + b_i[l]

a_i[l] = f[l](z_i[l])

fReLU(z) = max(0, z)

fSigmoid(z) = 1/(1 + e^(-z))

where z_i[l] represents the input of the i-th neuron of layer l; a_i[l] represents the output of the i-th neuron of layer l; ω_i[l] represents the weights between the i-th neuron of layer l and all neurons of the previous layer, with dimension 1 × n[l-1]; b_i[l] represents the bias of the i-th neuron of layer l; n[l] is the number of neurons in layer l; f[l](·), the activation function of layer l, is a nonlinear transformation between input and output, common activation functions being the ReLU function and the Sigmoid function.

From the above formulas it can be seen that the output value of the output-layer (l-th layer) neurons is the final predicted value of the DNN, which can be regarded as a nonlinear transformation of the input data a[0]. The whole forward propagation process can be expressed as:

a[l] = f[l]( ω[l] f[l-1]( ··· f[1]( ω[1] a[0] + b[1] ) ··· ) + b[l] )

where a[0] represents the neuron values of the input layer; ω and b represent the weights and biases between all neurons in the network, and it is easy to see that they are the main parameters affecting the performance of the whole network. Therefore, by continuously optimizing the weights and biases with a large training set, the network can output the desired predicted values.

Step 4: the error value between the predicted value of the current DNN output and the actual sample supervision value is calculated as:

J(ω, b) = -(1/m) Σ_(i=1..m) [ a(i) ln â(i) + (1 - a(i)) ln(1 - â(i)) ]

where a(i) represents the supervision value, â(i) represents the predicted output value, and m represents the number of predicted symbols. By constraining the cost function, the neuron weights and biases of each layer are continuously updated during back propagation, so that the predicted values keep approaching the supervision values and the transmitted symbols are recovered;

and 5: and (5) circularly executing the step 3 to the step 4 to enable the DNN to reach the preset requirement of the signal detection error rate. When the number of experimental iterations reaches 2000, the cost function is not obviously reduced (tends to be stable), so the iteration standard of the DNN training completion of the experiment is set as 2000 iterations, and the weight value of each neuron of the DNN obtained when the training stops is set asAnd biasAre respectively marked as omega*And b*

Step 6: connect the DNN model obtained in step 5 to the system receiving end for transmitted signal recovery, input the test set data, and take the resulting DNN forward-propagation output value as the final predicted value of the transmitted signal, according to:

a* = f[l]( ω[l]* f[l-1]( ··· f[1]( ω[1]* y(n) + b[1]* ) ··· ) + b[l]* )

where a* is the final predicted value of the transmitted signal. It is combined with the actual transmitted signal value a[0] to calculate the bit error rate of the system.

As shown in fig. 2, the transmitting end of the underwater acoustic DNN-FBMC system is consistent with the transmitting end of the conventional FBMC system, and the DNN structure is used to replace the channel estimation, equalization and demapping modules at the receiving end. The whole signal detection process is divided into a training phase and a testing phase.

In the training phase, the transmitted symbols are randomly generated binary sequences, and an unequalized complex sequence is formed at the receiving end. The complex sequence and the transmitted symbols are passed directly to the DNN model, serving respectively as the input-layer neuron values and as the supervision values for the output-layer predictions, thereby forming one training sample of the training set. This process is repeated until the training set contains sufficient training samples. At the DNN output layer, the cost function measures the difference between the DNN predicted values and the supervision values; the DNN training is finished when the cost function reaches its minimum, and the updating of each neuron's weights and biases is stopped. In the testing stage, the received complex sequence is predicted directly by the trained DNN, and the transmitted symbols are thereby recovered.

Fig. 3 compares the computational complexity (number of multiplications per execution) of DNN-FBMC with that of the conventional channel estimation algorithm. The DNN is a 5-layer fully connected network with only simple multiply-add operations between layers, so its computational complexity is of the same order of magnitude as the LS channel estimation algorithm and is concentrated mainly in the iterative process of the training stage.

Fig. 4 shows the bit error rate performance curves of the DNN-FBMC system (with L2 regularization) for different numbers of training iterations. As the number of iterations increases, the weights and biases of the neural network are continuously updated and the bit error rate performance of the system improves. However, since the room left for the samples to be learned shrinks in the later stage of training, the performance gain brought by a single iteration keeps decreasing as the number of iterations grows.

When N is 110000, the data are divided into training and test sets at a ratio of 9:1 to obtain the bit error rate curves (Fig. 5). When N is 330000, the data are divided at a ratio of 29:1 to obtain the bit error rate curves (Fig. 6). The simulation results show that the bit error rate performance of the proposed signal detection method is clearly better than that of an FBMC communication system using the conventional LS channel estimation algorithm, and that under the same conditions the L2 regularization optimization algorithm outperforms Dropout regularization. Comparing Fig. 5 and Fig. 6 shows that, with the test set data volume unchanged, increasing the number of training set samples effectively improves the bit error rate performance of DNN-FBMC.
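For reference, a split of the recorded data set along the lines described (9:1 for N = 110000 and 29:1 for N = 330000) could be written as below; the function name and the shuffling choice are assumptions.

```python
import numpy as np

def split_dataset(x, y, train_ratio, seed=0):
    """Shuffle N sample pairs and split them into training and test sets.

    x, y        : arrays of N transmitted / received (preprocessed) sequences
    train_ratio : e.g. 9/10 for a 9:1 split, 29/30 for a 29:1 split
    """
    n = x.shape[0]
    idx = np.random.default_rng(seed).permutation(n)
    cut = int(n * train_ratio)
    tr, te = idx[:cut], idx[cut:]
    return (x[tr], y[tr]), (x[te], y[te])

# N = 110000 -> 9:1 split; N = 330000 -> 29:1 split (test set size 11000 in both cases)
(x_tr, y_tr), (x_te, y_te) = split_dataset(np.zeros((110000, 8)), np.zeros((110000, 16)), 9 / 10)
```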

Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.
