Channel machine learning estimation method based on intelligent iterative initial value selection


Reading note: this technology, "Channel machine learning estimation method based on intelligent iterative initial value selection", was designed and created by 潘甦 and 陈晨阳 on 2020-12-04. The main content is as follows: the invention is a channel machine-learning estimation algorithm based on intelligent iterative initial value selection, whose main steps are: (1) model the probability distribution of the channel with a Gaussian mixture model (GMM); (2) estimate the channel with optimal Bayesian parameter estimation; (3) determine the initial values of the iterative process with an improved K-means algorithm; (4) solve the marginal probability density functions involved in step (2) with the approximate message passing (AMP) algorithm; (5) iteratively solve the parameters of the Gaussian mixture model with the expectation-maximization (EM) algorithm. The invention makes full use of the channel sample information by clustering it, replacing part of the iterative process of the EM algorithm to ensure faster convergence and better MSE performance, and, exploiting the sparsity of the beam-domain channel gains, estimates the channel with the optimal Bayesian parameter estimation algorithm.

1. A channel machine learning estimation method based on intelligent iterative initial value selection, characterized in that the method comprises the following steps:

step one, modeling a probability model of the channel by using a Gaussian mixture model;

step two, performing channel estimation by using optimal Bayesian parameter estimation;

step three, determining the initial iteration values of the Gaussian mixture model by using an improved K-means clustering algorithm;

step four, solving the marginal probability density function in step two by using the approximate message passing (AMP) algorithm;

step five, iteratively solving the parameters of the Gaussian mixture model by using the expectation-maximization (EM) algorithm.

2. The channel machine learning estimation method based on intelligent iterative initial value selection as claimed in claim 1, wherein: in step one, channel estimation for the uplink of the massive MIMO system is performed based on the sparse characteristic of the channel gain in the beam domain, and the beam-domain channel gain is modeled with a Gaussian mixture model as follows:

P(h_{k,n}) = Σ_{l=1}^{L} ρ_{n,l} CN(h_{k,n}; 0, σ²_{n,l})

wherein CN(h_{k,n}; 0, σ²_{n,l}) denotes a complex Gaussian probability density function with mean 0 and variance σ²_{n,l};

σ²_{n,l} denotes the prior parameters; ρ_{n,l} denotes a weighting coefficient, namely the mixing probability of the l-th single Gaussian distribution component, and satisfies Σ_{l=1}^{L} ρ_{n,l} = 1.

3. The channel machine learning estimation method based on intelligent iterative initial value selection as claimed in claim 1, wherein: in step two, the mean square error is minimized, and the optimal Bayesian parameter estimate is

ĥ_{k,n} = ∫ h_{k,n} Q(h_{k,n}) dh_{k,n}

wherein

Q(h_{k,n}) = ∫ P(h_n|y_n) dh_{n\k} (integration over all components of h_n except h_{k,n})

denotes the marginal probability density function of the k-th variable of the posterior probability P(h_n|y_n), which contains CK variables; the posterior probability is obtained using the Bayesian total probability formula:

P(h_n|y_n) = P(y_n|h_n) P(h_n) / P(y_n)

under the condition that the pilot information and the channel state information are known, the uncertainty of the probability P(y_n|h_n) is just the uncertainty of the channel noise, and the noise on different pilots is mutually independent, so P(y_n|h_n) factorizes over the pilot symbols; then

P(h_n|y_n) = (1/Z) P(y_n|h_n) ∏_{k=1}^{CK} P(h_{k,n})

wherein Z = P(y_n) is a normalization constant independent of h_n, serving only to ensure that the integral equals 1.

4. The channel machine learning estimation method based on intelligent iterative initial value selection as claimed in claim 1, wherein: in step three, the sample information of the channel is preprocessed by using an improved K-means algorithm, so as to obtain initial values better suited to the iteration of the Gaussian mixture model: assuming the influence of channel noise is neglected, h_n = S^{-1} y_n; each element of this data set is regarded as a separate category; a sample is randomly selected from the data set as the initial clustering center; the Euclidean distance between this center and all current elements is calculated, and the probability of each category being selected as the next clustering center is computed; finally, the next clustering center is selected by roulette-wheel selection, and this process is repeated until the number of data categories formed is L;

the iterative initial values of the Gaussian mixture model are then determined as follows: (1) the ratio of the number of channel samples in each category to the total number of channel samples is taken as the initial value of the GMM mixing probability ρ_{n,r}; (2) the variance of the channel samples in each category is taken as the initial value of the GMM variance σ²_{n,r}; (3) each element of h_n = S^{-1} y_n is taken as the initial value of μ_{k,n}.

5. The channel machine learning estimation method based on intelligent iterative initial value selection as claimed in claim 1, wherein: in step four, the marginal probability density function Q(h_{k,n}) in step two is solved by the approximate message passing (AMP) algorithm; the marginal probability density function Q(h_{k,n}) is computationally intractable to evaluate directly, and using the AMP algorithm reduces the computational complexity of the algorithm:

formula (6) is rewritten as

according to factor graph theory,

Q_k(h_{k,n}) = Q_{l→k}(h_{k,n}) Q_{k→l}(h_{k,n})    (8)

wherein

wherein

substituting formulae (9) and (10) into formula (8) gives

temporarily not taking into account the prior information of h_{k,n} and the normalization constant, the mean and variance of equation (14) are, respectively,

defining the target estimates

substituting formulas (11) and (12) into formula (16) yields

wherein

the U_{k,n} and V_{k,n} obtained above are the mean and variance obtained without considering the prior information P(h_{k,n}); now, taking the prior information of h_{k,n} into account, the posterior distribution of h_{k,n} is

therefore the target estimated mean of h_{k,n} is

and the second-order moment of h_{k,n} is estimated as

therefore the target estimated variance of h_{k,n} is

v_{k,n} = f − |μ_{k,n}|²    (22).

6. The channel machine learning estimation method based on intelligent iterative initial value selection as claimed in claim 1, wherein: in step five, the expectation-maximization (EM) algorithm is adopted to iteratively solve the Gaussian mixture model (GMM) parameters: based on the AMP algorithm, the posterior probability of h_{k,n} is

let

then the parameter updates of the Gaussian mixture model (GMM) can be written as

when the noise variance Δ_n is unknown, it can also be updated by the EM algorithm.

Technical Field

The invention belongs to the technical field of wireless communication, and in particular relates to a channel machine learning algorithm based on intelligent iterative initial value selection, which uses machine-learning techniques to improve a Bayes-GMM channel estimation algorithm and estimates the channel based on the sparse characteristic of the channel gain in the beam domain.

Background

With the ongoing informatization of society, wireless communication is developing ever faster. From the first-generation mobile communication system (1G) to the fifth-generation mobile communication system (5G), which has great development potential, the quality and speed of communication keep improving, but traditional techniques cannot keep up with the demand for mobile communication services, so massive multiple-input multiple-output (MIMO), one of the core technologies of 5G, has become a current research hotspot. The number of antennas in a massive MIMO array reaches dozens or even hundreds, far more than in a conventional MIMO system; such arrays give the wireless channel more degrees of freedom, so higher stability and faster transmission rates can be obtained.

When the receiver demodulates the original signal, the soundness of the receiver design must be guaranteed; in particular, coherent detection at the receiver requires channel state information. However, shadow fading and frequency-selective fading in wireless channels increase the randomness of the channel, which makes receiver design difficult. To remedy this, channel estimation techniques are particularly important, and research on channel estimation algorithms is therefore significant.

Channel estimation techniques are generally divided into three categories: pilot-based channel estimation, blind channel estimation, and semi-blind channel estimation. The conventional pilot-based channel estimation algorithms are least-squares (LS) estimation and minimum mean square error (MMSE) estimation. The LS channel estimation algorithm has the advantage of being very simple, but it has a relatively large estimation error because it ignores the influence of noise. The MMSE channel estimation algorithm improves on this disadvantage by taking the influence of noise into account and can obtain better performance, but the larger amount of computation increases the hardware requirements. There have been several improvements to the LS and MMSE channel estimation algorithms, such as the low-complexity, high-performance LMMSE channel estimation algorithm proposed in 2006 by M Noh, Y Lee, H Park and the discrete Fourier transform-based channel estimation algorithm proposed in 2007 by Y Kang, K Kim, H Park.
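As a rough illustration of the difference between the two classical pilot-based estimators (not taken from the patent), the sketch below simulates a toy pilot transmission and compares LS and MMSE estimates; the dimensions, the use of numpy, and the identity channel covariance are assumptions made purely for illustration.

```python
import numpy as np

# Toy setup (assumed shapes): L pilot symbols, K users, N receive antennas.
rng = np.random.default_rng(0)
L, K, N, noise_var = 16, 8, 32, 0.1
S = (rng.standard_normal((L, K)) + 1j * rng.standard_normal((L, K))) / np.sqrt(2)   # pilot matrix
H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)   # true channel
Z = np.sqrt(noise_var / 2) * (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N)))
Y = S @ H + Z                                            # received pilot observations

# LS estimate: ignores the noise, simply inverts the pilot matrix (pseudo-inverse for L > K).
H_ls = np.linalg.pinv(S) @ Y

# MMSE estimate: accounts for the noise; the channel covariance R is assumed to be identity here.
R = np.eye(K)
H_mmse = R @ S.conj().T @ np.linalg.inv(S @ R @ S.conj().T + noise_var * np.eye(L)) @ Y

for name, est in [("LS", H_ls), ("MMSE", H_mmse)]:
    print(name, "MSE:", np.mean(np.abs(est - H) ** 2))
```

The extra matrix inverse in the MMSE estimator is exactly the added computation that the text above mentions as the price for its better error performance.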

Machine learning is a data analysis method that iteratively extracts the implicit model and regularities of data from the data itself, and the purpose of a channel estimation algorithm is to estimate, from limited channel samples, the distribution model that the channel gain follows. The machine learning and channel estimation problems can therefore be combined so that channel estimation obtains better performance. In the current research on channel estimation with machine learning, there is only one channel estimation algorithm based on Bayes-GMM; it can obtain good mean square error (MSE) performance, but it adopts a uniform (averaged) assignment when initializing the iteration of the Gaussian mixture model (GMM). This choice makes the convergence of the channel estimation algorithm very slow and does not make full use of the channel sample information.

Disclosure of Invention

To address these problems, the invention provides a channel machine learning algorithm based on intelligent iterative initial value selection: an improved K-means algorithm is used to determine the iteration initial values of the GMM, and the convergence and MSE performance of the algorithm are improved while making full use of the channel sample information.

In order to achieve the purpose, the invention is realized by the following technical scheme:

Based on the representation of the received signal in the beam domain, the invention approximates the channel gains from all users to the same base-station antenna with a Gaussian mixture model (GMM) instead of the traditional single-Gaussian model; this reduces the modeling error and better matches the channel distribution in a massive MIMO system. The weighting coefficients and parameters of the GMM are updated with the expectation-maximization (EM) algorithm. The channel is estimated with the optimal Bayesian estimator, chosen according to the sparse characteristic of the channel gains in the beam domain: given the prior probability, Bayes' theorem is used to obtain the posterior probability, which yields the parameter estimate with minimum mean square error (MSE). Bayesian parameter estimation is efficient for learning and prediction, but its computational complexity is not small: the marginal probability density function of the posterior probability is a complex integral, so the approximate message passing (AMP) algorithm is adopted to reduce the computational complexity. When determining the iteration initial values, an improved K-means clustering algorithm is adopted, which overcomes the dependence of the traditional K-means algorithm on the initial centroids and the resulting variability of the clustering results. It makes full use of the channel sample information and produces, before the iteration starts, GMM parameters close to the true ones, which are then used as the iteration initial values of the GMM for finer iteration. This improves the convergence and MSE performance of the channel estimation algorithm; moreover, since the computation of the clustering algorithm is smaller than that of the EM algorithm, the number of EM iterations is reduced and the overall complexity of the method decreases.

Specifically, a channel machine learning estimation method based on intelligent iterative initial value selection includes the following steps:

step one, modeling a probability model of the channel by using a Gaussian mixture model;

step two, performing channel estimation by using optimal Bayesian parameter estimation;

step three, determining the initial iteration values of the Gaussian mixture model by using an improved K-means clustering algorithm;

step four, solving the marginal probability density function in step two by using the approximate message passing (AMP) algorithm;

step five, iteratively solving the parameters of the Gaussian mixture model by using the expectation-maximization (EM) algorithm.

The invention is further improved in that: in step one, channel estimation for the uplink of the massive MIMO system is performed based on the sparse characteristic of the channel gain in the beam domain, and the beam-domain channel gain is modeled with a Gaussian mixture model as follows:

P(h_{k,n}) = Σ_{l=1}^{L} ρ_{n,l} CN(h_{k,n}; 0, σ²_{n,l})

wherein CN(h_{k,n}; 0, σ²_{n,l}) denotes a complex Gaussian probability density function with mean 0 and variance σ²_{n,l};

σ²_{n,l} denotes the prior parameters; ρ_{n,l} denotes a weighting coefficient, namely the mixing probability of the l-th single Gaussian distribution component, and satisfies Σ_{l=1}^{L} ρ_{n,l} = 1.

The invention is further improved in that: in step two, the mean square error is minimized, and the optimal Bayesian parameter estimate is

ĥ_{k,n} = ∫ h_{k,n} Q(h_{k,n}) dh_{k,n}

wherein

Q(h_{k,n}) = ∫ P(h_n|y_n) dh_{n\k} (integration over all components of h_n except h_{k,n})

denotes the marginal probability density function of the k-th variable of the posterior probability P(h_n|y_n), which contains CK variables; the posterior probability is obtained using the Bayesian total probability formula:

P(h_n|y_n) = P(y_n|h_n) P(h_n) / P(y_n)

under the condition that the pilot information and the channel state information are known, the uncertainty of the probability P(y_n|h_n) is just the uncertainty of the channel noise, and the noise on different pilots is mutually independent, so P(y_n|h_n) factorizes over the pilot symbols; then

P(h_n|y_n) = (1/Z) P(y_n|h_n) ∏_{k=1}^{CK} P(h_{k,n})

wherein Z = P(y_n) is a normalization constant independent of h_n, serving only to ensure that the integral equals 1.

The invention is further improved in that: in step three, the sample information of the channel is preprocessed by using an improved K-means algorithm, so as to obtain initial values better suited to the iteration of the Gaussian mixture model: assuming the influence of channel noise is neglected, h_n = S^{-1} y_n; each element of this data set is regarded as a separate category; a sample is randomly selected from the data set as the initial clustering center; the Euclidean distance between this center and all current elements is calculated, and the probability of each category being selected as the next clustering center is computed; finally, the next clustering center is selected by roulette-wheel selection, and this process is repeated until the number of data categories formed is L;

the iterative initial values of the Gaussian mixture model are then determined as follows: (1) the ratio of the number of channel samples in each category to the total number of channel samples is taken as the initial value of the GMM mixing probability ρ_{n,r}; (2) the variance of the channel samples in each category is taken as the initial value of the GMM variance σ²_{n,r}; (3) each element of h_n = S^{-1} y_n is taken as the initial value of μ_{k,n}.

The invention is further improved in that: in step four, the marginal probability density function Q(h_{k,n}) in step two is solved by the approximate message passing (AMP) algorithm; the marginal probability density function Q(h_{k,n}) is computationally intractable to evaluate directly, and using the AMP algorithm reduces the computational complexity of the algorithm:

formula (6) is rewritten as

according to factor graph theory,

Q_k(h_{k,n}) = Q_{l→k}(h_{k,n}) Q_{k→l}(h_{k,n})    (8)

wherein

wherein

substituting formulae (10) and (11) into formula (9) gives

temporarily not taking into account the prior information of h_{k,n} and the normalization constant, the mean and variance of equation (14) are, respectively,

defining the target estimates

substituting formulas (12) and (13) into formula (16) and rearranging gives

wherein

the U_{k,n} and V_{k,n} obtained above are the mean and variance obtained without considering the prior information P(h_{k,n}); now, taking the prior information of h_{k,n} into account, the posterior distribution of h_{k,n} is

therefore the target estimated mean of h_{k,n} is

and the second-order moment of h_{k,n} is estimated as

therefore the target estimated variance of h_{k,n} is

v_{k,n} = f − |μ_{k,n}|²    (22).

The invention is further improved in that: in step five, the expectation-maximization (EM) algorithm is adopted to iteratively solve the Gaussian mixture model (GMM) parameters: based on the AMP algorithm, the posterior probability of h_{k,n} is

let

then the parameter updates of the Gaussian mixture model (GMM) can be written as

when the noise variance Δ_n is unknown, it can also be updated by the EM algorithm.

The invention has the beneficial effects that:

(1) the characteristics of the channel and the channel gains are fully considered; by adopting the GMM and the optimal Bayesian parameter estimation algorithm, better mean square error (MSE) performance than the traditional channel estimation algorithms can be obtained without requiring channel statistics in advance;

(2) the channel sample information is fully utilized; the improved K-means algorithm is used to determine the iteration initial values, which improves the convergence of the algorithm while reducing its computational complexity.

Drawings

FIG. 1 is a flow chart of the method of the present invention.

Detailed Description

In the following description, for purposes of explanation, numerous implementation details are set forth in order to provide a thorough understanding of the embodiments of the invention. It should be understood, however, that these implementation details are not to be interpreted as limiting the invention. That is, in some embodiments of the invention, such implementation details are not necessary. In addition, some conventional structures and components are shown in simplified schematic form in the drawings.

As shown in FIG. 1, the invention relates to a channel machine learning estimation method based on intelligent iterative initial value selection, which adopts a Gaussian mixture model to model a channel, estimates the channel by optimal Bayesian parameter estimation, and determines an iterative initial value by an improved K-means algorithm, wherein the method comprises the following steps:

Step one: modeling the probability model of the channel by using a Gaussian mixture model (GMM).

consider a large-scale MIMO system having C hexagonal cells, K user devices and 1 base station in each cell, the base station being located at the center of the cell, each user device being equipped with a single antenna, and the base station being equipped with N antennas spaced apart by half a wavelength. When performing channel estimation, each user equipment simultaneously transmits a pilot sequence with a length of L, and L ═ K is satisfied, and the pilot sequence is generally divided into two forms of orthogonal and random. Assuming that the first cell is the target cell during channel estimation, i.e. the effective signal is the pilot sequence sent by the user equipment in the first cell, the pilot sequences sent by users in other cells are interference signals, and the pilot sequences sent by the user equipment in each cell are represented by the matrix ScIs expressed in the form of ScIs a matrix of LXK, c denotes the c-th cell, and likewiseChannel vectors of all user equipments in each cell to the base station of the target cell in matrix Hc=[hc1,hc2,...,hck]HIs expressed in the form of a matrix HcIs a matrix of K X N, c denotes the c-th cell, matrix HcElement h in (1)ckIs a vector N X1, representing the channel vector from the kth ue in the c-th cell to the bs in the target cell, and expressed by the pilot sequence and the channel vector, the signal Y received by the bs in the first cell can be expressed as:

wherein S = [S_1, S_2, ..., S_C] is the concatenation of the pilot matrices of all cells, H is the corresponding stacking of the channel matrices H_c, and Z is complex additive white Gaussian noise with mean 0 and variance Δ.

Guaranteeing sparsity is a precondition for applying compressed sensing techniques. For the channel vector h_{ck} above, a typical cellular configuration gives

h_{ck} = R_{ck}^{1/2} v_{ck}    (2)

wherein v_{ck} denotes a vector following a complex Gaussian distribution with mean 0 and covariance matrix I_N, and R_{ck} denotes a positive semi-definite covariance matrix, which can be determined from

R_{ck} = ∫_A a(θ) a^H(θ) p(θ) dθ    (3)

where a(θ) denotes the steering vector of a uniform linear array (ULA), p(θ) denotes the channel power azimuth spread (PAS), and θ denotes the angle of arrival (AoA) of the signal power, with value range [−π/2, π/2]. When the antenna spacing is set to half a wavelength, a(θ) can be expressed as:

a(θ) = [1, e^{−jπ sin θ}, ..., e^{−jπ(N−1) sin θ}]^T    (4)

When the number of antennas tends to infinity, θ and the beam index n satisfy a fixed relationship, from which it can be inferred that the eigenvector matrix of R_{ck} is a DFT matrix of the corresponding size, i.e. R_{ck} can be expressed as:

R_{ck} = F Λ_{ck} F^H    (5)

wherein Λ_{ck} is a diagonal matrix formed by the eigenvalues of the channel covariance matrix R_{ck}, whose eigenvalues correspond to p(θ); according to the relationship between θ and n above, the diagonal elements of the matrix Λ_{ck} are obtained accordingly.

In the invention, a Laplacian distribution model, a typical outdoor propagation model, is adopted for the power azimuth spread of the channel, and p(θ) is expressed as:

wherein σ_AS denotes the azimuth spread (AS), and the other parameter denotes the mean angle of arrival of the signal power.
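To make equations (3)-(5) concrete, the following sketch numerically approximates the ULA covariance matrix under a Laplacian power azimuth spread and checks how close it is to being diagonalized by a DFT basis; the steering-vector and Laplacian expressions used are the standard textbook forms and are assumptions here, since the patent's own formulas (4), (6) and (7) are not reproduced in this text.

```python
import numpy as np

def ula_steering(theta, n_ant):
    """Half-wavelength ULA steering vector a(theta)."""
    return np.exp(-1j * np.pi * np.arange(n_ant) * np.sin(theta))

def laplacian_pas(theta, mean_aoa, sigma_as):
    """Laplacian power azimuth spread p(theta) (standard form, assumed)."""
    return np.exp(-np.sqrt(2) * np.abs(theta - mean_aoa) / sigma_as)

def beam_covariance(n_ant, mean_aoa, sigma_as, n_grid=2048):
    """Numerically evaluate R = integral of a(theta) a(theta)^H p(theta) dtheta over [-pi/2, pi/2]."""
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_grid)
    weights = laplacian_pas(thetas, mean_aoa, sigma_as)
    weights = weights / weights.sum()                    # normalize the discretized PAS
    R = np.zeros((n_ant, n_ant), dtype=complex)
    for theta, w in zip(thetas, weights):
        a = ula_steering(theta, n_ant)
        R += w * np.outer(a, a.conj())
    return R

N = 64
R = beam_covariance(N, mean_aoa=0.3, sigma_as=np.deg2rad(10))
F = np.fft.fft(np.eye(N)) / np.sqrt(N)                   # unitary DFT matrix
Lam = F.conj().T @ R @ F                                 # approximately diagonal for large N
off_diag = Lam - np.diag(np.diag(Lam))
print("off-diagonal energy ratio:", np.linalg.norm(off_diag) / np.linalg.norm(Lam))
```

The printed ratio shrinks as the number of antennas grows, which is the asymptotic property behind equation (5).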

Applying a discrete Fourier transform to h_{ck} transforms it into the beam domain, yielding its beam-domain form:

Transforming equation (1) into the beam domain by the discrete Fourier transform gives:

Ỹ = YF = SHF + ZF = SH̃ + Z̃    (8)

wherein Ỹ, H̃ and Z̃ denote the beam-domain forms of the received-signal matrix, the channel gain matrix and the white Gaussian noise matrix at the base station side, respectively.

In the model of the invention, the number of base-station antennas is N, corresponding to N beams; on the n-th beam, the received signal vector at the base station side can be expressed as:

y_n = S h_n + z_n    (9)

wherein y_n, h_n and z_n are the n-th columns of Ỹ, H̃ and Z̃, respectively, representing the received-signal vector, the channel gain vector and the white Gaussian noise vector of the n-th beam at the base station side. Meanwhile, because the model of the invention is a multi-cell model and the pilot sequence S is a random pilot sequence, under normal conditions the variances of the white noise on different beams are approximately equal, i.e. the variance Δ_n of z_n satisfies Δ_n ≈ Δ.
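A minimal numerical check of equations (8) and (9), with assumed dimensions: applying a unitary DFT along the antenna dimension leaves the per-beam model y_n = S h_n + z_n intact and keeps the white-noise variance on each beam approximately equal to Δ.

```python
import numpy as np

rng = np.random.default_rng(1)
L, CK, N, delta = 16, 24, 64, 0.2           # pilot length, total users (C*K), antennas, noise variance
S = (rng.standard_normal((L, CK)) + 1j * rng.standard_normal((L, CK))) / np.sqrt(2)   # random pilots
H = (rng.standard_normal((CK, N)) + 1j * rng.standard_normal((CK, N))) / np.sqrt(2)   # antenna-domain channel
Z = np.sqrt(delta / 2) * (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N)))
Y = S @ H + Z                                # equation (1): antenna-domain received pilots

F = np.fft.fft(np.eye(N)) / np.sqrt(N)       # unitary DFT matrix
Y_b, H_b, Z_b = Y @ F, H @ F, Z @ F          # equation (8): beam-domain quantities

n = 5                                        # pick one beam
y_n, h_n, z_n = Y_b[:, n], H_b[:, n], Z_b[:, n]
print(np.allclose(y_n, S @ h_n + z_n))       # equation (9) holds per beam
print("per-beam noise variance:", np.var(Z_b, axis=0).mean(), "vs delta =", delta)
```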

As can be inferred from the beam-domain form of equation (8), each element h_{k,n} of h_n = [h_{1,n}, ..., h_{CK,n}]^T is a complex Gaussian random variable, and the elements of h_n have different variances, so a Gaussian mixture model can be used to model the probability distribution of h_{k,n}:

P(h_{k,n}) = Σ_{l=1}^{L} ρ_{n,l} CN(h_{k,n}; 0, σ²_{n,l})    (10)

wherein CN(h_{k,n}; 0, σ²_{n,l}) denotes a one-dimensional complex Gaussian probability density function with mean 0 and variance σ²_{n,l},

ρ_{n,l} is the weighting coefficient of the l-th Gaussian mixture component and satisfies Σ_l ρ_{n,l} = 1. Assuming that the elements of h_n are mutually independent, then

P(h_n) = ∏_{k=1}^{CK} P(h_{k,n})    (11)
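A small illustration (with assumed, made-up parameters) of why the zero-mean mixture in equation (10) captures beam-domain sparsity: when most components have near-zero variance, most sampled gains are tiny while a few are large.

```python
import numpy as np

rng = np.random.default_rng(2)
rho = np.array([0.85, 0.10, 0.05])           # assumed mixing probabilities rho_{n,l}
sigma2 = np.array([1e-3, 0.1, 1.0])          # assumed component variances sigma^2_{n,l}
CK = 1000

comp = rng.choice(len(rho), size=CK, p=rho)  # draw a mixture component for each h_{k,n}
h = np.sqrt(sigma2[comp] / 2) * (rng.standard_normal(CK) + 1j * rng.standard_normal(CK))

print("fraction of |h| below 0.05:", np.mean(np.abs(h) < 0.05))   # most gains are near zero
print("largest |h| values:", np.round(np.sort(np.abs(h))[-3:], 3))
```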

Step two: performing channel estimation by using optimal Bayesian parameter estimation.

In the model of the invention, the minimum mean square error (MMSE) criterion is selected as the loss function of the Bayesian estimation, and the loss function is defined as:

c(ĥ_{k,n}) = |h_{k,n} − ĥ_{k,n}|²    (12)

wherein ĥ_{k,n} denotes the estimate of the channel gain.

In Bayesian estimation, the Bayesian risk is obtained by taking the expectation of the loss function; it is denoted c_R. In the loss function, ĥ_{k,n} is determined by the received signal y_n together with the pilot sequence S, which is a constant, so the loss function depends on the channel gain h_{k,n} and the received signal y_n; its expectation is the double integral over these two variables, i.e. the Bayesian risk c_R is expressed as:

c_R = ∫∫ c(ĥ_{k,n}) P(h_{k,n}, y_n) dh_{k,n} dy_n    (13)

Minimizing the Bayesian risk yields the estimate of h_{k,n}. In practice, c(ĥ_{k,n}) and P(h_{k,n}, y_n) in the Bayesian risk function are non-negative for any channel vector h_{k,n}, so the Bayesian risk is minimized when the inner quantity

c_R = ∫ c(ĥ_{k,n}) P(h_{k,n}|y_n) dh_{k,n}    (14)

takes its minimum value for every variable, i.e.:

Therefore

ĥ_{k,n} = ∫ h_{k,n} P(h_{k,n}|y_n) dh_{k,n}    (16)

The above equation is the optimal Bayesian estimate, and is also the optimal Bayesian estimate of the beam-domain channel gain.

The above equation is further processed and written as:

wherein

Q(h_{k,n}) denotes the probability density function appearing in the final optimal Bayesian estimate of the beam-domain channel gain; its mathematical meaning is the marginal probability density function of the k-th variable of the posterior probability P(h_n|y_n), which contains CK variables. Applying the Bayesian total probability formula gives another expression of the posterior probability P(h_n|y_n):

P(h_n|y_n) = P(y_n|h_n) P(h_n) / P(y_n)

Now analyze P(y_n|h_n): the pilot sequence S is a constant, so if the beam-domain channel gain h_n is known, the uncertainty of P(y_n|h_n) is just the uncertainty of the channel noise, and P(y_n|h_n) can be expressed as:

P(y_n|h_n) = CN(y_n; S h_n, Δ_n I)

Then P(h_n|y_n) can be expressed as:

P(h_n|y_n) = (1/Z) CN(y_n; S h_n, Δ_n I) ∏_{k=1}^{CK} P(h_{k,n})

wherein Z = P(y_n) is a normalization constant unrelated to h_n, serving only to ensure that the integral equals 1.
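For intuition, the scalar version of this estimator has a closed form; the following is a standard worked example (not reproduced from the patent) for an observation r = h + w with w ~ CN(0, τ) and the GMM prior of equation (10):

```latex
\begin{aligned}
P(h \mid r) &\propto \mathcal{CN}(r;\, h,\, \tau)\sum_{l=1}^{L}\rho_{l}\,\mathcal{CN}(h;\, 0,\, \sigma_l^2),\\
\hat{h}_{\mathrm{MMSE}} = \mathbb{E}[h \mid r]
  &= \sum_{l=1}^{L}\beta_{l}\,\frac{\sigma_l^2}{\sigma_l^2+\tau}\, r,
\qquad
\beta_{l} = \frac{\rho_{l}\,\mathcal{CN}(r;\, 0,\, \sigma_l^2+\tau)}
                 {\sum_{m=1}^{L}\rho_{m}\,\mathcal{CN}(r;\, 0,\, \sigma_m^2+\tau)} .
\end{aligned}
```

Each component shrinks the observation toward zero by the factor σ_l²/(σ_l²+τ), weighted by the posterior probability β_l that the gain came from that component; this is the kind of per-element computation that the AMP step (step four) evaluates instead of the full CK-dimensional integral.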

Step three: determining the initial iteration values of the Gaussian mixture model by using an improved K-means clustering algorithm.

Assuming the influence of channel noise is neglected, h_n = S^{-1} y_n, and each element of this data set is regarded as an independent category. First, a sample is randomly selected from the data set as the initial clustering center; then the Euclidean distance between this center point and all current elements, i.e. the correlation between categories, is calculated, with the calculation expression:

The probability of each category being selected as the next clustering center is then computed, and the next clustering center is selected by roulette-wheel selection; this iterative process is repeated until the number of data categories formed is L.

The GMM iteration initial values are then determined as follows (see the sketch after this list): (1) the ratio of the number of channel samples in each category to the total number of channel samples is taken as the initial value of the GMM mixing probability ρ_{n,r}; (2) the variance of the channel samples in each category is taken as the initial value of the GMM variance σ²_{n,r}; (3) each element of h_n = S^{-1} y_n is taken as the initial value of μ_{k,n}.
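A sketch in Python of the initial-value selection described above. The patent does not reproduce the selection-probability formula, so this sketch assumes the usual K-means++ convention (probability proportional to the squared distance to the nearest existing center); the mapping from clusters to GMM initial values follows points (1)-(3), and the function and variable names are illustrative only.

```python
import numpy as np

def kmeanspp_centers(x, n_clusters, rng):
    """Pick cluster centers from complex samples x by roulette-wheel (K-means++-style) selection."""
    centers = [x[rng.integers(len(x))]]                  # first center chosen at random
    while len(centers) < n_clusters:
        d2 = np.min(np.abs(x[:, None] - np.array(centers)[None, :]) ** 2, axis=1)
        probs = d2 / d2.sum()                            # assumption: probability proportional to d^2
        centers.append(x[rng.choice(len(x), p=probs)])
    return np.array(centers)

def gmm_initial_values(y_n, S, n_components, seed=0):
    """Initial GMM values from the clustered samples h_n = S^{-1} y_n, following points (1)-(3)."""
    rng = np.random.default_rng(seed)
    h_n = np.linalg.pinv(S) @ y_n                        # noise neglected, as in the text
    centers = kmeanspp_centers(h_n, n_components, rng)
    labels = np.argmin(np.abs(h_n[:, None] - centers[None, :]) ** 2, axis=1)
    rho0 = np.array([np.mean(labels == r) for r in range(n_components)])            # (1) mixing probs
    sigma2_0 = np.array([np.var(h_n[labels == r]) if np.any(labels == r) else 1e-6
                         for r in range(n_components)])                              # (2) variances
    mu0 = h_n                                            # (3) each element as the initial mu_{k,n}
    return rho0, sigma2_0, mu0
```

Because this clustering pass is cheap compared with EM iterations, starting EM from these values is what saves iterations overall, as argued in the disclosure above.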

Step four: solving the marginal probability density function in step two by using the approximate message passing (AMP) algorithm.

The marginal probability density function Q(h_{k,n}) in step two is solved with the approximate message passing algorithm: Q(h_{k,n}) is computationally intractable to evaluate directly, and using the AMP algorithm reduces the computational complexity of the algorithm:

Formula (20) is rewritten as

According to factor graph theory,

Q_k(h_{k,n}) = Q_{l→k}(h_{k,n}) Q_{k→l}(h_{k,n})    (25)

wherein, according to the complex form of the Hubbard–Stratonovich transformation and the related message passing algorithm, one obtains

wherein

Substituting formulae (26) and (27) into formula (25) gives

Temporarily not taking into account the prior information of h_{k,n} and the normalization constant, the mean and variance of equation (31) are, respectively,

Defining the target estimates

Substituting formulas (28) and (29) into formula (32) and rearranging gives

wherein

The U_{k,n} and V_{k,n} obtained above are the mean and variance obtained without considering the prior information P(h_{k,n}); now, taking the prior information of h_{k,n} into account, the posterior distribution of h_{k,n} is

Therefore the target estimated mean of h_{k,n} is

and the second-order moment of h_{k,n} is estimated as

Therefore the target estimated variance of h_{k,n} is

v_{k,n} = f − |μ_{k,n}|²    (39).
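The message-update equations (25)-(39) themselves are not reproduced in this text, so the sketch below shows a generic complex AMP iteration for y_n = S h_n + z_n combined with the GMM posterior-mean/variance denoiser from step two. It illustrates the overall structure (matched filtering, per-element scalar denoising, Onsager correction) under stated assumptions rather than the patent's exact updates; it assumes S has roughly unit-norm columns (i.i.d. entries of variance 1/L).

```python
import numpy as np

def gmm_denoise(r, tau, rho, sigma2):
    """Per-element posterior mean/variance of h given r = h + CN(0, tau) and a zero-mean GMM prior."""
    var_l = sigma2[None, :] + tau                               # evidence variance per component
    log_w = np.log(rho)[None, :] - np.log(np.pi * var_l) - np.abs(r[:, None]) ** 2 / var_l
    w = np.exp(log_w - log_w.max(axis=1, keepdims=True))
    beta = w / w.sum(axis=1, keepdims=True)                     # component responsibilities
    gain = sigma2[None, :] / var_l                              # per-component Wiener shrinkage
    mean = np.sum(beta * gain, axis=1) * r
    second = np.sum(beta * (gain * tau + np.abs(gain * r[:, None]) ** 2), axis=1)
    return mean, second - np.abs(mean) ** 2                     # posterior mean and variance

def amp_estimate(y, S, rho, sigma2, n_iter=30):
    """Generic AMP loop for y = S h + z; returns the estimated beam-domain gains."""
    L, CK = S.shape
    h_hat = np.zeros(CK, dtype=complex)
    z = y.copy()
    tau = np.linalg.norm(z) ** 2 / L                            # effective noise variance estimate
    for _ in range(n_iter):
        r = h_hat + S.conj().T @ z                              # pseudo-observation for each element
        h_hat, v = gmm_denoise(r, tau, rho, sigma2)             # scalar MMSE denoising (step two)
        z = y - S @ h_hat + (CK / L) * z * np.mean(v) / tau     # residual with Onsager correction
        tau = np.linalg.norm(z) ** 2 / L
    return h_hat
```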

Step five: iteratively solving the parameters of the Gaussian mixture model using the expectation-maximization (EM) algorithm. Based on the AMP algorithm, the posterior probability of h_{k,n} is

Let

Then the parameter updates of the Gaussian mixture model (GMM) can be written as

When the noise variance Δ_n is unknown, it can also be updated by the EM algorithm.
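The update formulas themselves are not reproduced above; the following sketch shows a standard EM-style re-estimation of the zero-mean GMM parameters and of the noise variance from the per-element quantities produced by AMP. The responsibility and moment expressions are the usual ones for a Gaussian mixture and are assumptions here, not the patent's exact formulas.

```python
import numpy as np

def em_update(r, tau, rho, sigma2, y, S, h_hat, post_var):
    """One EM-style re-estimation of the GMM parameters and the noise variance.

    r, tau: AMP pseudo-observations r = h + CN(0, tau) and their effective noise variance
    rho, sigma2: current mixing probabilities and component variances
    h_hat, post_var: per-element posterior means and variances from AMP
    """
    var_l = sigma2[None, :] + tau
    log_w = np.log(rho)[None, :] - np.log(np.pi * var_l) - np.abs(r[:, None]) ** 2 / var_l
    w = np.exp(log_w - log_w.max(axis=1, keepdims=True))
    beta = w / w.sum(axis=1, keepdims=True)                    # responsibilities of each component
    gain = sigma2[None, :] / var_l
    m_l = gain * r[:, None]                                    # per-component posterior means
    v_l = gain * tau                                           # per-component posterior variances
    rho_new = beta.mean(axis=0)                                # updated mixing probabilities
    sigma2_new = (beta * (np.abs(m_l) ** 2 + v_l)).sum(axis=0) / beta.sum(axis=0)   # updated variances
    resid = y - S @ h_hat
    delta_new = (np.linalg.norm(resid) ** 2 + (np.abs(S) ** 2 @ post_var).sum()) / len(y)
    return rho_new, sigma2_new, delta_new
```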

In summary, the complete algorithm of the present invention is as follows:
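The algorithm listing is not reproduced in this text; assembled from the five steps above, a condensed outline looks roughly as follows (it reuses the hypothetical helper functions sketched in the earlier sections and omits stopping rules and bookkeeping).

```python
# Condensed per-beam outline using the hypothetical helpers gmm_initial_values, amp_estimate, em_update.
def estimate_beam_channel(y_n, S, n_components=3, n_outer=10):
    rho, sigma2, _ = gmm_initial_values(y_n, S, n_components)   # step three: clustered initial values
    for _ in range(n_outer):                                     # steps four and five, alternated
        h_hat = amp_estimate(y_n, S, rho, sigma2)                # step four: AMP posterior means
        # step five: EM-style re-estimation of rho, sigma2 (and the noise variance) from the AMP
        # posterior quantities, e.g. via em_update(...), repeated until the parameters converge.
    return h_hat                                                 # step two: MMSE estimate of h_n
```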

the invention fully utilizes the sample information of the channel, clusters the sample information and replaces part of the iterative process of the expectation maximum algorithm (EM) to ensure faster convergence rate and better MSE performance of the iteration. And simultaneously, estimating the channel by adopting an optimal Bayesian parameter estimation algorithm according to the sparse characteristic of the beam domain channel gain.

The above description is only an embodiment of the present invention, and is not intended to limit the present invention. Various modifications and alterations to this invention will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the scope of the claims of the present invention.
