Electronic nose drift compensation method based on blind domain adaptation

Document No.: 566395  Published: 2021-05-18

Reading note: this technology, "An electronic nose drift compensation method based on blind domain adaptation", was designed and created by 陶洋, 杨皓诚, 梁志芳, 黎春燕, 孔宇航 on 2019-11-18. Main content: The invention discloses an electronic nose drift compensation method based on blind domain adaptation. Working at the feature level in a blind-domain scenario, the method uses only source domain samples to construct a sparse extreme learning machine self-encoder and a sparse extreme learning machine classifier. Target domain samples are passed through the self-encoder to obtain their representation in the source domain space; this representation is then added to the original target domain features to complete a feature enhancement step, and the enhanced samples are finally input into the classifier to achieve effective classification of drifted samples. The advantages of the invention are that the whole model training process requires no target domain samples, which is closer to practical application scenarios, and that a norm constraint introduced during model construction makes the network sparse and improves the quality of the feature representation.

1. An electronic nose drift compensation method based on blind domain adaptation, characterized in that: the method performs feature learning with a sparse extreme learning machine self-encoder and label discrimination with a sparse extreme learning machine classifier through blind domain adaptation, without any target domain sample, and completes compensation of the samples collected by a drifted sensor in an electronic nose;

the method specifically comprises the following steps:

s1) constructing a self-encoder of the sparse extreme learning machine and training the self-encoder through source domain sample characteristics;

s2) constructing a sparse extreme learning machine classifier and training the sparse extreme learning machine classifier through source domain sample characteristics and labels;

s3) using the sparse extreme learning machine self-encoder and the sparse extreme learning machine classifier to correctly classify the target domain samples, thereby realizing drift compensation of the electronic nose.

2. the blind-domain-adaptive-based electronic nose drift compensation method according to claim 1, wherein in the step S1, the building and training of the sparse extreme learning machine self-encoder comprises the following steps:

step S11) input the source domain sample features X_S ∈ R^(n_s×d), the number of hidden layer nodes n_h, and the regularization coefficient λ, where n_s is the number of source domain samples and d is the feature dimension of the samples;

step S12) a three-layer neural network model is constructed, in which the number of nodes in the input and output layers is set to d and the number of hidden layer nodes to n_h, with n_h > d; the output layer target is set to O_j = X_S, where j denotes the training stage of the self-encoder;

step S13) initialize the input weight matrix W_I^(1) between the input layer and the hidden layer with a randomly generated orthogonal matrix and freeze it; bring in X_S and perform the first stage of network training to obtain the output weight matrix W_H^(1) between the hidden layer and the output layer;

Step S14) use (W_H^(1))^T as the input weight matrix for the second training stage, i.e. W_I^(2) = (W_H^(1))^T; bring in X_S and perform the second stage of network training to obtain the second-stage output weight matrix W_H^(2). The sparse extreme learning machine self-encoder M_A = {W_I^(2), W_H^(2)} is now set up and trained, where W_H^(2) ∈ R^(n_h×d);

3. The blind-domain-adaptive-based electronic nose drift compensation method according to claim 1, wherein in the step S2, the building and training of the sparse extreme learning machine classifier comprises the following steps:

step S21) input the source domain sample features X_S ∈ R^(n_s×d), the source domain sample labels Y_S ∈ R^(n_s×c), and the number of distinct label classes c in the samples;

Step S22) a three-layer neural network model is constructed, in which the input layer has d nodes, the hidden layer n_h nodes and the output layer c nodes, with n_h > c; the output target is set to the source domain sample labels Y_S;

Step S23) take the input weight matrix W_I^(2) of the sparse extreme learning machine self-encoder as the input weights of the sparse extreme learning machine classifier and freeze them; use X_S and Y_S to finish training the sparse extreme learning machine classifier M_C = {W_I, W_H}, where W_I = W_I^(2) and W_H ∈ R^(n_h×c).

4. The blind-domain-adaptive-based electronic nose drift compensation method according to claim 1, wherein the step S3 of classifying the target domain samples comprises the following steps:

step S31) the target domain sample features X_T ∈ R^(n_t×d) are input into M_A to obtain the representation X̃_T ∈ R^(n_t×d) of the target domain samples in the source domain space, where n_t is the number of target domain samples;

step S32) perform feature enhancement on X_T and X̃_T to obtain Z, i.e. Z = X_T + X̃_T, where Z ∈ R^(n_t×d);

Step S33) input Z into M_C to obtain the final classification prediction result Ŷ_T, where Ŷ_T ∈ R^(n_t×c).

Technical Field

The invention belongs to the field of electronic nose odor identification, and relates to an electronic nose drift compensation method based on blind domain adaptation.

Background

The electronic nose, also called an artificial olfaction system, is a gas identification system consisting of a gas sensor array and a pattern recognition algorithm. The key to an electronic nose's ability to mimic the human olfactory system for gas identification is that its gas sensors produce corresponding electrical signal responses to the characteristics of different gases; these responses are processed by the pattern recognition algorithm and finally converted into a gas identification result.

A sensor may drift because of aging or poisoning by external gases. Drift changes the sensor's output response under identical environmental conditions, so the sample features collected by a drifted sensor differ from those collected before drift under the same external conditions, which reduces the accuracy of the recognition algorithm. Sensor drift is prevalent in electronic nose systems and is unavoidable.

In recent years, many sensor drift compensation algorithms have been proposed, and they achieve drift compensation to a certain extent; however, they all use target domain samples, i.e. the sensor output responses after drift, during model training. In a practical application scenario the sensor has not yet drifted when the gas recognition model is trained, so only source domain samples, i.e. the drift-free output responses, are available for training. This domain adaptation problem, in which the model training phase cannot use target domain samples, is also called the blind domain adaptation problem. How to achieve effective adaptation of the target domain samples under the blind domain condition therefore strongly affects the correctness of the electronic nose gas discrimination result. An electronic nose drift compensation method based on blind domain adaptation can realize sensor drift compensation without using any target domain sample, and is more reasonable in actual use scenarios.

Disclosure of Invention

In view of the above, the present invention provides an electronic nose drift compensation method based on blind domain adaptation. It combines the feature learning of a sparse extreme learning machine self-encoder with the label discrimination of a sparse extreme learning machine classifier: with no target domain samples participating in model training, the representation of the target domain samples in the source domain space is obtained through the self-encoder, and the features shared by the source and target domains are then enhanced to shorten the inter-domain distance, improving the discrimination accuracy of the classifier and realizing electronic nose drift compensation.

For simplicity of description, the following symbols are specified in this specification:

The source domain and the target domain are denoted S and T, respectively. Source domain samples are written {x_i^S, y_i^S}, i = 1, …, n_s, where x_i^S is the feature vector of the i-th source domain sample and y_i^S is its corresponding label. Target domain samples are written {x_i^T}, i = 1, …, n_t. n_s and n_t are the numbers of source domain and target domain samples, respectively, n_h is the number of hidden layer nodes, d is the feature dimension of the samples, and c is the number of distinct label classes in the samples.
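To make the notation concrete, the following sketch sets up arrays with the shapes the method assumes. The dimension values are hypothetical illustrations, not taken from the source:

```python
import numpy as np

# Illustrative dimensions (hypothetical values, not from the patent):
n_s, n_t = 200, 50   # number of source / target domain samples
d, c = 128, 6        # feature dimension / number of gas classes
n_h = 256            # hidden nodes; the method requires n_h > d and n_h > c

rng = np.random.default_rng(0)
X_S = rng.standard_normal((n_s, d))        # source domain features, n_s x d
Y_S = np.eye(c)[rng.integers(0, c, n_s)]   # one-hot source domain labels, n_s x c
X_T = rng.standard_normal((n_t, d))        # target (drifted) features, n_t x d

assert n_h > d and n_h > c
```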

The technical problem addressed by the invention is solved by a technical scheme comprising the following steps:

step 1), constructing a self-encoder of a sparse extreme learning machine;

step 2), constructing a sparse extreme learning machine classifier;

and 3) realizing correct classification of the drifting samples through the trained self-encoder of the sparse extreme learning machine and the classifier of the sparse extreme learning machine, and finishing the drifting compensation of the electronic nose.

Further, the step 1) specifically comprises the following steps:

step 11) the source domain sample features X_S ∈ R^(n_s×d) are brought into the training process of the sparse extreme learning machine self-encoder; the number of hidden layer nodes n_h and the regularization coefficient λ are set, where n_s is the number of source domain samples and d is the feature dimension of the samples;

step 12) build a three-layer neural network model in which the input and output layers have d nodes and the hidden layer has n_h nodes, with n_h > d; the output layer target is set to O_j = X_S. The input weight matrix between the input layer and the hidden layer is denoted W_I^(j) and the output weight matrix between the hidden layer and the output layer W_H^(j); W_I^(j) and W_H^(j) are also known as the encoder and decoder weight matrices, and j denotes the training stage of the self-encoder;

step 13) initialize the input weight matrix W_I^(1) as a randomly generated orthogonal matrix and fix its value so that it is not adjusted during training; then bring in the source domain samples X_S to complete the first stage of self-encoder training and obtain the output weight matrix W_H^(1);

Step 14) take the matrix (W_H^(1))^T as the input weight matrix of the second training stage in order to transfer the sample feature information learned in the first stage, i.e. W_I^(2) = (W_H^(1))^T; fix W_I^(2), then bring in the source domain samples X_S again and perform the second stage of self-encoder training to obtain the output weight matrix W_H^(2). After training, the sparse extreme learning machine self-encoder model M_A = {W_I^(2), W_H^(2)} is obtained, where W_H^(2) ∈ R^(n_h×d).

Further, the step 2) specifically comprises the following steps:

step 21) the source domain sample features X_S and the source domain sample labels Y_S are brought into the training process of the sparse extreme learning machine classifier, where X_S ∈ R^(n_s×d) and Y_S ∈ R^(n_s×c); the number of distinct label classes c in the samples is brought in at the same time;

step 22) construct a three-layer neural network model in which the input layer has d nodes, the hidden layer n_h nodes and the output layer c nodes, with n_h > c; the output target is set to the source domain sample labels Y_S. The input weight matrix between the input layer and the hidden layer is denoted W_I and the output weight matrix between the hidden layer and the output layer W_H;

Step 23) the input weight matrix W_I is initialized to the weight matrix W_I^(2) learned in the sparse extreme learning machine self-encoder; its value is fixed at the same time so that the matrix is not adjusted during training. Then the source domain samples X_S and the source domain sample labels Y_S are used to finish training the sparse extreme learning machine classifier M_C = {W_I, W_H}, where W_H ∈ R^(n_h×c).

Further, the step 3) specifically comprises the following steps:

step 31) obtain through M_A the representation X̃_T of the target domain sample features X_T in the source domain space, where X̃_T ∈ R^(n_t×d) and n_t is the number of target domain samples;

step 32) use X_T and X̃_T for feature enhancement to obtain Z, i.e. Z = X_T + X̃_T, where Z ∈ R^(n_t×d);

Step 33) bring Z into M_C as the network input to obtain the final classification prediction result Ŷ_T, where Ŷ_T ∈ R^(n_t×c).

The invention has the beneficial effects that: the method can effectively obtain the characteristic information of the source domain sample and add the information into the characteristic space of the target domain sample through characteristic enhancement, thereby obtaining better classification precision under the condition that the target domain sample does not participate in model training and effectively realizing the drift compensation of the sensor.

Drawings

In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings, in which:

FIG. 1 is a flow chart of the method of the invention;

FIG. 2 is a block diagram of the model training process of the present method;

FIG. 3 is a block diagram of the model prediction process of the method.

Detailed Description

Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.

The invention provides an electronic nose drift compensation method based on blind domain adaptation, which, as shown in FIG. 1, comprises the following steps:

Step 1) use X_S to train the sparse extreme learning machine self-encoder M_A;

Further, the step 1) comprises the following steps:

step 11) the training model of the sparse extreme learning machine self-encoder is shown in FIG. 2; input the source domain samples X_S ∈ R^(n_s×d), the number of hidden layer nodes n_h and the regularization coefficient λ;

step 12) in the first stage of model training, the encoder weight matrix W_I^(1) is initialized randomly; so that the input features are mapped into the random subspace more effectively, a randomly generated orthogonal matrix is chosen for the initialization of W_I^(1). The output of the hidden layer nodes is then:

h(X_S) = g(W_I^(1) X_S + b)

where b denotes the bias vector of the hidden layer nodes and g(·) denotes the ReLU activation function. W_I^(1) and b need to satisfy the orthogonality conditions:

(W_I^(1))^T W_I^(1) = I,   b^T b = 1

To ensure the sparsity of the projection, an l1-norm term is introduced to constrain the output weights, with λ denoting the regularization coefficient. In the first stage of training, the output result O_1 of the sparse extreme learning machine self-encoder is therefore expressed as:

O_1 = h(X_S) W_H^(1),   W_H^(1) = argmin_β ( ||h(X_S) β − X_S||_2^2 + λ ||β||_1 )

step 13) after the first-stage training is finished, the same network structure and input samples are used for the second-stage training. The matrix (W_H^(1))^T is taken as the encoder weight parameter of the second training stage in order to transfer the sample feature information learned in the first stage, i.e. W_I^(2) = (W_H^(1))^T. The model training process is the same as in the first stage, and the hidden layer output and the network output can be expressed by the following two formulas, respectively:

h(X_S) = g(W_I^(2) X_S + b)

O_2 = h(X_S) W_H^(2),   W_H^(2) = argmin_β ( ||h(X_S) β − X_S||_2^2 + λ ||β||_1 )

At this point the sparse extreme learning machine self-encoder model is constructed and is expressed as M_A = {W_I^(2), W_H^(2)}, where W_H^(2) ∈ R^(n_h×d).
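The two-stage construction above can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the patent names the l1 constraint but not a solver, so the l1-regularized output weights are solved here by ISTA as one common choice; samples are stored as rows, and the dict layout of M_A is an assumption of this sketch:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def ista_l1(H, T, lam, n_iter=200):
    """Solve min_B ||H B - T||_F^2 + lam*||B||_1 by ISTA (proximal gradient).
    Stand-in for the unspecified l1 solver of the patent."""
    L = 2.0 * np.linalg.norm(H, 2) ** 2 + 1e-12   # Lipschitz constant of the gradient
    B = np.zeros((H.shape[1], T.shape[1]))
    for _ in range(n_iter):
        G = 2.0 * H.T @ (H @ B - T)               # gradient of the squared loss
        B = B - G / L
        B = np.sign(B) * np.maximum(np.abs(B) - lam / L, 0.0)  # soft-thresholding
    return B

def train_selm_ae(X_S, n_h, lam, seed=0):
    """Two-stage SELM-AE: stage 1 uses frozen random orthogonal input weights,
    stage 2 reuses (W_H1)^T as frozen input weights and retrains output weights."""
    n_s, d = X_S.shape
    rng = np.random.default_rng(seed)
    # stage 1: d x n_h input weights with orthonormal rows (n_h > d), unit-norm bias
    q, _ = np.linalg.qr(rng.standard_normal((n_h, d)))
    W1 = q.T                                      # W1 @ W1.T == I_d
    b = rng.standard_normal(n_h)
    b /= np.linalg.norm(b)                        # b^T b == 1
    H1 = relu(X_S @ W1 + b)
    B1 = ista_l1(H1, X_S, lam)                    # stage-1 output weights W_H1, n_h x d
    # stage 2: frozen input weights W_I2 = (W_H1)^T, retrain the output weights
    W2 = B1.T                                     # d x n_h
    H2 = relu(X_S @ W2 + b)
    B2 = ista_l1(H2, X_S, lam)                    # stage-2 output weights W_H2, n_h x d
    return {"W": W2, "b": b, "beta": B2}          # assumed layout of M_A
```

Passing any sample matrix through `relu(X @ M["W"] + M["b"]) @ M["beta"]` then yields its reconstruction in the source-domain space.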

Step 2) use X_S and Y_S to train the sparse extreme learning machine classifier M_C;

Further, the step 2) comprises the following steps:

step 21) the training model of the sparse extreme learning machine classifier is shown in FIG. 2; input the source domain sample features X_S ∈ R^(n_s×d), the source domain sample labels Y_S ∈ R^(n_s×c) and the number of distinct label classes c in the samples;

Step 22) bring the input weight matrix of the sparse extreme learning machine self-encoder into the construction process of the sparse extreme learning machine classifier. The input layer has d nodes, the hidden layer n_h nodes and the output layer c nodes, where n_h is consistent with the sparse extreme learning machine self-encoder and n_h > c; the output target is set to the source domain sample labels Y_S. The input weight matrix is denoted W_I and the output weight matrix W_H, where W_I = W_I^(2) and W_H ∈ R^(n_h×c). The hidden layer output of the network is:

h(X_S) = g(W_I X_S + b)

Since the output of the SELM-C is a sample label, unlike the output of the SELM-AE, which approximates the features of the input samples, the number of output layer nodes is set to the number of sample classes c, the activation function s(·) is chosen to be the softmax function, and the output of the classifier is expressed as:

O = s(h(X_S) W_H)

At this point the sparse extreme learning machine classifier model is constructed and is expressed as M_C = {W_I, W_H}, where W_H ∈ R^(n_h×c) and the hidden layer output is denoted h_C(X_S) = h(X_S).
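A minimal sketch of the classifier construction, under stated assumptions: the input weights and bias are frozen (taken from the trained self-encoder in the method; passed in as arguments here), and since the patent does not spell out the output-weight solver, a ridge-regularized least-squares fit to one-hot labels, a standard ELM choice, stands in for it; the l1 solver used for the self-encoder could equally be substituted:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_selm_c(X_S, Y_onehot, W_I, b, lam=1e-2):
    """SELM-C with frozen input weights W_I (from the trained SELM-AE).
    Output weights solved by ridge least squares (assumption, see lead-in)."""
    H = np.maximum(X_S @ W_I + b, 0.0)     # hidden output h(X_S), ReLU activation
    n_h = H.shape[1]
    W_H = np.linalg.solve(H.T @ H + lam * np.eye(n_h), H.T @ Y_onehot)
    return {"W_I": W_I, "b": b, "W_H": W_H}

def predict_proba(M_C, X):
    """O = s(h(X) W_H), with s the softmax, matching the classifier output formula."""
    H = np.maximum(X @ M_C["W_I"] + M_C["b"], 0.0)
    return softmax(H @ M_C["W_H"])
```

Each row of `predict_proba` sums to 1, and the arg-max over the c columns gives the predicted gas class.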

Step 3) completing correct classification of the target domain samples through feature learning of a self-encoder of the sparse extreme learning machine and classification prediction of a classifier of the sparse extreme learning machine, and further realizing drift compensation of the electronic nose;

further, the step 3) comprises the following steps:

step 31) the classification prediction process is shown in FIG. 3; the target domain sample features X_T are input into M_A to obtain the representation of the target domain samples in the source domain space:

X̃_T = g(W_I^(2) X_T + b) W_H^(2)

where X̃_T ∈ R^(n_t×d) and n_t is the number of target domain samples;

step 32) use X̃_T to perform feature enhancement on the target domain samples X_T, obtaining the enhanced samples Z:

Z = X_T + X̃_T

step 33) complete the classification prediction of the sample data Z through M_C and output the predicted labels of the target domain samples:

Ŷ_T = s(h_C(Z) W_H)

i.e. the gas class prediction labels corresponding to the drifted gas sample features X_T.
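Steps 31 to 33 can be sketched end to end as follows; the weight-dict layouts for M_A and M_C are assumptions of this illustration, not notation from the patent:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def compensate_and_classify(X_T, M_A, M_C):
    """Blind-domain drift compensation pipeline, steps 31-33.
    M_A = {"W", "b", "beta"} and M_C = {"W_I", "b", "W_H"} are dicts of
    trained weights (assumed layout for this sketch)."""
    # step 31: representation of the drifted samples in the source-domain space
    X_tilde = relu(X_T @ M_A["W"] + M_A["b"]) @ M_A["beta"]   # n_t x d
    # step 32: feature enhancement Z = X_T + X_tilde
    Z = X_T + X_tilde
    # step 33: classify the enhanced samples with the SELM classifier
    H = relu(Z @ M_C["W_I"] + M_C["b"])
    return np.argmax(H @ M_C["W_H"], axis=1)   # predicted gas class per sample
```

Because the enhancement is a plain element-wise addition, it requires the autoencoder output dimension to equal the feature dimension d, which is exactly why the SELM-AE output layer has d nodes.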

Finally, it is noted that the above-mentioned embodiments illustrate rather than limit the invention, and that, although the invention has been described in detail with reference to the above-mentioned embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention as defined by the appended claims.
