Feature extraction method and device based on stacked self-encoder and terminal equipment

Document serial number: 1719990    Publication date: 2019-12-17

Reading note: This technology, "Feature extraction method and device based on stacked self-encoder and terminal equipment" (基于堆叠自编码器的特征提取方法、装置及终端设备), was designed and created by *** 王莎, 孙晓云, 狄卫国, 金安 and 杨小帆 on 2019-09-12. Its main content is as follows: the invention provides a feature extraction method, device and terminal device based on a stacked self-encoder. The method is applied to the technical field of feature extraction and comprises: setting the current number of self-encoders in the stacked self-encoder to k = 1; determining the structural parameters of the k-self-encoder based on an improved mixed frog-leaping algorithm; determining the reconstruction error of the k-self-encoder based on its structural parameters; if the reconstruction error of the k-self-encoder is greater than a preset threshold, adding one self-encoder to the stacked self-encoder, letting k = k + 1, and returning to the step of determining the structural parameters of the k-self-encoder based on the improved mixed frog-leaping algorithm; and if the reconstruction error of the k-self-encoder is not greater than the preset threshold, determining that training of the stacked self-encoder is finished and extracting features from data based on the trained stacked self-encoder. The feature extraction method, device and terminal device based on the stacked self-encoder can improve both the speed and the accuracy of feature extraction.

1. A method for extracting features based on a stacked self-encoder, the method comprising:

Setting the number k of current self-encoders of the stacked self-encoders to be 1;

Determining structural parameters of a k-self encoder based on an improved mixed frog-leaping algorithm;

Determining a reconstruction error of the k-self encoder based on the structural parameters of the k-self encoder;

If the reconstruction error of the k-self-encoder is greater than a preset threshold, adding one self-encoder to the stacked self-encoder, letting k = k + 1, and returning to the step of determining the structural parameters of the k-self-encoder based on the improved mixed frog-leaping algorithm;

And if the reconstruction error of the k-self-encoder is not greater than the preset threshold, determining that training of the stacked self-encoder is finished, and extracting features from data based on the trained stacked self-encoder.

2. The method for extracting features based on the stacked self-encoder according to claim 1, wherein the step of determining the structural parameters of the k-self-encoder based on the improved mixed frog-leaping algorithm comprises the following steps:

S1: randomly generating a preset number of frogs, and initializing the position parameters of each frog;

S2: calculating the fitness value of each frog based on the position parameter of the frog, sequencing all the frogs according to the fitness values, and determining the global optimal frog and the group distribution result of all the frogs according to the sequencing result of the fitness values of all the frogs;

S3: sequencing the frogs in each group according to the fitness value, and updating the optimal frog and the worst frog in each group according to the fitness-value ranking of the frogs in each group and the global optimal frog;

S4: repeatedly executing step S3 until the number of executions of step S3 reaches a first preset number;

S5: repeatedly executing steps S2-S4 until the number of executions of step S2 reaches a second preset number;

S6: sequencing all frogs according to their fitness values, determining the global optimal frog according to the fitness-value ranking of all frogs, and taking the position parameters of the global optimal frog as the structural parameters of the k-self-encoder.

3. The method for extracting features based on the stacked self-encoder as claimed in claim 2, wherein the method for determining the global optimal frog according to the fitness-value ranking of all frogs is: determining the frog with the lowest fitness value as the global optimal frog.

4. The method of claim 2, wherein determining the population distribution result of all frogs according to the fitness-value ranking of all frogs comprises:

If the preset number of all frogs is P and the preset number of populations is m, the frog whose fitness value is ranked at position n·m + i is added to the i-th population;

where n, m, i and P/m are integers, 1 ≤ i ≤ m, and 0 ≤ n ≤ P/m − 1.

5. The method of claim 2, wherein the optimal frog within the population is updated by:

A1: order z (i)b,new=z(i)b,old+rand(0,1)[zg-z(i)b,old+z(i)c-z(i)d];

A2: if L (z (i)b,new)>L(z(i)b,old) Then order z (i)b,new=z(i)b,old

Wherein, z (i)b,newFor the most optimal frog in the updated group i, z (i)b,oldFor the best frogs in the ith group before update, zgfor global optimal frog, z (i)cIs a frog of the second or third highest in the ith group, z (i)dIs a frog of the fourth or fifth excellence in the i-th group, L (z (i)b,new) To update the fitness value of the optimal frog in the ith group, L (z (i))b,old) The fitness value of the optimal frog in the ith group before updating is obtained.

6. The method of claim 2, wherein the worst frog in the population is updated by:

B1: order z (i)w,new=z(i)w,old+rand(0,1)[z(i)b,new-z(i)w,old+z(i)c-z(i)d];

B2: if L (z (i)w,new)>L(z(i)w,old) Then order z (i)w,new=z(i)w,old+rand(0,1)[zg-z(i)w,old];

B3: if L (z (i)w,new)>L(z(i)w,old) Then, for z (i)w,newRandomly assigning values;

B4: if L (z (i)w,new)>L(z(i)w,old) Then order z (i)w,new=z(i)w,old

Wherein, z (i)w,newThe worst frog in the updated group i, z (i)w,oldThe worst frog in group i before update, z (i)cIs a frog of the second or third highest in the ith group, z (i)dIs a frog of the fourth or fifth excellence in the i-th group, L (z (i)w,new) Is the updated fitness value of the worst frog in group i, L (z (i)w,old) The fitness value of the worst frog in the ith group before updating.

7. The feature extraction method based on the stacked self-encoder of any one of claims 2-6, wherein the fitness value of a frog is determined by:

L(z(j)) = (1/s) · Σ_{n=1}^{s} (x^(n) − x̂^(n))²,

where x^(n) is the input of the k-self-encoder, x̂^(n) is the output of the k-self-encoder, z(j) is the position parameter of the j-th frog, and L(z(j)) is the fitness value of the j-th frog when the position parameter z(j) is used as the structural parameters of the k-self-encoder; s is the dimension of the input x^(n) and of the output x̂^(n);

where z(j) = {w1(j)_{s×h}, b1(j)_h, w2(j)_{h×s}, b2(j)_s}; w1(j)_{s×h} and b1(j)_h are the encoder parameters of the k-self-encoder, w2(j)_{h×s} and b2(j)_s are the decoder parameters of the k-self-encoder, and s and h are the numbers of nodes in the input layer and the hidden layer of the k-self-encoder, respectively.

8. A feature extraction device based on a stacked self-encoder, comprising:

The counting module is used for setting the number k of the current self-encoders of the stacked self-encoders to be 1;

The parameter optimization module is used for determining the structural parameters of the k-self encoder based on the improved mixed frog-leaping algorithm;

the error determination module is used for determining the reconstruction error of the k-self encoder based on the structural parameters of the k-self encoder;

The loop module is used for adding one self-encoder to the stacked self-encoder if the reconstruction error of the k-self-encoder is greater than a preset threshold, letting k = k + 1, and returning to the step of determining the structural parameters of the k-self-encoder based on the improved mixed frog-leaping algorithm;

And the feature extraction module is used for determining that training of the stacked self-encoder is finished if the reconstruction error of the k-self-encoder is not greater than the preset threshold, and for extracting features from data based on the trained stacked self-encoder.

9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to any one of claims 1 to 7 when executing the computer program.

10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.

Technical Field

The invention belongs to the technical field of feature extraction, and particularly relates to a feature extraction method and device based on a stacked self-encoder and to terminal equipment.

Background

The stacked self-encoder is a typical architecture of deep learning. Through layer-by-layer greedy learning it can obtain, via dimensionality reduction, a series of simple high-order features that express the input data well, and it therefore has obvious advantages in data feature extraction.

In the existing feature extraction process, when a stacked self-encoder is trained, the number of self-encoders in the stack is difficult to determine, and the structural parameters of each self-encoder are initialized randomly, so the stacked self-encoder converges slowly and with low accuracy during training. Slow convergence leads to a larger network depth for the trained stacked self-encoder, which lowers the speed of feature extraction, while low convergence accuracy lowers the accuracy of feature extraction.

Disclosure of Invention

The invention aims to provide a method and a device for extracting features based on a stacked self-encoder, and a terminal device, so as to improve the speed and the accuracy of feature extraction.

In a first aspect of the embodiments of the present invention, a method for feature extraction based on a stacked self-encoder is provided, including:

setting the number k of current self-encoders of the stacked self-encoders to be 1;

Determining structural parameters of a k-self encoder based on an improved mixed frog-leaping algorithm;

Determining a reconstruction error of the k-self encoder based on the structural parameters of the k-self encoder;

If the reconstruction error of the k-self-encoder is greater than a preset threshold, adding one self-encoder to the stacked self-encoder, letting k = k + 1, and returning to the step of determining the structural parameters of the k-self-encoder based on the improved mixed frog-leaping algorithm;

And if the reconstruction error of the k-self-encoder is not greater than the preset threshold, determining that training of the stacked self-encoder is finished, and extracting features from data based on the trained stacked self-encoder.

in a second aspect of the embodiments of the present invention, there is provided a feature extraction device based on a stacked self-encoder, including:

The counting module is used for setting the number k of the current self-encoders of the stacked self-encoders to be 1;

the parameter optimization module is used for determining the structural parameters of the k-self encoder based on the improved mixed frog-leaping algorithm;

The error determination module is used for determining the reconstruction error of the k-self encoder based on the structural parameters of the k-self encoder;

The loop module is used for adding one self-encoder to the stacked self-encoder if the reconstruction error of the k-self-encoder is greater than a preset threshold, letting k = k + 1, and returning to the step of determining the structural parameters of the k-self-encoder based on the improved mixed frog-leaping algorithm;

And the feature extraction module is used for determining that training of the stacked self-encoder is finished if the reconstruction error of the k-self-encoder is not greater than the preset threshold, and for extracting features from data based on the trained stacked self-encoder.

In a third aspect of the embodiments of the present invention, there is provided a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the above-mentioned feature extraction method based on a stacked self-encoder when executing the computer program.

In a fourth aspect of the embodiments of the present invention, a computer-readable storage medium is provided, which stores a computer program that, when executed by a processor, implements the steps of the above-mentioned feature extraction method based on a stacked self-encoder.

The feature extraction method, device and terminal equipment based on the stacked self-encoder have the following advantages: on one hand, the depth of the stacked self-encoder is determined from the reconstruction error, which effectively solves the prior-art problem that the number of self-encoders in the stacked self-encoder cannot be determined; on the other hand, unlike the random initialization of structural parameters in the prior art, the embodiments of the invention determine the structural parameters of each self-encoder with the improved mixed frog-leaping algorithm, which can effectively reduce the number of training iterations of each self-encoder and improve the training accuracy of its structural parameters, thereby improving the convergence speed and accuracy of the whole stacked self-encoder network and, in turn, the speed and accuracy of feature extraction.

drawings

In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the embodiments or in the description of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.

Fig. 1 is a schematic flowchart of a feature extraction method based on a stacked self-encoder according to an embodiment of the present invention;

FIG. 2 is a schematic flow chart of a method for extracting features based on a stacked self-encoder according to another embodiment of the present invention;

FIG. 3 is a schematic structural diagram of a feature extraction apparatus based on a stacked self-encoder according to an embodiment of the present invention;

Fig. 4 is a schematic block diagram of a terminal device according to an embodiment of the present invention.

Detailed Description

In order to make the technical problems to be solved by the present invention, its technical solutions and its advantageous effects more clearly apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.

Referring to fig. 1, fig. 1 is a flowchart illustrating a feature extraction method based on a stacked self-encoder according to an embodiment of the present invention. The method comprises the following steps:

s101: the current number k of self-encoders of the stacked self-encoder is set to 1.

In this embodiment, k self-encoders (the k-th of which is denoted the k-self-encoder) may first be set. If the reconstruction error of the k-self-encoder is greater than the preset threshold, one self-encoder is added on the basis of the k-self-encoder and the value of k is updated (k = k + 1), until the reconstruction error of the k-self-encoder is less than or equal to the preset threshold.
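The growth loop described here (S101 together with steps S102-S105 below) can be sketched as follows. This is a minimal illustration rather than the patented implementation: `optimize_parameters`, `reconstruction_error` and `encode` are hypothetical stand-ins for the parameter search of S102, the error computation of S103 and the stacking step of S104.

```python
# Sketch of the outer training loop (S101-S105): grow the stack one
# self-encoder at a time until the newest one reconstructs its input
# within the preset threshold. The three callables are stand-ins for
# steps S102 (parameter search), S103 (error) and the encoding pass.
def train_stacked_autoencoder(data, threshold, optimize_parameters,
                              reconstruction_error, encode):
    encoders = []          # trained self-encoders, bottom to top
    layer_input = data     # input of the current k-self-encoder
    while True:
        params = optimize_parameters(layer_input)           # S102
        error = reconstruction_error(params, layer_input)   # S103
        encoders.append(params)
        if error <= threshold:                              # S105: done
            return encoders
        layer_input = encode(params, layer_input)           # S104: k = k + 1
```

Written this way, the depth of the stack is a result of training rather than a hyperparameter fixed in advance.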

S102: and determining the structural parameters of the k-self encoder based on the improved mixed frog-leaping algorithm.

In this embodiment, the structural parameters of the k-self-encoder are no longer generated randomly but are determined based on the improved mixed frog-leaping algorithm, which can effectively improve the convergence speed and accuracy of the k-self-encoder.

Compared with the prior art, which updates only the worst frog (so that the optimal frog has no update capability, which easily leads to a low frog update rate, low convergence accuracy and premature convergence of the frog population), the improved mixed frog-leaping algorithm updates the optimal frog and the worst frog at the same time.

S103: and determining the reconstruction error of the k-self encoder based on the structural parameters of the k-self encoder.

In this embodiment, determining the reconstruction error of the k-self-encoder based on the structural parameters of the k-self-encoder can be detailed as follows:

Under the current structural parameters of the k-self-encoder, training data are input into the k-self-encoder to obtain the corresponding output data, and the reconstruction error of the k-self-encoder under the current structural parameters is calculated from the training data and the corresponding output data.
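As a concrete (and hypothetical) reading of this step, the sketch below assumes a sigmoid activation and a mean-squared-error measure; the source text fixes neither choice.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Reconstruction error of one self-encoder under its current structural
# parameters: encode the training data, decode it back, and measure the
# mean squared deviation from the input (assumed error measure).
def reconstruction_error(w1, b1, w2, b2, x):
    hidden = sigmoid(x @ w1 + b1)        # encoder: s -> h nodes
    output = sigmoid(hidden @ w2 + b2)   # decoder: h -> s nodes
    return float(np.mean((x - output) ** 2))
```

This value is what gets compared against the preset threshold in S104/S105.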

s104: and if the reconstruction error of the k-self encoder is larger than a preset threshold value, adding a self encoder in the stacked self encoder, enabling k to be k +1, and returning to the step of determining the structural parameters of the k-self encoder based on the improved mixed leapfrogging algorithm.

In this embodiment, adding a self-encoder to the stacked self-encoder means: adding one self-encoder on the basis of the current k-self-encoder and taking the output of the current k-self-encoder as the input of the newly added self-encoder.

S105: and if the reconstruction error of the k-self encoder is not larger than the preset threshold value, determining that the training of the stacked self encoder is finished, and extracting the characteristics of the data based on the trained stacked self encoder.

In this embodiment, the data whose features are to be extracted can be directly input into the trained stacked self-encoder for feature extraction.
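Once every layer is trained, extraction uses only the encoder half of each self-encoder. A minimal sketch, assuming sigmoid activations and per-layer parameter tuples `(w1, b1, w2, b2)` (both assumptions, not fixed by the source):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Pass the data through the encoder half of every trained self-encoder,
# bottom to top; the top hidden-layer activations are the extracted features.
def extract_features(encoders, x):
    for w1, b1, _w2, _b2 in encoders:
        x = sigmoid(x @ w1 + b1)   # decoder halves are unused at this stage
    return x
```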

As can be seen from the above description, on one hand, the embodiments of the present invention determine the depth of the stacked self-encoder based on the reconstruction error, which effectively solves the prior-art problem that the number of self-encoders in the stacked self-encoder cannot be determined. On the other hand, unlike the random initialization of the structural parameters of each self-encoder in the prior art, the embodiments of the present invention determine the structural parameters with the improved mixed frog-leaping algorithm, which can effectively reduce the number of training iterations of each self-encoder and improve the training accuracy of its structural parameters, thereby improving the convergence speed and accuracy of the whole stacked self-encoder network and, in turn, the speed and accuracy of feature extraction.

Referring to fig. 1 and fig. 2 together, fig. 2 is a schematic flow chart illustrating a feature extraction method based on a stacked self-encoder according to another embodiment of the present application. On the basis of the above embodiment, step S102 can be detailed as follows:

S1: a preset number of frogs are randomly generated, and the position parameters of each frog are initialized.

S2: calculating the fitness value of each frog based on the position parameter of the frog, sequencing all the frogs according to the fitness values, and determining the global optimal frog and the swarm distribution result of all the frogs according to the sequencing result of the fitness values of all the frogs.

s3: and sequencing the frogs in each group according to the fitness value, and updating the optimal frogs and the worst frogs in each group according to the sequencing result of the fitness value of the frogs in each group and the global optimal frogs.

S4: the step S3 is repeatedly performed until the number of times of performing the step S3 reaches the first preset number of times.

S5: the steps S2-S4 are repeatedly executed until the number of execution of the step S2 reaches a second preset number.

S6: and sequencing all frogs according to the fitness values, determining the global optimal frogs according to the sequencing results of the fitness values of all frogs, and taking the position parameters of the global optimal frogs as the structural parameters of the k-self encoder.

In fig. 2, num1 is the execution count of step S3, num2 is the execution count of step S2 (or steps S2 to S4), T1 is the first preset count, and T2 is the second preset count.

In this embodiment, the improved mixed frog-leaping algorithm can be logically divided into two parts: local optimization and global optimization.

Local optimization mainly includes steps S3 and S4, which achieve local optimization by continuously updating (T1 times) the optimal frog and the worst frog within each group.

Global optimization mainly includes steps S2 to S5, which achieve global optimization by repeatedly regenerating the groups (T2 times) and locally optimizing each group.

In this embodiment, after the preset number P of all frogs is determined, the structural parameter determination problem of the k-self-encoder can be converted into an optimization problem for the improved mixed frog-leaping algorithm by defining the position parameter z(j) of each frog as the structural parameters of the k-self-encoder:

The specific definition is: z(j) = {w1(j)_{s×h}, b1(j)_h, w2(j)_{h×s}, b2(j)_s}, where w1(j)_{s×h} and b1(j)_h are the encoder parameters of the k-self-encoder, w2(j)_{h×s} and b2(j)_s are the decoder parameters of the k-self-encoder, and s and h are the numbers of nodes in the input layer and the hidden layer of the k-self-encoder, respectively.

Before step S1, a step of setting the attribute parameters of the improved mixed frog-leaping algorithm may also be included. The attribute parameters include, but are not limited to: the preset number P of frogs (i.e., the total number of frogs), the number m of frog groups, the first preset number T1, the second preset number T2, and so on.
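Steps S1-S6 amount to two nested loops around a ranking step, much like the classic shuffled frog leaping algorithm. The skeleton below is a hedged sketch, not the patented implementation: it assumes the fitness `L` is minimized, frogs are plain lists of floats, and the intra-group rules A1-A2 / B1-B4 are abstracted behind a caller-supplied `update_group`.

```python
import random

# Structural sketch of S1-S6 of the improved mixed frog-leaping algorithm.
def improved_sfla(init_frog, L, update_group, P, m, T1, T2):
    frogs = [init_frog() for _ in range(P)]          # S1: random frogs
    for _ in range(T2):                              # S5: global loop
        frogs.sort(key=L)                            # S2: rank by fitness
        groups = [frogs[i::m] for i in range(m)]     # rank n*m+i -> group i
        for _ in range(T1):                          # S4: local loop
            for g in groups:
                update_group(g, frogs[0])            # S3: best/worst updates
        frogs = [f for g in groups for f in g]       # merge groups back
    frogs.sort(key=L)                                # S6: final ranking
    return frogs[0]                                  # global optimal frog
```

In a fuller implementation the global optimal frog would be refreshed whenever a group update improves on it; for brevity this sketch keeps it fixed within each global iteration.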

Optionally, as a specific implementation of the feature extraction method based on the stacked self-encoder provided in the embodiment of the present invention, the method for determining the global optimal frog according to the fitness-value ranking of all frogs is: determining the frog with the lowest fitness value as the global optimal frog.

In this embodiment, the frog with the lowest fitness value is determined as the global optimal frog; that is, the position parameters of the frog with the lowest reconstruction error are taken as the structural parameters of the k-self-encoder. In this way, the structural parameter determination problem of the k-self-encoder is converted into an optimization problem for the improved mixed frog-leaping algorithm.

By updating the optimal frog and the worst frog in each group simultaneously during local optimization, the improved mixed frog-leaping algorithm can effectively improve its own convergence speed and accuracy, and thereby also improves the convergence speed and accuracy of k-self-encoder training.

Optionally, as a specific implementation manner of the feature extraction method based on the stacked self-encoder provided by the embodiment of the present invention, determining the population distribution result of all frogs according to the fitness value ranking result of all frogs includes:

If the preset number of all frogs is P and the preset number of populations is m, the frog whose fitness value is ranked at position n·m + i is added to the i-th population,

where n, m, i and P/m are integers, 1 ≤ i ≤ m, and 0 ≤ n ≤ P/m − 1.

In this embodiment, if the preset number of all frogs is 10, their fitness values are 1, 2, 3, 4, 5, 6, 7, 8, 9 and 10, and the preset number of populations is 5, then the result of assigning all frogs to groups is:

Group 1: {1, 6}; Group 2: {2, 7}; Group 3: {3, 8}; Group 4: {4, 9}; Group 5: {5, 10}.
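In 0-indexed terms, the distribution rule is simply a stride-m slice of the ranked list. The sketch below (with a hypothetical helper name `partition`) reproduces the example above:

```python
# Claim 4's distribution rule: after ranking, the frog at 1-indexed
# position n*m + i joins the i-th of m groups; with 0-indexing this is
# a stride-m slice of the ranked list.
def partition(ranked_frogs, m):
    return [ranked_frogs[i::m] for i in range(m)]

groups = partition([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 5)
# groups == [[1, 6], [2, 7], [3, 8], [4, 9], [5, 10]]
```

This interleaved split guarantees that every group receives frogs from the whole fitness range rather than a contiguous block of similar frogs.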

Optionally, as a specific implementation manner of the feature extraction method based on the stacked self-encoder provided in the embodiment of the present invention, the method for updating the optimal frogs in the swarm includes:

A1: order z (i)b,new=z(i)b,old+rand(0,1)[zg-z(i)b,old+z(i)c-z(i)d]。

A2: if L (z (i)b,new)>L(z(i)b,old) Then order z (i)b,new=z(i)b,old

Wherein, z (i)b,newFor the most optimal frog in the updated group i, z (i)b,oldfor the best frogs in the ith group before update, zgFor global optimal frog, z (i)cIs a frog of the second or third highest in the ith group, z (i)dIs a frog of the fourth or fifth excellence in the i-th group, L (z (i)b,new) To update the fitness value of the optimal frog in the ith group, L (z (i))b,old) The fitness value of the optimal frog in the ith group before updating is obtained.

In this embodiment, updating the optimal frog in a group means updating the position information of the optimal frog in the group, where the optimal frog in a group is the frog with the minimum fitness value in the group.
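Rules A1-A2 amount to a propose-and-revert move for the group's best frog. A minimal sketch for a minimized fitness `L`, with the vector arithmetic written out over plain lists (all names are illustrative):

```python
import random

# A1: propose a move toward the global optimal frog z_g, perturbed by the
# difference of two well-ranked frogs z_c and z_d; A2: revert if worse.
def update_best(z_b, z_g, z_c, z_d, L):
    r = random.random()                                    # rand(0,1)
    candidate = [b + r * (g - b + c - d)
                 for b, g, c, d in zip(z_b, z_g, z_c, z_d)]
    return candidate if L(candidate) <= L(z_b) else list(z_b)
```

Because the move is only kept when fitness does not worsen, the group's best fitness is monotonically non-increasing under this rule.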

Optionally, as a specific implementation manner of the feature extraction method based on the stacked self-encoder provided in the embodiment of the present invention, the method for updating the worst frogs in the swarm includes:

b1: order z (i)w,new=z(i)w,old+rand(0,1)[z(i)b,new-z(i)w,old+z(i)c-z(i)d]。

B2: if L (z (i)w,new)>L(z(i)w,old) Then order z (i)w,new=z(i)w,old+rand(0,1)[zg-z(i)w,old]。

B3: if L (z (i)w,new)>L(z(i)w,old) Then, for z (i)w,newand (6) randomly assigning values.

B4: if L (z (i))w,new)>L(z(i)w,old) Then order z (i)w,new=z(i)w,old

Wherein, z (i)w,newThe worst frog in the updated group i, z (i)w,oldThe worst frog in group i before update, z (i)cIs a frog of the second or third highest in the ith group, z (i)dIs a frog of the fourth or fifth excellence in the i-th group, L (z (i)w,new) Is the updated fitness value of the worst frog in group i, L (z (i)w,old) The fitness value of the worst frog in the ith group before updating.

In this embodiment, the worst frogs in the group are updated, that is, the position information of the worst frogs in the group is updated. Wherein, the worst frog in the group refers to the frog with the maximum fitness value in the group.
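Rules B1-B4 give the worst frog three progressively more drastic fallbacks before it keeps its old position. A hedged sketch under the same conventions as before (minimized fitness, list-valued frogs; `random_frog` is a hypothetical callable that regenerates a random position for B3):

```python
import random

def update_worst(z_w, z_b_new, z_g, z_c, z_d, L, random_frog):
    old = list(z_w)
    r = random.random()
    cand = [w + r * (b - w + c - d)                 # B1: toward group best
            for w, b, c, d in zip(old, z_b_new, z_c, z_d)]
    if L(cand) > L(old):                            # B2: toward global best
        r = random.random()
        cand = [w + r * (g - w) for w, g in zip(old, z_g)]
    if L(cand) > L(old):                            # B3: random reassignment
        cand = random_frog()
    if L(cand) > L(old):                            # B4: keep old position
        cand = old
    return cand
```

The B4 fallback guarantees the returned frog is never worse than the old one, so the group's worst fitness also never increases.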

Optionally, as a specific implementation manner of the feature extraction method based on the stacked self-encoder provided in the embodiment of the present invention, the method for determining the fitness value of the frog is:

L(z(j)) = (1/s) · Σ_{n=1}^{s} (x^(n) − x̂^(n))²,

where x^(n) is the input of the k-self-encoder, x̂^(n) is the output of the k-self-encoder, z(j) is the position parameter of the j-th frog, and L(z(j)) is the fitness value of the j-th frog when the position parameter z(j) is used as the structural parameters of the k-self-encoder; s is the dimension of the input x^(n) and of the output x̂^(n).

Here z(j) = {w1(j)_{s×h}, b1(j)_h, w2(j)_{h×s}, b2(j)_s}, where w1(j)_{s×h} and b1(j)_h are the encoder parameters of the k-self-encoder, w2(j)_{h×s} and b2(j)_s are the decoder parameters of the k-self-encoder, and s and h are the numbers of nodes in the input layer and the hidden layer of the k-self-encoder, respectively.
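Putting the definition together: a frog's position vector packs all four parameter blocks of the k-self-encoder, and its fitness is the reconstruction error obtained with those parameters. The sketch below assumes a flat NumPy vector layout, sigmoid activations and a mean-squared-error fitness; the exact fitness formula is an image in the source, so this reconstruction is an assumption.

```python
import numpy as np

# Fitness of frog j: unpack z(j) = {w1 (s x h), b1 (h), w2 (h x s), b2 (s)}
# from a flat vector, run the k-self-encoder, and return the mean squared
# reconstruction error over the training data x (shape: samples x s).
def fitness(z, x, s, h):
    i = 0
    w1 = z[i:i + s * h].reshape(s, h); i += s * h
    b1 = z[i:i + h];                   i += h
    w2 = z[i:i + h * s].reshape(h, s); i += h * s
    b2 = z[i:i + s]
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    x_hat = sigmoid(sigmoid(x @ w1 + b1) @ w2 + b2)   # encode, then decode
    return float(np.mean((x - x_hat) ** 2))
```

Flattening the parameters into one vector is what lets a standard frog position double as a complete set of self-encoder structural parameters.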

Corresponding to the above embodiments of the feature extraction method based on a stacked self-encoder, fig. 3 is a block diagram of a feature extraction device based on a stacked self-encoder according to an embodiment of the present invention. For convenience of explanation, only portions related to the embodiments of the present invention are shown. Referring to fig. 3, the apparatus includes: a counting module 310, a parameter optimization module 320, an error determination module 330, a loop module 340 and a feature extraction module 350.

The counting module 310 is configured to set the current number k of the self-encoders of the stacked self-encoders to 1.

And the parameter optimization module 320 is used for determining the structural parameters of the k-self encoder based on the improved mixed frog leap algorithm.

and an error determination module 330, configured to determine a reconstruction error of the k-autoencoder based on the structural parameters of the k-autoencoder.

The loop module 340 is configured to add one self-encoder to the stacked self-encoder if the reconstruction error of the k-self-encoder is greater than a preset threshold, let k = k + 1, and return to the step of determining the structural parameters of the k-self-encoder based on the improved mixed frog-leaping algorithm.

And a feature extraction module 350, configured to determine that the training of the stacked self-encoder is completed if the reconstruction error of the k-self-encoder is not greater than a preset threshold, and perform feature extraction on the data based on the trained stacked self-encoder.

Optionally, as a specific implementation of the feature extraction device based on a stacked self-encoder provided in the embodiment of the present invention, the method for determining the global optimal frog according to the fitness-value ranking of all frogs is: determining the frog with the lowest fitness value as the global optimal frog.

Optionally, as a specific implementation of the feature extraction device based on a stacked self-encoder provided in the embodiment of the present invention, determining the population distribution result of all frogs according to the fitness-value ranking of all frogs includes:

If the preset number of all frogs is P and the preset number of populations is m, the frog whose fitness value is ranked at position n·m + i is added to the i-th population,

where n, m, i and P/m are integers, 1 ≤ i ≤ m, and 0 ≤ n ≤ P/m − 1.

Optionally, as a specific implementation manner of the feature extraction apparatus based on the stacked self-encoder provided in the embodiment of the present invention, the method for updating the optimal frogs in the swarm includes:

A1: order z (i)b,new=z(i)b,old+rand(0,1)[zg-z(i)b,old+z(i)c-z(i)d]。

A2: if L (z (i)b,new)>L(z(i)b,old) Then order z (i)b,new=z(i)b,old

Wherein, z (i)b,newFor the most optimal frog in the updated group i, z (i)b,oldfor the best frogs in the ith group before update, zgFor global optimal frog, z (i)cis a frog of the second or third highest in the ith group, z (i)dIs a frog of the fourth or fifth excellence in the i-th group, L (z (i)b,new) To update the fitness value of the optimal frog in the ith group, L (z (i))b,old) The fitness value of the optimal frog in the ith group before updating is obtained.

Optionally, as a specific implementation of the feature extraction device based on a stacked self-encoder provided in the embodiment of the present invention, the method for updating the worst frog in a group includes:

B1: order z (i)w,new=z(i)w,old+rand(0,1)[z(i)b,new-z(i)w,old+z(i)c-z(i)d]。

b2: if L (z (i)w,new)>L(z(i)w,old) Then order z (i)w,new=z(i)w,old+rand(0,1)[zg-z(i)w,old]。

B3: if L (z (i)w,new)>L(z(i)w,old) Then, for z (i)w,newand (6) randomly assigning values.

B4: if L (z (i)w,new)>L(z(i)w,old) Then order z (i)w,new=z(i)w,old

Wherein, z (i)w,newThe worst frog in the updated group i, z (i)w,oldthe worst frog in group i before update, z (i)cIs a frog of the second or third highest in the ith group, z (i)dIs a frog of the fourth or fifth excellence in the i-th group, L (z (i)w,new) Is the updated fitness value of the worst frog in group i, L (z (i)w,old) The fitness value of the worst frog in the ith group before updating.
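The four-stage cascade B1 to B4 can be sketched as follows; the search bounds used for the random reassignment in B3 are an assumption, since the text only says the frog is assigned a random value:

```python
import numpy as np

def update_population_worst(z_w_old, z_b_new, z_g, z_c, z_d, fitness_fn,
                            low=-1.0, high=1.0, rng=None):
    """Steps B1-B4: four-stage update of a population's worst frog.

    Each fallback fires only if the previous attempt failed to improve
    the fitness L (lower is better); `low`/`high` bound the random
    reassignment in B3 and are illustrative assumptions.
    """
    rng = rng or np.random.default_rng()
    old_fit = fitness_fn(z_w_old)
    # B1: move towards the population best, perturbed by z_c - z_d
    z = z_w_old + rng.uniform(0.0, 1.0) * (z_b_new - z_w_old + z_c - z_d)
    if fitness_fn(z) > old_fit:
        # B2: no improvement -> move towards the global best instead
        z = z_w_old + rng.uniform(0.0, 1.0) * (z_g - z_w_old)
        if fitness_fn(z) > old_fit:
            # B3: still no improvement -> random reassignment
            z = rng.uniform(low, high, size=np.shape(z_w_old))
            if fitness_fn(z) > old_fit:
                # B4: the random frog is worse too -> keep the old one
                z = z_w_old
    return z
```

The cascade guarantees the worst frog never gets worse: each stage is only accepted if it improves on the old fitness, and B4 restores the original position otherwise.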

Optionally, as a specific implementation of the stacked self-encoder-based feature extraction apparatus provided in the embodiment of the present invention, the method for determining the fitness value of a frog includes:

L(z(j)) = (1/N) Σ_{n=1}^{N} ||x^{(n)} − x̂^{(n)}||²,

where x^{(n)} is the input to the k-self-encoder, x̂^{(n)} is the output of the k-self-encoder, N is the number of input samples, z(j) is the position parameter of the j-th frog, and L(z(j)) is the fitness value of the j-th frog when the position parameter z(j) is used as the structure parameters of the k-self-encoder, i.e. the reconstruction error between the input x^{(n)} and the output x̂^{(n)};

where z(j) = {w1(j)_{s×h}, b1(j)_h, w2(j)_{h×s}, b2(j)_s}, w1(j)_{s×h} and b1(j)_h are the encoder parameters of the k-self-encoder, w2(j)_{h×s} and b2(j)_s are the decoder parameters of the k-self-encoder, and s and h are the numbers of nodes in the input layer and the hidden layer of the k-self-encoder, respectively.
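A minimal sketch of evaluating a frog's fitness as the autoencoder's reconstruction error, unpacking the parameters from the frog position exactly in the order {w1, b1, w2, b2} given above; the sigmoid activation is an assumption, since the patent does not name one:

```python
import numpy as np

def frog_fitness(z, X, s, h):
    """Fitness of frog position z: mean squared reconstruction error of a
    one-hidden-layer autoencoder whose parameters {W1, b1, W2, b2} are
    unpacked from z. X is an (N, s) matrix of input samples; s and h are
    the input- and hidden-layer node counts.
    """
    i = 0
    W1 = z[i:i + s * h].reshape(s, h); i += s * h   # encoder weights
    b1 = z[i:i + h];                   i += h       # encoder bias
    W2 = z[i:i + h * s].reshape(h, s); i += h * s   # decoder weights
    b2 = z[i:i + s]                                 # decoder bias

    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    H = sigmoid(X @ W1 + b1)        # hidden representation
    X_hat = sigmoid(H @ W2 + b2)    # reconstruction of the input
    return float(np.mean(np.sum((X - X_hat) ** 2, axis=1)))
```

Because a lower reconstruction error means a better frog, this fitness is consistent with the earlier rule that the frog with the lowest fitness value is the global optimal frog.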

Referring to fig. 4, fig. 4 is a schematic block diagram of a terminal device according to an embodiment of the present invention. The terminal 400 shown in fig. 4 may include: one or more processors 401, one or more input devices 402, one or more output devices 403, and one or more memories 404. The processor 401, the input device 402, the output device 403 and the memory 404 communicate with each other via a communication bus 405. The memory 404 is used to store a computer program comprising program instructions, and the processor 401 is used to execute the program instructions stored in the memory 404. Specifically, the processor 401 is configured to invoke the program instructions to perform the functions of the modules/units in the above-described apparatus embodiments, such as the functions of the modules 310 to 350 shown in fig. 3.

It should be understood that, in the embodiment of the present invention, the processor 401 may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.

The input device 402 may include a touch pad, a fingerprint sensor (for collecting fingerprint information of a user and direction information of the fingerprint), a microphone, and the like; the output device 403 may include a display (such as an LCD), a speaker, and the like.

The memory 404 may include a read-only memory and a random access memory, and provides instructions and data to the processor 401. A portion of the memory 404 may also include a non-volatile random access memory. For example, the memory 404 may also store device type information.

In a specific implementation, the processor 401, the input device 402 and the output device 403 described in this embodiment of the present invention may execute the implementations described in the first and second embodiments of the feature extraction method based on a stacked self-encoder provided in the embodiments of the present invention, and may also execute the implementation of the terminal described in this embodiment, which is not described herein again.

In another embodiment of the present invention, a computer-readable storage medium is provided. The computer-readable storage medium stores a computer program comprising program instructions, and the program instructions, when executed by a processor, implement all or part of the processes in the methods of the above embodiments. The processes may also be implemented by a computer program instructing associated hardware; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, may implement the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in the jurisdiction; for example, in some jurisdictions, computer-readable media may not include electrical carrier signals and telecommunications signals.

The computer-readable storage medium may be an internal storage unit of the terminal of any of the foregoing embodiments, for example, a hard disk or a memory of the terminal. The computer-readable storage medium may also be an external storage device of the terminal, such as a plug-in hard disk provided on the terminal, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of the terminal. The computer-readable storage medium is used for storing the computer program and other programs and data required by the terminal, and may also be used to temporarily store data that has been output or is to be output.

Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of both. To illustrate clearly the interchangeability of hardware and software, the components and steps of the examples have been described above in general terms of their functions. Whether these functions are implemented as hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementations should not be considered beyond the scope of the present invention.

It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the terminal and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.

In the several embodiments provided in the present application, it should be understood that the disclosed terminal and method can be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.

Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.

In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.

While the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
