Identity recognition method based on blockchain mutual authentication, biological multi-feature recognition and multi-source data fusion


Reading note: this technique, "Identity recognition method based on blockchain mutual authentication, biological multi-feature recognition and multi-source data fusion", was designed and created by 朱全银, 马天龙, 高尚兵, 徐莹莹, 马思伟, 朱燕妮, 王媛媛, 周泓, 冯远航, 章磊 and 魏丹 on 2021-06-24. Abstract: The invention discloses an identity recognition method based on blockchain mutual authentication, biological multi-feature recognition and multi-source data fusion, applicable to general identity recognition and to check-in problems based on blockchain mutual authentication. This ANP-based data fusion method extracts features with a convolutional neural network, classifies them with traditional machine-learning algorithms, and finally verifies and fuses the data through blockchain mutual authentication. It first receives the photos and voice information a user submits for recognition, then calls a target detection algorithm to recognize the face information in the pictures, then calls a voiceprint recognition algorithm for voiceprint recognition, then performs a secondary verification of the recognition results through network-picture mutual authentication, and finally fuses the recognition results and stores them in the check-in system. The invention can effectively identify biological features, accurately perform secondary verification through mutual authentication, fuse the verification data, and accurately record check-ins.

1. An identity recognition method based on blockchain, biological multi-feature recognition and multi-source data fusion is characterized by comprising the following steps:

(1) setting the acquired initial image data set of wireless network lists as W and the text data set converted from the network pictures as WT, clustering and marking the network sample points generated in the text data set WT with the CURE algorithm, and calculating a vote number from the outliers and cluster points, denoted SC1;

(2) setting the acquired initial face image data set as Fa, performing feature value recognition between the submitted picture and the initial face image data set Fa, and obtaining a similarity score, denoted SC2;

(3) inputting a speech signal data set S3, performing pre-emphasis and framing on the speech signals in S3 to obtain the MFCC feature parameters, and obtaining a voiceprint similarity score through a GMM Gaussian mixture model, denoted SC3;

(4) taking the computed vote number SC1, similarity score SC2 and voiceprint similarity score SC3 as input, establishing a comparison matrix for pairwise comparison of the feature scores, and performing fusion calculation on SC1, SC2 and SC3 with the AHP method to obtain the AHP weight, denoted N;

(5) fusing the check-in data according to the weight, establishing a data table, encrypting the fusion result and storing it as the final check-in result, outputting the final check-in result through the web page end, and generating different check-in tables from the fused check-in result and the per-feature check-in results for the user to download.

2. The identity recognition method based on the blockchain, biological multi-feature recognition and multi-source data fusion according to claim 1, wherein the step (1) specifically comprises the following steps:

(1.1) inputting an initial image data set W of wireless network lists, defining the set X as the pictures uploaded by an object to be identified, and defining the function len(X) to represent the length of a set X, where W = {W1, W2, …, WM}, in which WM represents the M-th image in W, M ∈ [1, len(W)];

(1.2) defining a loop variable i1 for traversing W, i1 ∈ [1, len (W) ], i1 having an initial value of 1;

(1.3) if i1 ≦ len (W), entering step (1.4), otherwise, entering step (1.10);

(1.4) denoising Wi1 to obtain the denoised image Deno_Wi1;

(1.5) performing image enhancement on the denoised image Deno_Wi1 to obtain the enhanced image Enhance_Wi1;

(1.6) scaling the enhanced image Enhance_Wi1 to obtain the scaled image zom_Wi1;

(1.7) performing feature extraction on the scaled image zom_Wi1 to obtain the feature image sha_Wi1;

(1.8) performing character recognition on the feature image sha_Wi1 with a character classifier, extracting the text, and putting the obtained text information into WT;

(1.9) i1 = i1 + 1, go to step (1.3);

(1.10) finishing the extraction of the WIFI information characters;

(1.11) defining a loop variable Bt with initial value Bt = 0, and defining the maximum number of loops Bn as the number of users who have currently sent pictures;

(1.12) defining a hash table FS to record the votes and information of the objects to be identified, where the key SF represents the picture information submitted by an object to be identified; defining another table Cm to represent the number of votes obtained by an object to be identified; the value of the FS table is the hash table Cm, the key of Cm is the name of the object to be identified corresponding to the currently sent picture, and the value is the number of votes that object has obtained;

(1.13) checking whether the SF corresponding to Bt already exists in FS; if not, going to step (1.14);

(1.14) taking FS as the parent table, creating a new hash table Cmi and adding it to the parent table FS;

(1.15) checking whether the vote corresponding to Bt already exists in Cm; if not, going to step (1.16);

(1.16) setting the newly created key to the voting object corresponding to Bt, setting its value to 1, and storing it into Cm;

(1.17) converting the acquired WT into different hotspots S, and taking out random hotspots as random sample points Si using the CURE algorithm;

(1.18) dividing the random sample points Si into groups denoted Pi; clustering Pi with the CURE algorithm, marking the cluster points as Gi and the outliers as Oi;

(1.19) adding +1 or −1 to the corresponding vote in Cm according to whether a point is a cluster point or an outlier, recording the resulting vote count as H, and defining the total number of votes as H1;

(1.20) recording the ratio of the vote count H to the total vote count H1 as SC1;

(1.21) if SC1 < ω, judging that the location verification fails, i.e. the network picture information submitted by the current object to be recognized does not match the network picture information submitted by the other objects to be recognized, where ω is a network picture information similarity threshold set according to the total number of verifications and the number of votes.

3. The identity recognition method based on the blockchain, biological multi-feature recognition and multi-source data fusion according to claim 1, wherein the step (2) specifically comprises the following steps:

(2.1) defining the face detection target objects and establishing a table storing the information of the objects to be recognized and their face information, denoted Fa; defining the function len(Fa) as the length of the set Fa, and letting Fa = {Fa1, Fa2, …, FaS}, where FaS represents the S-th image in Fa, S ∈ [1, len(Fa)];

(2.2) defining a loop variable j1 for traversing Fa, j1 ∈ [1, len (Fa) ], j1 having an initial value of 1;

(2.3) traversing Fa, if j1 is less than or equal to len (Fa), jumping to the step (2.4), and if not, ending the traversal Fa, and jumping to the step (2.21);

(2.4) processing Fa by using a Haar characteristic;

(2.5) loading an Adaboost classifier, detecting and segmenting Fa, and circularly detecting the face of the object;

(2.6) defining a flag d_flag for whether a face is obtained in the current key frame, where d_flag = 1 indicates a face has been detected for the object and d_flag = 0 indicates no face has been detected;

(2.7) if d _ flag is 1, jumping to step (2.8), otherwise, jumping to step (2.17);

(2.8) normalizing the face region Faf to obtain the normalized face region F;

(2.9) extracting face LBP (local binary pattern) features from the normalized face region F using the LBP feature operator to obtain the face feature histogram Ff;

(2.10) if the system already detects the target object, jumping to the step (2.11), otherwise, jumping to (2.16);

(2.11) inputting a detection image G and computing its face feature histogram Gf;

(2.12) using the chi-square distance, computing the distance between Gf and each histogram of the face feature histogram set Ff = {F^f_1, F^f_2, …, F^f_n, …, F^f_N}, and normalizing it to obtain the face feature distance set DISf = {dis^f_1, dis^f_2, …, dis^f_n, …, dis^f_N}; dis^f_n denotes the face feature distance between the face feature histogram Gf of the detection image G and a face feature histogram Faf of Fa;

(2.13) performing adaptive weighted fusion on the face feature distance set DISf and arranging the fused feature distances in ascending order to obtain the optimal feature distance set DISopt;

(2.14) if any element of the optimal feature distance set DISopt is larger than the set distance threshold, jumping to step (2.16); otherwise, jumping to step (2.15);

(2.15) recognizing successfully and returning the identity information of the person corresponding to the minimum distance in the optimal feature distance set DISopt; sorting the initial feature distance set {dis^(Wf,Wp)_1, dis^(Wf,Wp)_2, …, dis^(Wf,Wp)_n, …, dis^(Wf,Wp)_N} in ascending order, and computing the mean Mean(Y) of the first Y feature distances and the mean Mean(N−Y) of the (Y+1)-th to the N-th feature distances, 1 <= Y <= N;

performing adaptive reliability judgment with formula (1) to obtain the similarity score δ = Mean(N) − Mean(N−Y);

(2.16) creating new detection targets, creating a feature list of each detection target, and storing the feature list in Fa;

(2.17) if the system already tracks the target object, jumping to the step (2.19), otherwise, jumping to the step (2.20);

(2.18) adding the extracted features to a feature list of each detection target;

(2.19) predicting the position of the next frame of each detected target in the object by using a Kalman observer, and clearing the detector which is not matched with the target for a long time;

(2.20) j1 = j1 + 1, go to step (2.3);

(2.21) obtaining the feature similarity score SC2 of the video-frame face position set Fa = {Fa1, Fa2, …, FaS}, where FaS denotes the S-th image in Fa.

4. The identity recognition method based on the blockchain, biological multi-feature recognition and multi-source data fusion according to claim 1, wherein the step (3) specifically comprises the following steps:

(3.1) inputting the speech signal data set S3;

(3.2) performing pre-emphasis and framing on the speech signals; performing windowing, Fourier transform, Mel filter bank filtering and discrete cosine transform on each frame of the framed speech signal to obtain the MFCC feature parameters and thus the MFCC sequence;

(3.3) training the GMM Gaussian mixture model with the MFCC sequence to obtain the GMM feature parameter sequence X3;

(3.4) framing the speech signal, dividing it into T sections, and computing the MFCC sequence of each section, denoted Yt;

(3.5) processing the voiceprint feature vector sequence Yt = {Y1, Y2, Y3, …, YN} to obtain the GMM feature parameter λ that maximizes the likelihood probability of the feature vector sequence Yt;

(3.6) concatenating all T MFCC sequences Yt to obtain the user voiceprint feature sequence Ya, and inputting Ya into the GMM Gaussian mixture model to compute the posterior probability, obtaining the voiceprint similarity score SC3.

5. The identity recognition method based on the blockchain, biological multi-feature recognition and multi-source data fusion according to claim 1, wherein the step (4) specifically comprises the following steps:

(4.1) using the human face, the voiceprint recognition data and the hot spot data as criterion layers C1, C2 and C3;

(4.2) defining the check-in results as K1, K2 and K3 as scheme layers respectively;

(4.3) defining the maximum final check-in rate to be O as a target layer;

(4.4) defining a scaling method of the judgment matrix, wherein the scaling method is used for comparing the two factors and determining the importance degrees of the two factors;

(4.5) aiming at the criterion layer, the scheme layer and the target layer, establishing judgment matrices with the analytic hierarchy process, calculating the maximum eigenvalue λmax, normalizing λmax, recording the normalized result as Nor, and calculating the consistency ratio CR1;

(4.6) if CR1 ≥ 0.1, returning to step (4.5) to rebuild the judgment matrix; otherwise, going to step (4.7);

(4.7) performing overall hierarchical ranking of the criterion layer against the target layer to calculate the consistency ratio CR2;

(4.8) performing overall hierarchical ranking of the criterion layer against the scheme layer to calculate the consistency ratio CR3;

(4.9) if CR2 < 0.1 and CR3 < 0.1, entering step (4.10); otherwise, returning to step (4.7);

(4.10) obtaining the weight value N according to the decisions of CR2 and CR3.

6. The identity recognition method based on the blockchain, biological multi-feature recognition and multi-source data fusion according to claim 1, wherein the step (5) specifically comprises the following steps:

(5.1) according to the weight value N obtained in the step (4), carrying out weighted fusion on the face, the voiceprint and the hot spot sign-in information to obtain a final fusion result, and marking the final fusion result as A;

(5.2) defining the attendance system database table names Sid, name, swift, Sage, Sface, Svoice and SFU as the ID, name, wireless network list picture, age, face picture, voice feature tag and attendance data fusion table of a single object to be identified, respectively, satisfying St = {Sid, name, swift, Sage, Sface, Svoice, SFU};

(5.3) defining a cycle variable St, giving an initial value St as 0, and defining the maximum cycle number Sn as the number of the objects to be identified of the current sent picture;

(5.4) if St < Sn then go to step (5.5) otherwise go to step (5.11);

(5.5) creating an attendance data fusion table SFU;

(5.6) fusing the weighted value N and the corresponding characteristic value and writing the fused value into an attendance data fusion table SFU;

(5.7) writing the vote number SC1, similarity score SC2 and voiceprint similarity score SC3 obtained in steps (1), (2) and (3) into the tables swift, Sface and Svoice respectively;

(5.8) setting δ as the fusion threshold for the vote number SC1, similarity score SC2 and voiceprint similarity score SC3; if A > δ, skipping to step (5.9); otherwise, skipping to step (5.10);

(5.9) marking the check-in result of the object to be identified as successful and writing it into the database table SFU;

(5.10) marking the check-in result of the object to be identified as erroneous and writing it into the database table SFU;

(5.11) outputting the information in the database table St through the web page end, and generating different check-in tables from the fused check-in result and the per-feature check-in results for the user to download.

Technical Field

The invention relates to a data fusion and feature recognition method, in particular to an identity recognition method based on blockchain, biological multi-feature recognition and multi-source data fusion.

Background

In recent years, biometric identification technology has developed rapidly, and techniques that identify individuals through image processing and recognition have attracted increasing attention. The demand for identity authentication based on biometric technology keeps growing, yet traditional biometric methods can be counterfeited, for example by forging fingerprints or faces, so single-modality biometric systems are limited in matching precision, difficulty and universality. A multi-feature method therefore has practical significance in places such as schools and enterprises: it can provide schools with a reliable class check-in result set, and it can reduce the safety hazards enterprises face when staff check in on behalf of others.

As for biometric authentication, most current research handles a single modality such as the face, fingerprint, iris or voice in isolation; research on fusing multiple biological features is lacking, and information fusion remains single-source.

Disclosure of Invention

The purpose of the invention is as follows: aiming at the problems in the prior art, the invention provides an identity identification method based on a block chain, biological multi-feature identification and multi-source data fusion, and aims to solve the problem of sign-in under multi-biological features.

The technical scheme is as follows: in order to solve the above technical problem, the invention provides an identity recognition method based on blockchain, biological multi-feature recognition and multi-source data fusion, which is characterized by comprising the following steps:

(1) setting the acquired initial image data set of wireless network lists as W and the text data set converted from the network pictures as WT, clustering and marking the network sample points generated in the text data set WT with the CURE algorithm, and calculating a vote number from the outliers and cluster points, denoted SC1;

(2) setting the acquired initial face image data set as Fa, performing feature value recognition between the submitted picture and the initial face image data set Fa, and obtaining a similarity score, denoted SC2;

(3) inputting a speech signal data set S3, performing pre-emphasis and framing on the speech signals in S3 to obtain the MFCC feature parameters, and obtaining a voiceprint similarity score through a GMM Gaussian mixture model, denoted SC3;

(4) taking the computed vote number SC1, similarity score SC2 and voiceprint similarity score SC3 as input, establishing a comparison matrix for pairwise comparison of the feature scores, and performing fusion calculation on SC1, SC2 and SC3 with the AHP method to obtain the AHP weight, denoted N;

(5) fusing the check-in data according to the weight, establishing a data table, encrypting the fusion result and storing it as the final check-in result, outputting the final check-in result through the web page end, and generating different check-in tables from the fused check-in result and the per-feature check-in results for the user to download.

Further, the step (1) specifically includes the following steps:

(1.1) inputting an initial image data set W of wireless network lists, defining the set X as the pictures uploaded by an object to be identified, and defining the function len(X) to represent the length of a set X, where W = {W1, W2, …, WM}, in which WM represents the M-th image in W, M ∈ [1, len(W)];

(1.2) defining a loop variable i1 for traversing W, i1 ∈ [1, len (W) ], i1 having an initial value of 1;

(1.3) if i1 ≦ len (W), entering step (1.4), otherwise, entering step (1.10);

(1.4) denoising Wi1 to obtain the denoised image Deno_Wi1;

(1.5) performing image enhancement on the denoised image Deno_Wi1 to obtain the enhanced image Enhance_Wi1;

(1.6) scaling the enhanced image Enhance_Wi1 to obtain the scaled image zom_Wi1;

(1.7) performing feature extraction on the scaled image zom_Wi1 to obtain the feature image sha_Wi1;

(1.8) performing character recognition on the feature image sha_Wi1 with a character classifier, extracting the text, and putting the obtained text information into WT;

(1.9) i1 = i1 + 1, go to step (1.3);

(1.10) finishing the extraction of the WIFI information characters;

(1.11) defining a loop variable Bt with initial value Bt = 0, and defining the maximum number of loops Bn as the number of users who have currently sent pictures;

(1.12) defining a hash table FS to record the votes and information of the objects to be identified, where the key SF represents the picture information submitted by an object to be identified; defining another table Cm to represent the number of votes obtained by an object to be identified; the value of the FS table is the hash table Cm, the key of Cm is the name of the object to be identified corresponding to the currently sent picture, and the value is the number of votes that object has obtained;

(1.13) checking whether the SF corresponding to Bt already exists in FS; if not, going to step (1.14);

(1.14) taking FS as the parent table, creating a new hash table Cmi and adding it to the parent table FS;

(1.15) checking whether the vote corresponding to Bt already exists in Cm; if not, going to step (1.16);

(1.16) setting the newly created key to the voting object corresponding to Bt, setting its value to 1, and storing it into Cm;

(1.17) converting the acquired WT into different hotspots S, and taking out random hotspots as random sample points Si using the CURE algorithm;

(1.18) dividing the random sample points Si into groups denoted Pi; clustering Pi with the CURE algorithm, marking the cluster points as Gi and the outliers as Oi;

(1.19) adding +1 or −1 to the corresponding vote in Cm according to whether a point is a cluster point or an outlier, recording the resulting vote count as H, and defining the total number of votes as H1;

(1.20) recording the ratio of the vote count H to the total vote count H1 as SC1;

(1.21) if SC1 < ω, judging that the location verification fails, i.e. the network picture information submitted by the current object to be recognized does not match the network picture information submitted by the other objects to be recognized, where ω is a network picture information similarity threshold set according to the total number of verifications and the number of votes.

Further, the step (2) specifically includes the following steps:

(2.1) defining the face detection target objects and establishing a table storing the information of the objects to be recognized and their face information, denoted Fa; defining the function len(Fa) as the length of the set Fa, and letting Fa = {Fa1, Fa2, …, FaS}, where FaS represents the S-th image in Fa, S ∈ [1, len(Fa)];

(2.2) defining a loop variable j1 for traversing Fa, j1 ∈ [1, len (Fa) ], j1 having an initial value of 1;

(2.3) traversing Fa, if j1 is less than or equal to len (Fa), jumping to the step (2.4), and if not, ending the traversal Fa, and jumping to the step (2.21);

(2.4) processing Fa by using a Haar characteristic;

(2.5) loading an Adaboost classifier, detecting and segmenting Fa, and circularly detecting the face of the object;

(2.6) defining a flag d_flag for whether a face is obtained in the current key frame, where d_flag = 1 indicates a face has been detected for the object and d_flag = 0 indicates no face has been detected;

(2.7) if d _ flag is 1, jumping to step (2.8), otherwise, jumping to step (2.17);

(2.8) normalizing the face region Faf to obtain the normalized face region F;

(2.9) extracting face LBP (local binary pattern) features from the normalized face region F using the LBP feature operator to obtain the face feature histogram Ff;

(2.10) if the system already detects the target object, jumping to the step (2.11), otherwise, jumping to (2.16);

(2.11) inputting a detection image G and computing its face feature histogram Gf;

(2.12) using the chi-square distance, computing the distance between Gf and each histogram of the face feature histogram set Ff = {F^f_1, F^f_2, …, F^f_n, …, F^f_N}, and normalizing it to obtain the face feature distance set DISf = {dis^f_1, dis^f_2, …, dis^f_n, …, dis^f_N}; dis^f_n denotes the face feature distance between the face feature histogram Gf of the detection image G and a face feature histogram Faf of Fa;

(2.13) performing adaptive weighted fusion on the face feature distance set DISf and arranging the fused feature distances in ascending order to obtain the optimal feature distance set DISopt;

(2.14) if any element of the optimal feature distance set DISopt is larger than the set distance threshold, jumping to step (2.16); otherwise, jumping to step (2.15);

(2.15) recognizing successfully and returning the identity information of the person corresponding to the minimum distance in the optimal feature distance set DISopt; sorting the initial feature distance set {dis^(Wf,Wp)_1, dis^(Wf,Wp)_2, …, dis^(Wf,Wp)_n, …, dis^(Wf,Wp)_N} in ascending order, and computing the mean Mean(Y) of the first Y feature distances and the mean Mean(N−Y) of the (Y+1)-th to the N-th feature distances, 1 <= Y <= N;

performing adaptive reliability judgment with formula (1) to obtain the similarity score δ = Mean(N) − Mean(N−Y);

(2.16) creating new detection targets, creating a feature list of each detection target, and storing the feature list in Fa;

(2.17) if the system already tracks the target object, jumping to the step (2.19), otherwise, jumping to the step (2.20);

(2.18) adding the extracted features to a feature list of each detection target;

(2.19) predicting the position of the next frame of each detected target in the object by using a Kalman observer, and clearing the detector which is not matched with the target for a long time;

(2.20) j1 = j1 + 1, go to step (2.3);

(2.21) obtaining the feature similarity score SC2 of the video-frame face position set Fa = {Fa1, Fa2, …, FaS}, where FaS denotes the S-th image in Fa.

Further, the step (3) specifically includes the following steps:

(3.1) inputting the speech signal data set S3;

(3.2) performing pre-emphasis and framing on the speech signals; performing windowing, Fourier transform, Mel filter bank filtering and discrete cosine transform on each frame of the framed speech signal to obtain the MFCC feature parameters and thus the MFCC sequence;

(3.3) training the GMM Gaussian mixture model with the MFCC sequence to obtain the GMM feature parameter sequence X3;

(3.4) framing the speech signal, dividing it into T sections, and computing the MFCC sequence of each section, denoted Yt;

(3.5) processing the voiceprint feature vector sequence Yt = {Y1, Y2, Y3, …, YN} to obtain the GMM feature parameter λ that maximizes the likelihood probability of the feature vector sequence Yt;

(3.6) concatenating all T MFCC sequences Yt to obtain the user voiceprint feature sequence Ya, and inputting Ya into the GMM Gaussian mixture model to compute the posterior probability, obtaining the voiceprint similarity score SC3.

Further, the step (4) specifically includes the following steps:

(4.1) using the human face, the voiceprint recognition data and the hot spot data as criterion layers C1, C2 and C3;

(4.2) defining the check-in results as K1, K2 and K3 as scheme layers respectively;

(4.3) defining the maximum final check-in rate to be O as a target layer;

(4.4) defining a judgment matrix scaling method, wherein the scaling method is used for comparing two factors and determining the importance degree of the two factors;

(4.5) aiming at the criterion layer, the scheme layer and the target layer, establishing judgment matrices with the analytic hierarchy process, calculating the maximum eigenvalue λmax, normalizing λmax, recording the normalized result as Nor, and calculating the consistency ratio CR1;

(4.6) if CR1 ≥ 0.1, returning to step (4.5) to rebuild the judgment matrix; otherwise, going to step (4.7);

(4.7) performing overall hierarchical ranking of the criterion layer against the target layer to calculate the consistency ratio CR2;

(4.8) performing overall hierarchical ranking of the criterion layer against the scheme layer to calculate the consistency ratio CR3;

(4.9) if CR2 < 0.1 and CR3 < 0.1, entering step (4.10); otherwise, returning to step (4.7);

(4.10) obtaining the weight value N according to the decisions of CR2 and CR3.

Further, the step (5) specifically includes the following steps:

(5.1) according to the weight value N obtained in the step (4), carrying out weighted fusion on the face, the voiceprint and the hot spot sign-in information to obtain a final fusion result, and marking the final fusion result as A;

(5.2) defining the attendance system database table names Sid, name, swift, Sage, Sface, Svoice and SFU as the ID, name, wireless network list picture, age, face picture, voice feature tag and attendance data fusion table of a single object to be identified, respectively, satisfying St = {Sid, name, swift, Sage, Sface, Svoice, SFU};

(5.3) defining a cycle variable St, giving an initial value St as 0, and defining the maximum cycle number Sn as the number of the objects to be identified of the current sent picture;

(5.4) if St < Sn then go to step (5.5) otherwise go to step (5.11);

(5.5) creating an attendance data fusion table SFU;

(5.6) fusing the weighted value N and the corresponding characteristic value and writing the fused value into an attendance data fusion table SFU;

(5.7) writing the vote number SC1, similarity score SC2 and voiceprint similarity score SC3 obtained in steps (1), (2) and (3) into the tables swift, Sface and Svoice respectively;

(5.8) setting δ as the fusion threshold for the vote number SC1, similarity score SC2 and voiceprint similarity score SC3; if A > δ, skipping to step (5.9); otherwise, skipping to step (5.10);

(5.9) marking the check-in result of the object to be identified as successful and writing it into the database table SFU;

(5.10) marking the check-in result of the object to be identified as erroneous and writing it into the database table SFU;

(5.11) outputting the information in the database table St through the web page end, and generating different check-in tables from the fused check-in result and the per-feature check-in results for the user to download.

Has the advantages that:

Compared with the prior art, the invention has the following notable advantages: 1. It realizes data fusion recognition based on the blockchain and multiple biological features: the biological feature similarity values from multi-feature recognition are fused with the ANP method, and the decentralization of the blockchain prevents a check-in from being recorded for someone who is not actually present, realizing fused recognition of multi-source data. 2. It overcomes the limitation of single-feature biometric recognition: combined with the improved WIFI check-in recognition technique, it effectively obtains more accurate check-in result labels, makes the check-in recognition result more accurate under multi-feature recognition, and increases the practical value of face and voiceprint recognition in the target scenario.

Drawings

FIG. 1 is a flow chart of an identity recognition method based on blockchain and biological multi-feature recognition and multi-source data fusion according to an embodiment of the present invention;

fig. 2 is a flowchart illustrating WIFI signal picture text extraction and similarity score thereof according to an embodiment of the present invention;

FIG. 3 is a flow chart of a face recognition subsystem according to an embodiment of the present invention;

FIG. 4 is a flowchart of normalization, feature extraction, and face similarity score of a face picture according to an embodiment of the present invention;

FIG. 5 is a flow chart of the pre-processing, feature extraction, and voiceprint similarity score of a speech signal according to an embodiment of the present invention;

FIG. 6 is a flowchart illustrating a process of fusing voiceprint and face similarity scores in FIGS. 2 and 3 according to an embodiment of the present invention;

fig. 7 is a flowchart of a system for fusing feature data and displaying the feature data through a web page according to an embodiment of the present invention.

Detailed Description

The identity recognition method based on the blockchain, biological multi-feature recognition and multi-source data fusion combines the blockchain, the GMM Gaussian mixture model, the ANP algorithm, the CURE clustering algorithm and other techniques. It fuses the recognition similarity values of a user's multiple biological features and uses a decentralized, blockchain-based verification mode to prevent users from checking in without actually being present, so that the object to be identified genuinely takes part in the check-in; this increases the data value of multiple biological features in the target scenario and improves the credibility of check-in.

Blockchain technology: blockchain is a term from information technology. In essence, it is a shared database whose stored data or information is decentralized, open, independent, secure and anonymous. Blockchain technology is a new distributed infrastructure and computing paradigm that verifies and stores data in blockchain data structures, generates and updates data through distributed node consensus algorithms, secures data transmission and access cryptographically, and programs and manipulates data through smart contracts composed of automated script code. Blockchain can properly solve problems such as cross-subject cooperation in business development and establishing trust at low cost. Based on these characteristics, the text features of the mobile phone wireless network list pictures are extracted and compared, and each user's network list serves as a data source. The core blockchain techniques include the distributed ledger, asymmetric encryption, the consensus mechanism and smart contracts.

GMM Gaussian mixture model: the gaussian model is a model formed based on a gaussian probability density function (normal distribution curve) by accurately quantizing an object using the gaussian probability density function (normal distribution curve) and decomposing one object into a plurality of objects. The invention preprocesses the audio frequency by the prior general technology, obtains the characteristic parameters of the voiceprint by the GMM Gaussian mixture model so as to generate a fusion judgment template, and then performs fusion judgment on the information of the object to be recognized by training calculation and the like. GMMs have achieved good results in the fields of numerical approximation, speech recognition, image classification, image denoising, image reconstruction, fault diagnosis, video analysis, mail filtering, density estimation, target recognition and tracking, etc.

ANP algorithm: ANP first divides the system elements into two parts. The first part, called the control factor layer, includes the problem objective and the decision criteria. All decision criteria are considered mutually independent and governed only by the target element; there may be no decision criteria among the control factors, but there is at least one objective. The weight of each criterion in the control layer can be obtained by the AHP method. The second part is the network layer, the internal mutual-influence network structure composed of all element groups governed by the control layer. Its elements are interdependent and mutually governing; neither the elements nor the hierarchy is internally independent, and each criterion in the hierarchical structure is not a simple internally independent element but part of an interdependent network structure with feedback.

CURE clustering algorithm: CURE adopts a novel hierarchical clustering approach intermediate between single-link and group-average clustering; it overcomes the shortcomings of those two hierarchical algorithms and can handle large data sets, outliers, and clusters with non-spherical shapes and non-uniform sizes.

The algorithm first treats each data point as a class and then merges the closest classes until the number of classes reaches the desired number. It differs from the AGNES algorithm in that, instead of using all points or a center point plus a distance to represent a class, it extracts a fixed number of well-scattered points from each class as representative points and multiplies these representatives (typically 10 of them) by an appropriate shrink factor (typically set between 0.2 and 0.7) to bring them closer to the class center.
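
For illustration only, the representative-point selection and shrinking just described can be sketched as follows. This is a minimal Python sketch assuming NumPy; the 10 representatives and the 0.5 shrink factor are taken from the typical values mentioned above, and it is not an implementation of the full CURE algorithm:

```python
import numpy as np

def cure_representatives(cluster: np.ndarray, n_rep: int = 10,
                         shrink: float = 0.5) -> np.ndarray:
    """Pick up to n_rep well-scattered points of one cluster and shrink
    them toward the centroid by the shrink factor (typically 0.2-0.7)."""
    centroid = cluster.mean(axis=0)
    # Greedy farthest-point selection yields well-scattered representatives.
    reps = [cluster[np.argmax(np.linalg.norm(cluster - centroid, axis=1))]]
    while len(reps) < min(n_rep, len(cluster)):
        d = np.min([np.linalg.norm(cluster - r, axis=1) for r in reps], axis=0)
        reps.append(cluster[np.argmax(d)])
    reps = np.asarray(reps)
    # Shrinking toward the centroid dampens the influence of outliers.
    return reps + shrink * (centroid - reps)
```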

The technical scheme of the invention is further elaborated below with a specific embodiment: multi-target face tracking for student check-in attendance in a campus classroom scene, with tracking, recognition and processing of the students' facial feature information.

As shown in fig. 1 to 7, an identity recognition method based on block chain, biological multi-feature recognition and multi-source data fusion includes the following steps:

1. Setting the acquired initial image data set of wireless network lists as W and the text data set converted from the network pictures as WT, clustering and marking the network sample points generated in the text data set WT with the CURE algorithm, and calculating the vote number from the outliers and cluster points, denoted SC1.

2. Setting the acquired initial face image data set as Fa, performing feature value recognition between the submitted picture and the data set Fa, and obtaining a similarity score, denoted SC2.

3. Inputting the voiceprint audio S, performing pre-emphasis and framing on the speech signal to obtain the MFCC feature parameters, and obtaining a voiceprint similarity score through the GMM Gaussian mixture model, denoted SC3.

4. Taking the obtained feature score data and the wireless network hotspot data as input, establishing a pairwise comparison matrix for the feature scores, and performing fusion calculation on the feature values with the AHP method to obtain the AHP weight, denoted N.

5. Fusing the check-in data according to the weight, establishing a data table, encrypting the fusion result and storing it as the final check-in result, outputting it through the web page end, and generating different check-in tables from the fused check-in result and the per-feature check-in results for the user to download.

The detailed procedure for steps 1-5 is as follows:

the step 1 specifically comprises the following steps:

step 1.1: inputting the image data set W, defining the set X as the pictures uploaded by students, and defining the function len(X) to represent the length of a set X; let W = {W1, W2, …, WM}, where WM represents the M-th image in W, M ∈ [1, len(W)];

Step 1.2: defining a cyclic variable i1 for traversing W, i 1E [1, len (W) ], and i1 assigning an initial value of 1;

step 1.3: if i1 is less than or equal to len (W), then step 1.4 is entered, otherwise step 1.10 is entered;

step 1.4: denoising Wi1 to obtain the denoised image Deno_Wi1;

step 1.5: performing image enhancement on the denoised image Deno_Wi1 to obtain the enhanced image Enhance_Wi1;

step 1.6: scaling the enhanced image Enhance_Wi1 to obtain the scaled image zom_Wi1;

step 1.7: performing feature extraction on the scaled image zom_Wi1 to obtain the feature image sha_Wi1;

step 1.8: performing character recognition on the feature image sha_Wi1 with a character classifier, extracting the text, and putting the obtained text information into WT;

step 1.9: i1 = i1 + 1, go to step 1.3;

step 1.10: finishing the extraction of the WIFI information text.
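
By way of illustration, steps 1.4 to 1.8 can be sketched as the following Python pipeline. OpenCV and pytesseract are assumptions here, standing in for the unspecified denoising, enhancement and character-classifier components, and the chi_sim+eng Tesseract language packs are likewise assumed to be installed:

```python
import cv2
import pytesseract  # hypothetical stand-in for the character classifier

def wifi_list_text(image_path: str) -> str:
    """Steps 1.4-1.8 for one WiFi-list screenshot: denoise, enhance,
    scale, then recognize the text that goes into WT."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    deno = cv2.fastNlMeansDenoising(gray)                    # Deno_W
    enh = cv2.equalizeHist(deno)                             # Enhance_W
    zom = cv2.resize(enh, None, fx=2.0, fy=2.0,
                     interpolation=cv2.INTER_CUBIC)          # zom_W
    return pytesseract.image_to_string(zom, lang="chi_sim+eng")

# WT accumulates the text of every uploaded picture in W (steps 1.3-1.9):
# WT = [wifi_list_text(p) for p in uploaded_pictures]
```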

Step 1.11: defining a cycle variable Bt, assigning an initial value Bt to be 0, and defining the maximum cycle times Bn as the number of users who send pictures at present;

step 1.12: defining a Hash table FS to record voting and student information, wherein a key is SF to represent picture information submitted by a student, another table Cm is defined to represent the number of votes obtained by the student, the value of the FS table is Cm, the key of Cm is the name of the student corresponding to the current sent picture, and the value is the number of votes obtained by the student;

step 1.13: the SF corresponding to Bt exists in FS;

step 1.14: newly building a hash table Cmi and adding a father table FS;

step 1.15: the vote corresponding to Bt exists in Cm;

step 1.16: setting the newly-built key as a voting object corresponding to Bt, setting the value as 1 and storing the value into Cm;

step 1.17: converting the obtained WT into different hot spots S, and taking out random hot spots as random sample points Si by using a CURE algorithm;

step 1.18: dividing the random sample points Si into groups denoted Pi; clustering Pi with the CURE clustering algorithm, marking the cluster points as Gi and the outliers as Oi;

Step 1.19: recording the value of +1 or-1 of the corresponding vote in Cm as H according to the outlier and the clustering point, and defining the total vote number as H1;

step 1.20: the ratio of the number of votes obtained H to the total number of votes obtained H1 is recorded as SC 1;

step 1.21: if SC1 < ω, it is determined that the location verification fails, i.e. the network picture information submitted by the current student does not match the network picture information submitted by the other students, where ω is a network picture information similarity threshold set according to the total number of verifications and the number of votes.
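
A minimal sketch of the voting score of steps 1.12-1.20, under two stated assumptions: the hotspot texts in WT have already been turned into numeric vectors (for example a bag of SSIDs), and scikit-learn's AgglomerativeClustering stands in for the CURE algorithm, which scikit-learn does not provide:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering  # stand-in for CURE

def sc1_vote_score(user: str, hotspot_vectors: dict) -> float:
    """Mutual verification: every user votes +1 for users whose WiFi-list
    vector clusters with theirs and -1 otherwise; SC1 = H / H1."""
    names = list(hotspot_vectors)
    X = np.stack([hotspot_vectors[n] for n in names])
    labels = dict(zip(names, AgglomerativeClustering(n_clusters=2).fit_predict(X)))
    cm = {n: 0 for n in names}                # Cm-style vote table
    for a in names:
        for b in names:
            if a != b:
                cm[b] += 1 if labels[a] == labels[b] else -1
    h, h1 = cm[user], len(names) - 1          # votes obtained, total votes
    return h / h1                             # compare against the threshold of step 1.21
```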

The step 2 specifically comprises the following steps:

step 2.1: defining the face detection target objects and establishing a table storing student information and face information, denoted Fa; defining the function len(Fa) as the length of the set Fa, and letting Fa = {Fa1, Fa2, …, FaS}, where FaS represents the S-th image in Fa, S ∈ [1, len(Fa)];

Step 2.2: defining a cyclic variable j1 for traversing Fa, j1 e [1, len (Fa) ], and j1 assigning an initial value of 1;

step 2.3: traversing Fa, if j1 is less than or equal to len (Fa), jumping to the step (2.4), otherwise, ending the traversal Fa, and jumping to the step (2.21);

step 2.4: processing Fa by using Haar characteristics;

step 2.5: loading an Adaboost classifier, detecting and segmenting Fa, and circularly detecting the face of the object;

step 2.6: defining a flag d_flag for whether a face is obtained in the current key frame, where d_flag = 1 indicates a face has been detected for the object and d_flag = 0 indicates no face has been detected;

step 2.7: if d _ flag is equal to 1, jumping to step 2.8, otherwise, jumping to step 2.17;

step 2.8: normalizing the face region Faf to obtain the normalized face region F;

step 2.9: extracting face LBP features from the normalized face region F using the LBP feature operator to obtain the face feature histogram Ff;

Step 2.10: if the system has detected the target object, then jump to step 2.11, otherwise jump to 2.16;

step 2.11: inputting a detection image G and computing its face feature histogram Gf;

step 2.12: using the chi-square distance, computing the distance between Gf and each histogram of the face feature histogram set Ff = {F^f_1, F^f_2, …, F^f_n, …, F^f_N}, and normalizing it to obtain the face feature distance set DISf = {dis^f_1, dis^f_2, …, dis^f_n, …, dis^f_N}; dis^f_n denotes the face feature distance between the face feature histogram Gf of the detection image G and a face feature histogram Faf of Fa;

step 2.13: performing adaptive weighted fusion on the face feature distance set DISf and arranging the fused feature distances in ascending order to obtain the optimal feature distance set DISopt;

step 2.14: if any element of the optimal feature distance set DISopt is larger than the set distance threshold, skipping to step 2.16; otherwise skipping to step 2.15;

step 2.15: recognizing successfully and returning the identity information of the person corresponding to the minimum distance in the optimal feature distance set DISopt; sorting the initial feature distance set {dis^(Wf,Wp)_1, dis^(Wf,Wp)_2, …, dis^(Wf,Wp)_n, …, dis^(Wf,Wp)_N} in ascending order, and computing the mean Mean(Y) of the first Y feature distances and the mean Mean(N−Y) of the (Y+1)-th to the N-th feature distances, 1 <= Y <= N;

Adaptive reliability judgment is carried out using formula (1) to obtain the similarity score δ:

δ=Mean(N)-Mean(N-Y) (1)

step 2.16: creating new detection targets, creating a feature list of each detection target, and storing the feature list in Fa;

step 2.17: if the system already tracks the target object, go to step 2.19, otherwise go to step 2.20;

step 2.18: adding the extracted features to a feature list of each detection target;

step 2.19: predicting the position of the next frame of each detection target in the object by using a Kalman observer, and clearing a detector which is not matched with the target for a long time;

step 2.20: increasing the loop variable j1 by 1, j1 = j1 + 1, and going to step 2.3;

step 2.21: obtaining the feature similarity score SC2 of the video-frame face position set Fa = {Fa1, Fa2, …, FaS}, where FaS denotes the S-th image in Fa.
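
The LBP histogram of step 2.9, the chi-square distance of step 2.12 and formula (1) admit the following minimal sketch, assuming scikit-image's uniform LBP operator as the LBP feature operator:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray: np.ndarray, P: int = 8, R: int = 1) -> np.ndarray:
    """Face feature histogram of uniform LBP codes, normalized to sum 1."""
    codes = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def chi_square(h1: np.ndarray, h2: np.ndarray, eps: float = 1e-10) -> float:
    """Chi-square distance between two face feature histograms (step 2.12)."""
    return 0.5 * float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

def similarity_delta(distances, Y: int) -> float:
    """Formula (1) on the ascending distance set: delta = Mean(N) - Mean(N-Y).
    Assumes 1 <= Y < N so the tail mean is defined."""
    d = np.sort(np.asarray(distances, dtype=float))
    return float(d.mean() - d[Y:].mean())
```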

The step 3 specifically comprises the following steps:

step 3.1: inputting a speech signal data set S3;

step 3.2: carrying out pre-emphasis and framing processing on a voice signal; windowing, Fourier transform, Mel frequency filter bank filtering and discrete cosine transform are carried out on each frame of voice signals after framing processing, MFCC characteristic parameters and the like are obtained, and an MFCC sequence is obtained;

step 3.3: training a GMM Gaussian mixture model by using the MFCC sequence to obtain a characteristic parameter sequence X3 of the GMM Gaussian mixture model;

step 3.4: performing frame processing on a voice signal, dividing the voice signal into T sections, and calculating an MFCC sequence of each section of the voice signal and recording the sequence as Yt;

step 3.5: processing the voiceprint feature vector sequence Yt { Y1, Y2, Y3, …, YN } to obtain a feature parameter lambda of the GMM Gaussian mixture model, so that the likelihood probability of the feature vector sequence Yt is maximum;

step 3.6: concatenating all T MFCC sequences Yt to obtain the user voiceprint feature sequence Ya, and inputting Ya into the GMM Gaussian mixture model to compute the posterior probability, obtaining the voiceprint similarity score SC3.
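
A rough sketch of steps 3.1-3.6 follows, assuming librosa for the MFCC front end (its MFCC pipeline folds in the windowing, Fourier transform, Mel filtering and DCT of step 3.2) and scikit-learn's GaussianMixture for the GMM. Note that GaussianMixture.score returns an average log-likelihood, so the SC3 computed here is a log-likelihood score rather than a calibrated posterior probability:

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_sequence(wav_path: str, n_mfcc: int = 13) -> np.ndarray:
    """Pre-emphasis plus the MFCC chain of step 3.2; returns frames x n_mfcc."""
    y, sr = librosa.load(wav_path, sr=None)
    y = librosa.effects.preemphasis(y)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_voiceprint_gmm(enroll_wavs, n_components: int = 16) -> GaussianMixture:
    """Fit the speaker GMM (the parameter lambda of step 3.5) by maximum likelihood."""
    X = np.vstack([mfcc_sequence(p) for p in enroll_wavs])
    return GaussianMixture(n_components=n_components, covariance_type="diag").fit(X)

def sc3_score(gmm: GaussianMixture, test_wav: str) -> float:
    """SC3 for the concatenated test sequence Ya (step 3.6)."""
    return float(gmm.score(mfcc_sequence(test_wav)))
```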

The step 4 specifically comprises the following steps:

step 4.1: taking the face, the voiceprint recognition data and the hot spot data as criterion layers C1, C2 and C3;

step 4.2: defining the check-in results as K1, K2 and K3 as scheme layers respectively;

step 4.3: defining the highest final check-in rate as O as a target layer;

step 4.4: defining the judgment matrix scaling method, denoted i1–i9; the specific meanings are shown in Table 1;

step 4.5: establishing judgment matrices for the different layers with the analytic hierarchy process, computing the maximum eigenvalue λmax, normalizing it, recording the normalized result as Nor, and computing the consistency ratio CR1;

step 4.6: if CR1 ≥ 0.1, returning to step 4.5 and rebuilding the judgment matrix; otherwise going to step 4.7;

step 4.7: performing overall hierarchical ranking of the criterion layer against the target layer and computing the consistency ratio CR2;

step 4.8: performing overall hierarchical ranking of the criterion layer against the scheme layer and computing the consistency ratio CR3;

step 4.9: defining the judgment matrix scaling method, denoted ij, where j ∈ [1, 9];

step 4.10: if CR2 < 0.1 and CR3 < 0.1, going to step 4.11; otherwise returning to step 4.2 and reconstructing the judgment matrix;

step 4.11: making a decision according to the total hierarchical ranking consistency ratio to obtain a weight value N;
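
Steps 4.5-4.11 reduce to standard AHP computations; a minimal NumPy sketch follows. The example pairwise matrix for the criterion layer (C1 face, C2 voiceprint, C3 hotspot) is purely illustrative, and Saaty's random indices RI are quoted from the AHP literature:

```python
import numpy as np

RI = {3: 0.58, 4: 0.90, 5: 1.12}  # Saaty random consistency indices (n >= 3)

def ahp_weights(A: np.ndarray):
    """Principal-eigenvector weights and consistency ratio of a judgment
    matrix A: CI = (lambda_max - n) / (n - 1), CR = CI / RI[n]."""
    n = A.shape[0]
    vals, vecs = np.linalg.eig(A)
    k = int(np.argmax(vals.real))
    lam_max = vals.real[k]
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                    # the normalization 'Nor' of step 4.5
    cr = (lam_max - n) / (n - 1) / RI[n]
    return w, cr

# Illustrative comparison: face twice as important as voiceprint, etc.
A = np.array([[1, 2, 3], [1/2, 1, 2], [1/3, 1/2, 1]], dtype=float)
N, CR1 = ahp_weights(A)
assert CR1 < 0.1   # consistent enough; otherwise rebuild the matrix (step 4.6)
```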

the step 5 specifically comprises the following steps:

step 5.1: carrying out weighted fusion on the face, the voiceprint and the hot spot sign-in information by using the weighted value N obtained in the step 4 to obtain a final fusion result which is marked as A;

step 5.2: defining the attendance system database table names Sid, name, swift, Sage, Sface, Svoice and SFU as the student number, name, wireless network list picture, age, face picture, voice and attendance data fusion table of a single student, respectively, satisfying St = {Sid, name, swift, Sage, Sface, Svoice, SFU};

step 5.3: defining a loop variable St with initial value St = 0, and defining the maximum number of loops Sn as the number of students currently sending pictures;

step 5.4: if St < Sn, going to step 5.5; otherwise going to step 5.11;

step 5.5: newly building an attendance data fusion table SFU;

step 5.6: fusing the weights obtained in the step (4) and the corresponding characteristic values and writing the fused weights into an attendance data fusion table SFU;

step 5.7: writing the vote number SC1, similarity score SC2 and voiceprint similarity score SC3 computed in steps 1, 2 and 3 into the tables swift, Sface and Svoice respectively;

step 5.8: setting δ as the fusion threshold for the vote number SC1, similarity score SC2 and voiceprint similarity score SC3; if A > δ, skipping to step 5.9, otherwise skipping to step 5.10;

step 5.9: marking the student's check-in result as "success" and writing it into the database table SFU;

step 5.10: marking the student's check-in result as "error" and writing it into the database table SFU;

step 5.11: outputting the information in the database table St through the web page end, and generating different check-in tables from the fused check-in result and the per-feature check-in results for the user to download.
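
Steps 5.6-5.10 amount to a weighted score fusion followed by a threshold test. The sketch below uses an in-memory SQLite table as a hypothetical stand-in for the SFU attendance fusion table; the scores, weights and threshold δ are illustrative numbers only:

```python
import sqlite3
import numpy as np

def fuse_and_record(db: sqlite3.Connection, sid: str, scores, weights,
                    delta: float) -> str:
    """A = N . (SC1, SC2, SC3); the check-in is 'success' if A > delta."""
    a = float(np.dot(weights, scores))
    result = "success" if a > delta else "error"
    db.execute("INSERT INTO SFU (sid, fused_score, result) VALUES (?, ?, ?)",
               (sid, a, result))
    db.commit()
    return result

# Usage with a hypothetical student ID and fused AHP weights N.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE SFU (sid TEXT, fused_score REAL, result TEXT)")
print(fuse_and_record(conn, "2021001", scores=(0.9, 0.85, 0.8),
                      weights=np.array([0.3, 0.45, 0.25]), delta=0.6))
```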

Table 1: variables involved in Steps 1-5

To illustrate the effectiveness of the method, 4956 key-frame sequences of student face information were processed: LBP feature extraction produced the face feature histograms, and similarity scores were computed with the chi-square distance. Text was extracted from 224 groups totaling 672 WIFI pictures, and voting produced the corresponding WIFI similarity scores. Feature extraction was performed on 64 groups of speech signals, and similarity values were computed with the GMM Gaussian mixture model. Fusing these similarity values improves the accuracy of biometric check-in, reaching 98% accuracy on the check-in results.

The invention can be combined with a computer system so as to complete the multi-source data fusion of biological multi-feature recognition.

The invention creatively provides an identity recognition method based on a block chain, biological multi-feature recognition and multi-source data fusion, and a biological multi-feature sign-in recognition result under a WIFI environment is obtained through multiple experiments.

The invention provides an identification method based on a block chain, biological multi-feature identification and multi-source data fusion, which can be used for identifying the features of biological voiceprints and human faces in a WIFI environment and fusing the data of identification results.

The above description is only an example of the present invention and is not intended to limit the present invention. All equivalents which come within the spirit of the invention are therefore intended to be embraced therein. Details not described herein are well within the skill of those in the art.
