System and method for biometric identification

Document No.: 1432473    Publication date: 2020-03-17

Abstract: This technology, "System and method for biometric identification," was designed and created by S. Streit on 2018-05-07. Its principal content comprises: a computer-implemented system and method for matching an encrypted biometric input record with at least one stored encrypted biometric record without data decryption of the input record and the at least one stored record.

1. A computer-implemented method for matching an encrypted biometric input record with at least one stored encrypted biometric record without data decryption of the input record and the at least one stored record, the method comprising:

providing an initial biometric vector to a neural network, wherein the neural network converts the initial biometric vector to a Euclidean measurable feature vector;

storing the Euclidean measurable feature vector in a memory with other Euclidean measurable feature vectors;

receiving, from a mobile computing device over a data communications network, a current biometric vector representing the encrypted biometric input record;

providing the current biometric vector to the neural network, wherein the neural network converts the current biometric vector to a current Euclidean measurable feature vector;

searching at least some of the stored Euclidean measurable feature vectors in a portion of the data store using the current Euclidean measurable feature vector,

wherein the encrypted biometric input record is matched to at least one encrypted biometric record in an encrypted space based on a calculated absolute distance between the current Euclidean measurable feature vector and each respective Euclidean measurable feature vector in the portion of the memory.

2. The method of claim 1, further comprising:

classifying the Euclidean measurable feature vectors; and/or

classifying the current Euclidean measurable feature vector,

wherein the classifying is performed at least in part using one or more distance functions.

3. The method of claim 2, wherein floating point values are returned for the classification of the Euclidean measurable feature vectors and/or the current Euclidean measurable feature vector, and the absolute distance between each floating point value and its mean is calculated using the Frobenius algorithm.

4. The method of claim 2, wherein the search is performed in O(log n) time.

5. The method of claim 2, further comprising:

classifying the Euclidean measurable biometric vectors using a Frobenius algorithm;

traversing a hierarchy of the classified Euclidean measurable biometric vectors in O(log n) time; and

identifying that the respective Euclidean measurable biometric vector is the current Euclidean measurable feature vector.

6. The method of claim 1, further comprising:

identifying a plurality of floating point values for each respective euclidean measurable biometric vector; and

using a bitmap to eliminate from the absolute distance calculation any of the plurality of values that is not present in each vector.

7. The method of claim 1, further comprising:

identifying a plurality of floating point values for each respective euclidean measurable biometric vector; and

defining a sliding scale of importance based on the number of vectors in which a corresponding one of the floating point values occurs.

8. The method of claim 1, wherein the neural network is configured with various convolutional layers, as well as a rectified linear unit (ReLU) activation function and pooling nodes.

9. The method of claim 1, wherein the neural network is configured to use pooling in the form of non-linear down-sampling,

and further wherein one or more pooling nodes progressively reduce the spatial size of the represented Euclidean measurable feature vectors to reduce the number of parameters and computations in the neural network.

10. The method of claim 8, further comprising:

calculating, for each of a plurality of stored euclidean measurable feature vectors, a relative positional difference between the average face vector and the respective euclidean measurable feature vector;

squaring the relative positional differences;

summing the squared values; and

calculating the square root.

11. The method of claim 1, wherein the performance of the neural network is determined according to a cost function, and wherein the spatial size of a given layer's output volume is calculated according to the input volume size W, the receptive field (kernel) size K of the layer's neurons, the stride S applied by the layer, and the amount of zero padding P used on the border.

12. The method of claim 1, wherein the neural network transforms the initial and current biometric vectors according to a matrix multiplication for each respective layer and uses a euclidean distance algorithm based on a euclidean cost function.

13. A computer-implemented system for matching an encrypted biometric input record with at least one stored encrypted biometric record without data decryption of the input and the at least one stored record, the system comprising:

one or more processors and a computer-readable medium, wherein the one or more processors are configured to interact with the computer-readable medium to perform operations comprising:

providing an initial biometric vector to a neural network, wherein the neural network converts the initial biometric vector to a Euclidean measurable feature vector;

storing the Euclidean measurable feature vector in a memory with other Euclidean measurable feature vectors;

receiving, from a mobile computing device over a data communications network, a current biometric vector representing the encrypted biometric input record;

providing the current biometric vector to the neural network, wherein the neural network converts the current biometric vector to a current Euclidean measurable feature vector;

searching at least some stored Euclidean measurable feature vectors in a portion of the data store using the current Euclidean measurable feature vector,

wherein the encrypted biometric input record is matched to at least one encrypted biometric record in an encrypted space based on a calculated absolute distance between the current Euclidean measurable feature vector and each respective Euclidean measurable feature vector in the portion of the memory.

14. The system of claim 13, wherein the one or more processors are further configured to interact with the computer-readable medium to perform operations comprising:

classifying the Euclidean measurable feature vectors; and/or

classifying the current Euclidean measurable feature vector,

wherein the classifying is performed at least in part using one or more distance functions.

15. The system of claim 14, wherein a floating point value is returned for the classification of the Euclidean measurable feature vectors and/or the current Euclidean measurable feature vector, and the absolute distance between each floating point value and its mean is calculated using the Frobenius algorithm.

16. The system of claim 14, wherein the search is conducted in O(log n) time.

17. The system of claim 14, wherein the one or more processors are configured to interact with the computer-readable medium to perform operations comprising:

classifying the Euclidean measurable biometric vectors using a Frobenius algorithm;

traversing a hierarchy of the classified Euclidean measurable biometric vectors in O(log n) time; and

identifying that the respective Euclidean measurable biometric vector is the current Euclidean measurable feature vector.

18. The system of claim 13, wherein the one or more processors are configured to interact with the computer-readable medium to perform operations comprising:

identifying a plurality of floating point values for each respective euclidean measurable biometric vector; and

using a bitmap to eliminate from the absolute distance calculation any of the plurality of values that is not present in each vector.

19. The system of claim 13, wherein the one or more processors are configured to interact with the computer-readable medium to perform operations comprising:

identifying a plurality of floating point values for each respective euclidean measurable biometric vector; and

defining a sliding scale of importance based on the number of vectors in which a corresponding one of the floating point values occurs.

20. The system of claim 13, wherein the neural network is configured with various convolutional layers, as well as a rectified linear unit (ReLU) activation function and pooling nodes.

21. The system of claim 13, wherein the neural network is configured to use pooling in the form of non-linear down-sampling,

and further wherein one or more pooling nodes progressively reduce the spatial size of the represented Euclidean measurable feature vectors to reduce the number of parameters and computations in the neural network.

22. The system of claim 20, wherein the one or more processors are configured to interact with the computer-readable medium to perform operations comprising:

calculating, for each of a plurality of stored euclidean measurable feature vectors, a relative positional difference between the average face vector and the respective euclidean measurable feature vector;

squaring the relative positional differences;

summing the squared values; and

calculating the square root.

23. The system of claim 13, wherein the performance of the neural network is determined according to a cost function, and wherein the spatial size of a given layer's output volume is calculated according to the input volume size W, the receptive field (kernel) size K of the layer's neurons, the stride S applied by the layer, and the amount of zero padding P used on the border.

24. The system of claim 13, wherein the neural network transforms the initial and current biometric vectors according to a matrix multiplication for each respective layer and uses a euclidean distance algorithm based on a euclidean cost function.

Technical Field

The present invention relates generally to systems and methods for acquiring and characterizing biometric features, and in particular, to systems and methods for acquiring and characterizing biometric features for the purpose of identifying or authenticating a user.

Background

Various types of information continue to be stored and accessed remotely, for example, on storage devices accessible through a data communications network. For example, many people and companies store and access financial information, health and medical information, goods and services information, purchasing information, entertainment information, and multimedia information over the internet or other communication networks. In addition to accessing information, users may also conduct financial transactions (e.g., purchases, transfers, sales, etc.). In a typical scenario, a user registers for access to information and then submits a username and password to "log in" and access that information. Securing such information and data stored in data/communication networks remains a paramount concern.

Convenience has driven consumers toward biometric-based access management solutions. It is believed that most smartphone users prefer to use fingerprints instead of passwords, and many prefer eye recognition over fingerprint recognition. Biometrics are increasingly becoming the preferred and convenient method for identity detection, verification, and authentication.

Transmission-level encryption techniques provide relatively strong protection for the transmission of various types of data, including biometric data, and support confidentiality, assurance, and non-repudiation requirements. Standards such as IEEE 2410-2016 protect against adversaries listening to communications and provide detailed mechanisms for identity authentication based on pre-registered devices and previous identities, including by storing biometrics in encrypted form. This is a one-to-one case, in which a new sample is compared against an existing encrypted biometric sample, and it includes steps for transmitting and receiving encrypted biometric data. The one-to-one case is therefore considered an authentication use case: a given biometric vector and identity are used as input, and authentication succeeds when the biometric vector matches an existing biometric vector corresponding to the respective identity.

Disclosure of Invention

In one or more implementations, the present application provides computer-implemented systems and methods for matching an encrypted biometric input record with at least one stored encrypted biometric record without data decryption of the input and the at least one stored record. An initial biometric vector is provided to a neural network, and the neural network converts the initial biometric vector to a Euclidean measurable feature vector. The Euclidean measurable feature vector is stored in memory along with other Euclidean measurable feature vectors. In addition, a current biometric vector representing the encrypted biometric input record is received from the mobile computing device over the data communications network and provided to the neural network. The neural network converts the current biometric vector into a current Euclidean measurable feature vector. Further, at least some of the stored Euclidean measurable feature vectors in a portion of the data store are searched using the current Euclidean measurable feature vector. The encrypted biometric input record is matched to at least one encrypted biometric record in the encrypted space based on the absolute distance calculated between the current Euclidean measurable feature vector and each of the corresponding Euclidean measurable feature vectors in the portion of memory.
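As an illustration of the matching step described above, the following is a minimal sketch, not code from the patent; `euclidean_distance`, `match`, and the threshold value are hypothetical names and choices:

```python
import math

def euclidean_distance(a, b):
    """Absolute (Euclidean) distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match(current, stored, threshold=1.0):
    """Compare the current Euclidean measurable feature vector against
    each stored vector and return the index of the closest one, or None
    when nothing falls within the match threshold."""
    best_idx, best_dist = None, float("inf")
    for i, vec in enumerate(stored):
        d = euclidean_distance(current, vec)
        if d < best_dist:
            best_idx, best_dist = i, d
    return best_idx if best_dist <= threshold else None
```

Because the comparison operates on the feature vectors themselves, no stored record needs to be decrypted for the match.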

In one or more implementations, the present application further provides a method of classifying a euclidean measurable feature vector and/or classifying a current euclidean measurable feature vector, wherein the classification is performed, at least in part, using one or more distance functions.

In one or more implementations, floating point values are returned for the classification of the Euclidean measurable features and/or the current Euclidean measurable feature vector, and the absolute distance between each floating point value and its mean is calculated using the Frobenius algorithm.

In one or more implementations, the search is conducted in O(log n) time. The Euclidean measurable biometric vectors are classified using a Frobenius algorithm; a hierarchy of the classified Euclidean measurable biometric vectors is traversed in O(log n) time; and the corresponding Euclidean measurable biometric vector is identified as the current Euclidean measurable feature vector.
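One way such an O(log n) search could be realized is sketched below, under the assumption that the classification step reduces each vector to a scalar ordering key (here a Frobenius-style norm); all names are hypothetical:

```python
import bisect
import math

def frobenius_norm(vec):
    # Scalar classification value used to order the stored vectors.
    return math.sqrt(sum(x * x for x in vec))

class ClassifiedStore:
    """Stored vectors kept sorted by classification value, so the
    region of candidate matches is found by binary search in O(log n)."""

    def __init__(self, vectors):
        self._entries = sorted((frobenius_norm(v), v) for v in vectors)
        self._keys = [k for k, _ in self._entries]

    def candidates(self, query, tolerance):
        """Vectors whose classification value lies within `tolerance`
        of the query vector's value."""
        key = frobenius_norm(query)
        lo = bisect.bisect_left(self._keys, key - tolerance)
        hi = bisect.bisect_right(self._keys, key + tolerance)
        return [v for _, v in self._entries[lo:hi]]
```

Only the short candidate list then needs a full distance comparison to identify the current vector.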

In one or more implementations, the present application provides for identifying a plurality of floating point values for each respective euclidean measurable biometric vector, and using a bitmap to eliminate from absolute distance calculations any of the plurality of values that are not present in each vector.
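A sketch of the bitmap elimination, assuming each vector is a fixed-length list of slots in which a missing floating point value is represented by `None` (the representation and names are hypothetical):

```python
import math

def presence_bitmap(values):
    """Bitmap with bit i set when slot i holds a value."""
    bits = 0
    for i, v in enumerate(values):
        if v is not None:
            bits |= 1 << i
    return bits

def masked_distance(a, b):
    """Absolute distance over only the slots present in both vectors;
    ANDing the two bitmaps eliminates any value absent from either."""
    common = presence_bitmap(a) & presence_bitmap(b)
    total = 0.0
    for i in range(min(len(a), len(b))):
        if common >> i & 1:
            total += (a[i] - b[i]) ** 2
    return math.sqrt(total)
```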

In one or more implementations, the present application provides for identifying a plurality of floating point values for each respective Euclidean measurable biometric vector, and defining a sliding scale of importance based on the number of vectors in which a corresponding one of the floating point values occurs.
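The sliding scale can be read as weighting rare values more heavily than common ones. The linear scale below is an assumption for illustration; the application does not specify a weighting formula:

```python
from collections import Counter

def importance_weights(vectors):
    """Weight each distinct floating point value by how many vectors it
    occurs in: a value present in every vector contributes little, a
    value present in only one contributes the most."""
    counts = Counter()
    for vec in vectors:
        for v in set(vec):          # count at most once per vector
            counts[v] += 1
    total = len(vectors)
    return {v: 1.0 - (n - 1) / total for v, n in counts.items()}
```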

In one or more implementations, a neural network is configured with various convolutional layers, as well as a rectified linear unit (ReLU) activation function and pooling nodes.

In one or more implementations, the neural network is configured to use pooling in the form of non-linear down-sampling, wherein one or more pooling nodes progressively reduce the spatial size of the represented Euclidean measurable feature vectors to reduce the number of parameters and computations in the neural network.
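Max pooling is a common form of the non-linear down-sampling described here; a one-dimensional sketch (real convolutional networks typically pool 2-D feature maps):

```python
def max_pool_1d(values, window=2, stride=2):
    """Keep the maximum of each window, progressively shrinking the
    spatial size and thus the downstream parameter and compute cost."""
    return [max(values[i:i + window])
            for i in range(0, len(values) - window + 1, stride)]
```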

In one or more implementations, the present application provides for calculating, for each of a plurality of stored euclidean measurable feature vectors, a relative positional difference between an average face vector and the respective euclidean measurable feature vector; squaring the relative position difference; summing the values; and calculating the square root.
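The four steps above compute a standard Euclidean distance from the average face vector; a direct sketch (the function name is hypothetical):

```python
import math

def distance_from_average(average_face, feature_vector):
    diffs = [a - f for a, f in zip(average_face, feature_vector)]  # relative positional differences
    squared = [d * d for d in diffs]                               # square each difference
    total = sum(squared)                                           # sum the values
    return math.sqrt(total)                                        # take the square root
```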

In one or more implementations, the performance of the neural network is determined according to a cost function, where the spatial size of a given layer's output volume is calculated as a function of the input volume size W, the receptive field (kernel) size K of the layer's neurons, the stride S applied by the layer, and the amount of zero padding P used at the border.
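The quantities W, K, S, and P combine in the standard convolutional output-size formula, (W - K + 2P)/S + 1; a sketch:

```python
def conv_output_size(W, K, S, P):
    """Spatial size of a layer's output volume from input size W,
    receptive field (kernel) size K, stride S, and zero padding P."""
    size, rem = divmod(W - K + 2 * P, S)
    if rem:
        raise ValueError("hyperparameters do not tile the input evenly")
    return size + 1
```

For example, a 227-wide input with an 11-wide kernel, stride 4, and no padding yields a 55-wide output.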

These and other aspects, features and advantages may be understood from the accompanying description of certain embodiments of the invention, the drawings and the claims.

Drawings

FIG. 1 illustrates an exemplary system for identifying a user in accordance with one or more embodiments;

FIG. 2A is a block diagram showing components and features of an example user computing device, and including various hardware and software components for enabling operation of the system;

FIG. 2B illustrates a number of example modules, e.g., example modules encoded in memory and/or storage, in accordance with one or more embodiments;

FIG. 2C is a block diagram showing an example configuration of a system server;

FIG. 3 is a system diagram illustrating aspects of a health monitoring and tracking system in accordance with one or more embodiments;

FIGS. 4A and 4B illustrate an example neural network in operation, according to an example implementation of the present application;

FIG. 5 shows an example process performed using a neural network; and

FIG. 6 is a flow diagram illustrating example processing steps according to an implementation.

Detailed Description

Encryption remains a widely popular and effective way to protect information during its transfer. The nature of the information generally determines the level and type of encryption used to protect it and, in particular, to prevent the information from being compromised during delivery. Unfortunately, it is not always possible or practical to encrypt all stored data at the application level, for example because of the need to search that data. At least from a performance perspective, it is not feasible to search encrypted data efficiently without an exhaustive search process (including decrypting the data on a record-by-record basis).

In particular, personally identifiable information ("PII") requires encryption mechanisms and other policies and procedures for data protection, since various operations on the data require decrypting it for viewing and editing. For example, the Health Insurance Portability and Accountability Act (HIPAA) requires encryption of data during transmission and provides policies for publishing and distributing data. Password strength policies are intended to protect PII databases from leakage, for example in the case of theft. By performing operations such as searching without decryption, the data need not be exposed to potential harm. Biometric data requires further protection through procedures and policies that introduce additional mechanisms, including more complex encryption schemes such as visual cryptography.

The present application includes features and functionality for providing encrypted searching beyond one-to-one record matching given reference biometric features and new input records. Further, the present application provides new methods and systems for searching encrypted biometric data without the need for data decryption in a storage medium (e.g., a database, file system, or other persistence mechanism). In addition to one-to-one implementations, the teachings herein also support one-to-many implementations, in which a search of encrypted biometric records may be conducted based on a newly received biometric record. Without such support, an exhaustive O(n) search would be required, in which each record is decrypted and compared. According to one or more embodiments of the present application, an O(log n) solution is provided that locates records without requiring decryption. In this context, typically an identification use case, a given biometric vector is provided as input, and the stored biometrics may be searched to determine whether that biometric is in the database.

In one or more embodiments, the present application provides a polynomial-based solution for identification in large encrypted data stores of biometrics, thereby providing a system and mechanism to protect privacy. First, an initial biometric with a low degree of disorder is selected. Next, a data structure is chosen that provides O(log n) searching. Then, the algorithm at intermediate nodes is selected such that the biometric cannot be recovered and the hashing cannot be inverted to yield the original biometric. Thus, embodiments according to the present application provide an end-to-end technique for biometrics, providing identification across databases of thousands (or more) of subjects.

In one or more embodiments, a neural network, which may include a convolutional neural network, is used to process the image and determine the correct cost function, e.g., to represent the performance of the neural network. In one or more embodiments, neural networks are used to process other data, including audio content, such as recordings or representations of human sounds. Although many of the example implementations shown and described herein involve processing one or more image files, those skilled in the art will recognize that the present application is preferably biometric agnostic, and that any suitable biometric may be used in accordance with the teachings herein. Various types of neural networks are suitable for receiving information in various formats and generating feature vectors, such as convolutional neural networks, recurrent neural networks ("RNNs"), or deep machine learning systems.

An exemplary system for identifying a user, which may be configured to interface with a neural network (not shown), is shown in block diagram form in FIG. 1. An exemplary arrangement includes a system server 105, a remote computing device 102, and user computing devices 101a and 101b. The system server 105 may be virtually any computing device and/or data processing device capable of communicating with the devices 101a, 101b and other remote computing devices 102, including receiving, transmitting, and storing electronic information and processing requests as further described herein. System server 105, remote computing device 102, and user devices 101a and 101b are intended to represent various forms of computers, such as laptops, desktops, computer workstations, personal digital assistants, servers, blade servers, mainframes, and other computers and/or network-based or cloud-based computing systems.

In one or more implementations, the remote computing device 102 may be associated with a business organization, such as a financial institution, an insurance company, or any entity that maintains a user business account (also referred to as a "transaction account"). Such enterprise organizations provide services to account holders and require authentication of users before granting access to the enterprise systems and services. As a further example, the remote computing device 102 may include a payment network and/or a banking network for processing financial transactions, as understood by those skilled in the art.

The user computing devices 101a, 101b may be any computing device and/or data processing apparatus capable of embodying the systems and/or methods described herein, including but not limited to personal computers, tablet computers, personal digital assistants, mobile electronic devices, cellular or smartphone devices, and the like. Transaction terminal 101b is intended to represent various forms of computing devices, such as workstations, dedicated point-of-sale systems, ATM terminals, personal computers, laptop computers, tablet computers, smart phone devices, personal digital assistants, or other suitable computers that may be used to conduct electronic transactions. As described further herein, the devices 101a, 101b may also be configured to receive user input and to capture and process biometric information, e.g., a digital image of a user.

In one or more implementations, the system server 105 is used to implement rules that govern information access and/or information transfer between computing devices (e.g., devices 101a, 101b) with which users interact and one or more trusted backend servers (e.g., remote computing device 102).

As further described herein, systems and methods for identifying and/or authenticating users may meet the level of security required by enterprise systems by integrating with existing systems (e.g., a financial institution's transaction processing and data management systems) using APIs. Thus, the system server 105 need not know whether the underlying system (e.g., the remote computing device 102) is a relational database management system (RDBMS), a search engine, a financial transaction processing system, or the like. The systems and methods for facilitating secure authentication thereby provide a "point and cut" mechanism to add appropriate security to existing enterprise systems as well as to systems under development. In some implementations, the system architecture is language-neutral, allowing REST, JSON, and Secure Sockets Layer (SSL) to provide the communication interface between the various computing devices (e.g., 101a, 101b, 102, and 105). Further, in one or more implementations, the architecture is built on the Servlet specification, OpenSSL, Java, JSON, REST, and/or Apache Solr. Thus, the disclosed system for authenticating users may implement open standards, allowing significant interoperability.

It should be further understood that although the various computing devices and machines referenced herein, including but not limited to user devices 101a and 101b, system server 105, and remote computing device 102, are referred to as separate/single devices and/or machines, in certain implementations the referenced devices and machines, along with their associated and/or accompanying operations, features, and/or functions, may be combined, arranged, or otherwise employed across any number of devices and/or machines, such as over a network connection or a wired connection, as is known to those skilled in the art.

FIG. 2A is a block diagram illustrating components and features of user computing device 101a, including various hardware and software components that enable the system to operate: one or more processors 110, memory 120, microphone 125, display 140, camera 145, audio output 155, storage 190, and communication interface 150. Processor 110 serves to execute a client application in the form of software instructions that may be loaded into memory 120. The processor 110 may be a plurality of processors, a central processing unit (CPU), a graphics processing unit (GPU), a multi-processor core, or any other type of processor, depending on the particular implementation.

Preferably, the processor 110 has access to the memory 120 and/or storage 190, thereby enabling the processor to receive and execute instructions encoded thereon to cause the device and its various hardware components to perform operations for aspects of the exemplary systems and methods disclosed herein. The memory may be, for example, random access memory (RAM) or any other suitable volatile or non-volatile computer-readable storage medium. Storage 190 may take various forms depending on the particular implementation. For example, the storage may contain one or more components or devices, such as a hard drive, flash memory, a rewritable optical disk, rewritable magnetic tape, or some combination of the above. Additionally, the memory and/or storage may be fixed or removable.

One or more software modules 130 are encoded in storage 190 and/or memory 120. The software modules 130 may include one or more software programs or applications having computer program code or a set of instructions executed by the processor 110. As shown in FIG. 2B, one or more of a user interface module 170, a biometric capture module 172, an analysis module 174, an enrollment module 176, a database module 178, an authentication module 180, and a communication module 182 may be included among the software modules 130 executed by the processor 110. Such computer program code or instructions configure the processor 110 to carry out operations of the systems and methods disclosed herein and may be written in any combination of one or more programming languages.

The program code may execute entirely on the user computing device 101 as a stand-alone device, partly on the user computing device 101 and partly on the system server 105, or entirely on the system server 105 or another remote computer/device 102. In the latter scenario, the remote computer may be connected to the user computing device 101 through any type of network, including a local area network (LAN), a wide area network (WAN), a mobile communications network, or a cellular network, or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).

As is known to those of ordinary skill in the art, the program code of the software modules 130 and one or more computer-readable storage devices (e.g., memory 120 and/or storage 190) may also form a computer program product that may be manufactured and/or distributed in accordance with the present invention.

It should be appreciated that in some demonstrative embodiments, one or more software modules 130 may be downloaded to storage 190 from another device or system over a network via communication interface 150. Additionally, it should be noted that other information and/or data (e.g., data 185) related to the operation of the present systems and methods may also be stored in storage. In some implementations, such information is stored in a specially allocated encrypted data store to securely store information collected or generated by the processor 110 executing the software modules 130. Such data may be encrypted using, for example, a 1024-bit polymorphic cipher or, depending on export controls, an AES 256-bit encryption method. Further, encryption may be performed using a remote key (seed) or a local key (seed). As will be appreciated by those skilled in the art, alternative methods may be used, such as SHA-256 hashing.

Additionally, data stored on the user computing device 101a and/or the system server 105 may be encrypted using an encryption key generated from biometric information, liveness information, or user computing device information of the user, as further described herein. In some implementations, the foregoing combination can be used to create a complex unique key for a user that can be encrypted on the user computing device using elliptic curve cryptography, preferably at least 384 bits in length. In addition, the key may be used to protect user data stored on the user computing device and/or the system server.

Also preferably stored on memory 190 is database 185. As will be described in greater detail below, the database contains and/or maintains various data items and elements that are used in various operations of the systems and methods for authenticating a user conducting a financial transaction at a transaction terminal. As will be described in greater detail herein, the information stored in database 185 may include, but is not limited to, user profiles. It should be noted that although database 185 is described as being locally configured to user computing device 101a, additionally or alternatively, in some implementations, the database and/or various data elements stored therein may be remotely located (e.g., on remote device 102 or system server 105-not shown) and connected to the user computing device over a network in a manner known to those of ordinary skill in the art.

The user interface 115 is also operatively connected to the processor. The interface may be one or more input or output devices, such as switches, buttons, keys, touch screen, microphone, etc., as is understood in the art of electronic computing devices. The user interface 115 is used to facilitate capturing commands from a user, such as on/off commands or user information and settings related to the operation of the system for authenticating the user. For example, the interface is used to facilitate capturing certain information from the user computing device 101, such as personal user information for registering with the system to create a user profile.

The computing device 101a may also include a display 140, the display 140 also being operatively connected to the processor 110. The display includes a screen or any other such presentation device that enables the user computing device to indicate or otherwise provide feedback to the user regarding the operation of the system 100. By way of example, the display may be a digital display, such as a dot matrix display or other two-dimensional display.

As a further example, the interface and display may be integrated into a touch screen display. Thus, the display is also used to display a graphical user interface that can display various data and provide a "form" that includes fields that allow a user to enter information. Touching the touch screen at a location corresponding to the display of the graphical user interface allows a user to interact with the device to input data, change settings, control functions, and the like. Thus, when the touch screen is touched, the user interface 115 communicates the change to the processor 110, and the settings may be changed or user-entered information may be captured and stored in the memory 120 and/or storage 190.

The devices 101a, 101b may also include a camera 145 capable of capturing digital images. The camera may be one or more imaging devices configured to capture images of at least a portion of the user's body, including the user's eyes and/or face, while the user computing device 101a is in use. The camera is used to facilitate, by the configured user computing device processor, the capture of user images for purposes of image analysis, including identification of biometric features to biometrically authenticate the user from an image. The user computing device 101a and/or the camera 145 may also include one or more light or signal emitters (not shown), such as visible light emitters and/or infrared light emitters. The camera may be integrated into the user computing device, such as a front-facing or rear-facing camera incorporating a sensor such as, but not limited to, a CCD or CMOS sensor. Alternatively, the camera may be external to the user computing device 101a. One skilled in the art will appreciate possible variations of cameras and light emitters. Additionally, as will be appreciated by those skilled in the art, the user computing device may also include one or more microphones 125 for capturing audio recordings.

An audio output 155 may also be operatively connected to the processor 110. As will be understood by those skilled in the art, the audio output may be integrated into the user computing device 101 or external to the user computing device, and may be any type of speaker system configured to play electronic audio files.

Various hardware devices/sensors 160 may be operatively connected to the processor. The sensors 160 may include: an on-board clock to track the time of day; a GPS-enabled device to determine a location of the user computing device; an accelerometer to track the direction and acceleration of the user computing device; a magnetometer to measure the earth's magnetic field; a proximity sensor to measure the distance from the user computing device to an object; an RF radiation sensor to measure radiation; and other devices for capturing information about the environment of the user computing device, as will be understood by those skilled in the art.

The communication interface 150 is also operatively connected to the processor 110 and may be any interface that enables communication between the user computing device 101a and external devices, machines, and/or elements. Preferably, the communication interface may include, but is not limited to, a modem, a Network Interface Card (NIC), an integrated network interface, a radio frequency transmitter/receiver (e.g., bluetooth, cellular, NFC), a satellite communication transmitter/receiver, an infrared port, a USB connection, and/or any other such interface for connecting the user computing device to other computing devices and/or communication networks (e.g., private networks and the internet). Such a connection may include a wired connection or a wireless connection (e.g., using 802.11 standards), although it should be understood that the communication interface may be virtually any interface capable of communicating with or from a user computing device.

Fig. 2C is a block diagram showing an exemplary configuration of the system server 105. The system server 105 may include a processor 210, the processor 210 operatively connected to various hardware and software components that enable operation of the system, for example, to authenticate a user in connection with a transaction at a transaction terminal. The processor 210 is operable to execute instructions to perform various operations, including instructions related to user authentication and transaction processing/authorization. Depending on the particular implementation, processor 210 may be a plurality of processors, a multi-processor core, or some other type of processor.

In certain implementations, processor 210 may access memory 220 and/or storage 290 to enable processor 210 to receive and execute instructions stored thereon. Memory 220 may be, for example, Random Access Memory (RAM) or any other suitable volatile or non-volatile computer-readable storage medium. In addition, the memory 220 may be fixed or removable. Storage 290 may take various forms depending on the particular implementation. For example, storage 290 may contain one or more components or devices, such as a hard drive, flash memory, a rewritable optical disk, rewritable magnetic tape, or some combination of the above. Storage 290 may also be fixed or removable.

One or more software modules 130 (shown in fig. 2B) may be encoded in storage 290 and/or memory 220. The software modules 130 may include one or more software programs or applications (collectively, "secure authentication server applications") having a set of computer program codes or instructions that are executed in the processor 210. Such computer program code or instructions for carrying out operations for aspects of the systems and methods disclosed herein may be written in any combination of one or more programming languages, as will be appreciated by those skilled in the art. The program code may execute entirely on system server 105, partly on system server 105 and partly on a remote computing device (e.g., remote computing device 102, user computing device 101a, and/or user computing device 101b) as a stand-alone software package, or entirely on such a remote computing device.

Also preferably stored on memory 290 is database 280. As will be described in greater detail below, database 280 contains and/or maintains various data items and elements used in various operations of system 100, including but not limited to user profiles as will be described in greater detail herein. It should be noted that although database 280 is described as being locally configured to computing device 105, in certain implementations, database 280 and/or the various data elements stored therein may be stored in a manner known to those of ordinary skill in the art on a computer readable memory or storage medium that is remotely located and connected to system server 105 via a network (not shown).

A communication interface 250 is also operatively connected to the processor 210. The communication interface may be any interface that enables communication between the system server 105 and external devices, machines, and/or elements. In certain implementations, the communication interface may include, but is not limited to, a modem, a Network Interface Card (NIC), an integrated network interface, a radio frequency transmitter/receiver (e.g., Bluetooth, cellular, NFC), a satellite communication transmitter/receiver, an infrared port, a USB connection, and/or any other such interface for connecting the system server 105 to other computing devices and/or communication networks (e.g., private networks and the internet). Such a connection may include a wired connection or a wireless connection (e.g., using the 802.11 standards), although it is understood that communication interface 250 may be virtually any interface capable of communicating to or from processor 210.

With reference to fig. 3-4B and with continuing reference to fig. 1 and 2A-2B, the operation of the system for authenticating a user, as well as the various elements and components described above, will be further understood. The processes shown in fig. 3 and 4 are illustrated from the perspective of the user computing device 101a and the system server 105, however, it should be understood that these processes may be performed in whole or in part by the user computing device 101a, the system server 105, and/or other computing devices (e.g., the remote computing device 102), or any combination thereof.

Fig. 3 is a diagram showing a series of nodes X1, X2, and X3 in a neural network, and further showing an activation function that sums a matrix of weights multiplied by the X values to produce the Y values. The goal is to find the ideal number of neurons and layers in the neural network. The spatial size of the output volume, i.e., the number of neurons that "fit" in a given volume, can be calculated as (W − K + 2P)/S + 1, where W is the input volume size, K is the kernel field size of the convolutional layer neurons, S is the stride with which they are applied, and P is the amount of zero padding used on the borders. The cost function used is Euclidean.
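The output-size formula can be sketched directly as a small helper; the example hyperparameters below are illustrative, not taken from the source:

```python
def conv_output_size(w: int, k: int, p: int, s: int) -> int:
    """Spatial size of a convolutional layer's output volume: (W - K + 2P)/S + 1.

    w: input volume size, k: kernel field size, p: zero padding per border,
    s: stride with which the kernel is applied.
    """
    if (w - k + 2 * p) % s != 0:
        raise ValueError("hyperparameters do not tile the input evenly")
    return (w - k + 2 * p) // s + 1

# e.g. a 227-wide input, 11-wide kernels, no padding, stride 4:
print(conv_output_size(227, 11, 0, 4))  # -> 55
```

A non-integer result indicates inconsistent hyperparameters, which is why the helper raises rather than rounding silently.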

For example, a neural network is initially trained as a classifier using labeled biometric data; as part of the training process, images of each person in the training set are stored. After an initial biometric vector ("IBV") is introduced into the neural network, the vector produced at layer n−1 can be used as a unique feature vector identifying the initial biometric vector, even in a homomorphic encrypted space. The feature vector is Euclidean measurable and encrypted, and it replaces the biometric feature vector. In one or more implementations, the feature vector is a list of 256 floating point numbers, and the biometric cannot be reconstructed from that list. Thus, the feature vector is encrypted in one direction (a one-way transform) while retaining Euclidean measurable characteristics.
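A minimal sketch of extracting such a layer n−1 feature vector follows. The random weights are placeholders standing in for a trained classifier network, and the layer sizes are illustrative assumptions:

```python
import numpy as np

# Placeholder weights; in the application these would come from a network
# trained as a classifier on labeled biometric images.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((512, 1024)) * 0.05   # hidden layer
W2 = rng.standard_normal((256, 512)) * 0.05    # layer n-1: the feature-vector layer
# A final classification layer would follow during training, but it is not
# needed to produce the feature vector.

def feature_vector(ibv: np.ndarray) -> np.ndarray:
    """Map an initial biometric vector (IBV) to 256 floats: the layer n-1
    activations, a one-way, Euclidean measurable representation."""
    h = np.maximum(0.0, W1 @ ibv)      # ReLU activation
    return np.maximum(0.0, W2 @ h)     # penultimate-layer output

ibv = rng.standard_normal(1024)        # stand-in for an encoded biometric
fv = feature_vector(ibv)
print(fv.shape)                        # -> (256,)
```

Because only matrix products and rectifications are applied, two such feature vectors can be compared directly by Euclidean distance, which is the property the rest of this description relies on.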

It has been recognized that the present application provides applicability in various vertical markets that may require encrypted searches using biometric input for identification. For example, insurance companies and banks desire a mechanism to identify account holders who have lost their account numbers. The solution provided according to the present application can operate in O(log(n)) time, which is believed to improve on existing O(n) algorithms for searches over databases of encrypted biometric records.

In one or more implementations, for example, a mathematical transform is employed to provide the partitioning for the linear search. In this case, the transform takes the biometric feature vector and returns the feature vector. Thereafter, the feature vector provides direct access to the corresponding nodes. The accessed nodes may then be linearly scanned to determine their respective identities. For example, a single user may be identified by linearly scanning the node accessed via the feature vector.

In one or more implementations, a mathematical function that partitions information (e.g., vectors) associated with a biometric to support a tree structure such as a B + tree may be employed. Alternatively or additionally, information associated with the biometric (e.g., biometric vectors) may be placed through a neural network, which is operable to create an irreversible biometric for matching or proximity.
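One way to sketch this partition-then-scan flow is below. The transform (quantizing each vector's distance to a mean vector), the bucket width, and all names are illustrative assumptions, not the source's method:

```python
import numpy as np
from collections import defaultdict

# Hypothetical partitioning transform: the feature vector's distance to a
# mean vector is quantized into a bucket id, giving direct access to a
# small node that is then scanned linearly.
BUCKET_WIDTH = 0.25
nodes = defaultdict(list)            # bucket id -> [(person_id, fv), ...]

def bucket_of(fv: np.ndarray, mean: np.ndarray) -> int:
    return int(np.linalg.norm(fv - mean) / BUCKET_WIDTH)

def enroll(person_id: str, fv: np.ndarray, mean: np.ndarray) -> None:
    nodes[bucket_of(fv, mean)].append((person_id, fv))

def identify(fv: np.ndarray, mean: np.ndarray):
    candidates = nodes.get(bucket_of(fv, mean), [])
    if not candidates:
        return None
    # Linear scan restricted to the single accessed node, not the whole store.
    return min(candidates, key=lambda e: float(np.linalg.norm(e[1] - fv)))[0]

mean = np.zeros(2)
enroll("person1", np.array([0.50, 0.0]), mean)   # lands in bucket 2
enroll("person2", np.array([1.00, 0.0]), mean)   # lands in bucket 4
print(identify(np.array([0.55, 0.0]), mean))     # -> person1
```

A sorted structure such as a B+ tree could replace the dictionary here; the essential point is that the transform narrows the linear scan to one partition.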

In connection with implementations employing neural networks, the function of the Euclidean distance algorithm is to convert facial biometric vectors, or other values, into a form that can be compared. The initial biometric values (templates) are processed through a neural network and feature vectors are generated. Such implementations are useful, for example, in connection with "one-to-one" use cases or one-to-many use cases. In either case, the neural network and the feature vector may be used to perform the comparison in the encrypted space. Additionally, in the one-to-one case, after applying multiple layers of the neural network, the network may apply a Euclidean cost to access a particular biometric or one or more sets of biometrics. After the matrix multiplication of each respective layer, the resulting vector may be compared using a Euclidean distance algorithm based on a Euclidean cost function; this vector is generally referred to herein as a feature vector.

The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.

Fig. 4A and 4B illustrate an example neural network in operation, according to an example implementation of the present application. The initial image is applied to the neural network. A final softmax is provided showing which defined "bucket" the image belongs to. In each bucket shown, an image of a given individual is provided. For example, when the feature vector of person X, as 256 values, is applied to the classification function, a floating point number Y is derived. For all values around Y, the individual can be identified, as long as the feature vector is Euclidean measurable and the classification function is stable.

With continued reference to fig. 4A and 4B, the resulting softmax function shows images of subjects in the training set. Thereafter, the matrix calculations used to create the softmax are shown. The layer preceding the softmax is fc7-conv, which carries the output of convolutional layer 7; that output is the feature vector.

FIG. 5 shows an example process according to a neural network. Various convolutional layers are provided, along with linear rectification functions (ReLUs) and pooling nodes. In the example shown in fig. 5, the linear rectification function is an activation function suitable for deep neural networks, defined as f(x) = max(0, x).

In addition to ReLU, the neural network of the present application may be configured to use pooling in the form of non-linear down-sampling. More specifically, a pooling algorithm, generally referred to herein as max pooling, partitions an input image into a set of non-overlapping rectangles and outputs the maximum value of each such sub-region. Once a feature is found, its exact location may not be as important as its relationship to other features. The function of the pooling layer is to gradually reduce the size of the represented space, reducing the number of parameters and computations in the network. Thus, by using convolution, ReLU, and max pooling, a reasonably sized vector of 256 values is provided for matching, classification, and comparison. Furthermore, the encrypted Euclidean Measurable Feature Vectors (EMFVs) from the neural network may later be used in a classification algorithm to receive a scalar value that may be compared with the values of feature vectors from other biometrics. In one or more implementations, the present application utilizes a pre-trained model, such as a deep learning model trained on data associated with facial images.
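The non-overlapping max-pooling step described above can be sketched with NumPy (a minimal illustration; the even-dimension assumption is ours, made to keep the reshape trick simple):

```python
import numpy as np

def max_pool_2x2(img: np.ndarray) -> np.ndarray:
    """Non-overlapping 2x2 max pooling: partition the input into 2x2
    tiles and keep each tile's maximum value."""
    h, w = img.shape
    assert h % 2 == 0 and w % 2 == 0, "sketch assumes even dimensions"
    # Reshape so axes 1 and 3 index within each 2x2 tile, then take the max.
    return img.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.array([[1, 3, 2, 0],
              [4, 2, 1, 5],
              [0, 1, 9, 2],
              [3, 2, 4, 8]])
print(max_pool_2x2(x))
# [[4 5]
#  [3 9]]
```

Each output cell summarizes one 2×2 tile, halving each spatial dimension, which is the parameter reduction the paragraph describes.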

With continued reference to fig. 5, the feature vector is an index for storage. A classification algorithm is employed for the index that returns a floating point number given the input feature vector, which can be used as an index for storage. The classification algorithm relies on high quality average vectors, which helps to create interpersonal distances. The classification algorithm allows for searching and finding registered people within polynomial time.

The aforementioned floating point number allows for an index search of previously stored biometrics. Further, the present application employs a classification algorithm to provide an expected person based on the input feature vector.

In one or more implementations, an algorithm is utilized in which data and matrices are stored to provide a learning model. In this case, the average face vector is compared with the corresponding IBV, and the difference between the average face vector and the IBV is calculated. For example, the Frobenius algorithm may be utilized, where the absolute distance between each floating point and its mean is calculated. Further, the relative position difference between the average face vector and the corresponding IBV is calculated and then squared. Thereafter, all values are added and the square root is calculated. This results in clusters that are relatively close in value, i.e., all calculated Frobenius distances are relatively close.
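The distance computation described above, per-component differences against the average face vector, squared, summed, and square-rooted, can be sketched as follows; the short example vectors are illustrative stand-ins for 256-component feature vectors:

```python
import numpy as np

def frobenius_distance(fv: np.ndarray, mean_fv: np.ndarray) -> float:
    """Distance between a feature vector and the average face vector:
    each component difference is squared, the squares are summed, and
    the square root is taken (the Euclidean/Frobenius norm of the
    difference)."""
    diff = fv - mean_fv
    return float(np.sqrt(np.sum(diff ** 2)))

mean_fv = np.array([0.2, 0.5, 0.1])   # illustrative average face vector
ibv_fv  = np.array([0.5, 0.1, 0.1])   # illustrative IBV-derived feature vector
print(round(frobenius_distance(ibv_fv, mean_fv), 4))  # -> 0.5
```

Vectors belonging to the same person yield distances clustered in a narrow range, which is what makes the banded classification below possible.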

In one or more implementations, during registration, a plurality of face vectors (0-12) are obtained. These vectors are processed through a neural network, and if a component of any processed feature vector is 0, its value is decayed (toward or to 0), since a component that is not present in every feature vector is not discriminative.

In operation, each time an image is used, this decay is applied again, providing more separation within the feature vectors. Given 256-component non-negative integer feature vectors, the process described herein classifies each vector into the appropriate pool, and each pool corresponds to one and only one distinct person. If the person cannot be found, the algorithm looks for nearby people to complete the search.

Thus, the present application uses the Euclidean distance to classify vectors of the form x = (x_0, x_1, …, x_255). Unless otherwise stated, x_i is a non-negative integer for all i ∈ {0, 1, …, 255}. The distance between two vectors x and y is defined as

d(x, y) = sqrt( Σ_{i=0}^{255} (x_i − y_i)^2 )

Given a vector x, let d(x) denote its distance d(x, m_U) from the mean vector defined below.

In operation, U is defined as the set of known vectors and |U| is the number of vectors in U; |U| is assumed to be finite. Then the average of all vectors in U is given by m_U = (1/|U|) Σ_{x∈U} x, and can be calculated explicitly if |U| is small enough. If |U| is not small enough, m_U can be approximated. Note that the components of m_U are not necessarily non-negative integers.

Now consider a partition of U into disjoint subsets, U = ∐_j P_j. We have experimentally observed that for each P_j there exist a_j, b_j ∈ (0, ∞) such that for all x ∈ P_j,

a_j ≤ d(x, m_U) ≤ b_j.

Furthermore, for j ≠ k, the intervals [a_j, b_j] and [a_k, b_k] are disjoint. In other words, bands of distances from the mean vector may be used to classify an unknown vector.

Given a vector y ∉ U, compute

d(y, m_U).

To determine how to uniquely extend the partition of U to a partition of V = U ∪ {y}: if for all j,

d(y, m_U) ∉ [a_j, b_j],

then the interval [a_j, b_j] closest to y is selected, and the subset P_j associated with that interval is chosen to include y in the extension of the original partition to a partition of V. If y happens to be equidistant from two different intervals, the choice of which subset should include y is ambiguous; in this case, the numerical results should be rechecked so that one of the two intervals equidistant from y can be definitively selected.
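The partition-extension rule above can be sketched as follows; the band values reuse the example ranges reported later in this description, and the function name is hypothetical:

```python
def assign_to_band(d_y: float, bands: list) -> int:
    """Extend the partition of U to V = U ∪ {y}: if d(y, m_U) falls in
    some band [a_j, b_j], y joins P_j; otherwise y joins the subset
    whose band is closest (an exact tie would need re-examination,
    per the text)."""
    for j, (a, b) in enumerate(bands):
        if a <= d_y <= b:
            return j
    # No band contains d_y: pick the nearest interval by endpoint distance.
    return min(range(len(bands)),
               key=lambda j: min(abs(d_y - bands[j][0]),
                                 abs(d_y - bands[j][1])))

bands = [(0.85, 1.12), (1.18, 1.32), (0.39, 0.68)]  # example ranges from the text
print(assign_to_band(1.00, bands))  # -> 0 (inside person 1's band)
print(assign_to_band(0.75, bands))  # -> 2 (nearest to person 3's band)
```

Since the bands are disjoint, at most one branch of the membership test can fire, so the assignment is unambiguous except in the equidistant tie case the text flags.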

In an example operation, 257 images are used for training. In an example training operation, the neural network is a 32-node 8-convolutional layer network. Training assigns appropriate weights to the network. Through this trained network, new faces can be applied to storage or lookup. In either case, the face passes through the neural network and the feature vectors are received on convolutional layer 7.

Using a set of images and applying our neural network, the following EMFVs were received at convolutional layer 7.

An example of one such output is as follows:

300071240220002020010134704148202608750004703201234000000200216302110040835053140903161028102022310420252102200101321101955301040042154105131581747002414403004094003114050121670550460237004130300610100100730500030102300111060181011000039207250001003530005342

the feature vectors are classified as a particular person in O(1) at runtime. Using the algorithm described herein, the ranges of three individuals were determined. Using the normalized vectors, the final ranges for the images are as follows: for person 1, the normalized distance ranges from 0.85 to 1.12; for person 2, from 1.18 to 1.32; and for person 3, from 0.39 to 0.68.

As a practical example, when a subsequent IBV of person 1 is provided to the neural network and classification algorithm, the result falls between 0.85 and 1.12. Likewise, a subsequent IBV of person 3 produces a result between 0.39 and 0.68. These acceptable ranges provide a binding for each person, and no conflict is observed between the bindings of different people. For small amounts of data, the results are accurate.

Accordingly, the present application provides the following techniques: (1) obtaining a biometric, (2) plaintext biometric matching, (3) encrypting the biometric, (4) performing a Euclidean measurable match, and (5) searching using a one-to-many indexing scheme. These are provided in a privacy-preserving and polynomial-time-based manner that is also biometric agnostic and performed as a function of machine learning. The present application provides a general solution to generate Euclidean measurable biometric ciphertext, including as a function of a convolutional neural network. This is further provided as a function of a classification algorithm for one-to-many recognition that maximizes privacy and runs in O(1) to O(log(n)) time.

An Initial Biometric Vector (IBV) is received and one or more modules apply a neural network to the IBV. The IBV may be processed on a user computing device or server through a set of matrix operations to create a feature vector. Some or all of the feature vectors may be stored on the client or the server. After matrix multiplication across several layers, the IBV is rendered as a Euclidean measurable vector. In one or more implementations, the same operations may be applied to the Current Biometric Vector (CBV), and the two vectors are matched.

Fig. 6 is a flow diagram illustrating example process steps 600 according to an implementation. It should be appreciated that several of the logical operations described herein are implemented (1) as a sequence of computer-implemented acts or program modules running on a communication device and/or (2) as interconnected machine logic circuits or circuit modules within the communication device. The specific implementation is a choice dependent on the requirements of the device (e.g., size, energy consumption, performance, etc.). Accordingly, the logical operations described herein are referred to variously as operations, structural devices, acts, or modules. Various of these operations, structural devices, acts, and modules may be implemented in software, in firmware, in special-purpose digital logic, or any combination thereof. It should also be appreciated that more or fewer operations may be performed than shown in the figures and described herein. These operations may also be performed in a different order than described herein. The example steps 600 shown in fig. 6 provide a computer-implemented method for matching an encrypted biometric input record with at least one stored encrypted biometric record without data decryption of the input record and the at least one stored record.

The process begins at step 602, and an initial biometric vector is provided to a neural network, and the neural network converts the initial biometric vector to a Euclidean measurable feature vector (step 604). The Euclidean measurable feature vector is stored in memory along with other Euclidean measurable feature vectors (step 606). In addition, a current biometric vector representing the encrypted biometric input record is received from the mobile computing device over the data communications network and provided to the neural network (step 608). The neural network converts the current biometric vector to a current Euclidean measurable feature vector (step 610). In addition, at least some of the stored Euclidean measurable feature vectors in the portion of the data store are searched using the current Euclidean measurable feature vector (step 614). The encrypted biometric input record is matched to at least one encrypted biometric record in the encrypted space based on a calculated absolute distance between the current Euclidean measurable feature vector and each respective Euclidean measurable feature vector in the portion of memory (step 616).
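The steps above can be sketched end to end. The names, the acceptance threshold, and the identity stand-in for the trained network are all illustrative assumptions, not the source's implementation:

```python
import numpy as np

# Sketch of process 600; "network" stands in for the trained neural
# network that performs the conversions of steps 604 and 610.
THRESHOLD = 1.0          # illustrative acceptance distance, not from the source
store = []               # enrolled (record_id, Euclidean measurable feature vector)

def to_emfv(biometric_vector, network):
    return network(biometric_vector)                   # steps 604 / 610

def enroll(record_id, ibv, network):
    store.append((record_id, to_emfv(ibv, network)))   # step 606

def match(cbv, network):
    cur = to_emfv(cbv, network)                        # steps 608-610
    best_id, best_d = None, float("inf")
    for record_id, emfv in store:                      # step 614: search the store
        d = float(np.linalg.norm(cur - emfv))          # step 616: absolute distance
        if d < best_d:
            best_id, best_d = record_id, d
    return best_id if best_d <= THRESHOLD else None

network = lambda v: v    # stand-in; a real network applies trained matrix operations
enroll("u1", np.array([0.0, 1.0]), network)
print(match(np.array([0.1, 1.0]), network))            # -> u1
```

Note that matching operates entirely on the feature vectors; neither the stored nor the input record is ever decrypted, consistent with the claimed method.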

The above described subject matter is provided by way of illustration only and should not be construed as limiting. Various modifications and changes may be made to the subject matter described herein without following the example embodiments and applications illustrated and described, and without departing from the true spirit and scope of the present invention, including as set forth in each and any of the following claims.
