Image encryption method, device, medium and electronic equipment

Document No.: 172702 | Publication date: 2021-10-29

Reading note: This technology, "Image encryption method, device, medium and electronic equipment", was designed and created by 杨伟明, 郭润增, 王少鸣 and 唐惠忠 on 2021-01-18. Its main content is as follows: The embodiments of the application provide an image encryption method and apparatus, a computer-readable medium, and an electronic device. The image encryption method includes: generating a derivative image corresponding to an acquired image to be processed, determining a first security factor based on the degree of association between the derivative image and the image to be processed, and jointly generating an image security factor from the first security factor and a second security factor generated from the encryption parameters of the image to be processed; and encrypting the image to be processed based on the image security factor to generate encrypted information corresponding to the image to be processed. In this embodiment, the information of the image to be processed is more difficult to crack and tamper with during transmission, improving the security and confidentiality of the image to be processed.

1. An image encryption method, comprising:

acquiring an image to be processed that is to be encrypted;

generating a derivative image associated with the image to be processed based on the image to be processed, and determining a first security factor based on the degree of association between the derivative image and the image to be processed;

generating an image security factor based on the first security factor and a second security factor generated based on the encryption parameter of the image to be processed;

and encrypting the image to be processed based on the image security factor to generate encrypted information corresponding to the image to be processed.

2. The method according to claim 1, wherein generating a derivative image associated with the image to be processed based on the image to be processed, and determining a first security factor based on a degree of association between the derivative image and the image to be processed comprises:

encoding the image features of the image to be processed based on a deconvolution network, and generating a posterior distribution corresponding to a derivative image of the image to be processed;

decoding the posterior distribution based on a convolutional network to generate a conditional distribution corresponding to the posterior distribution, wherein the conditional distribution is used for representing the degree of association between the posterior distribution and the image to be processed;

determining the first security factor based on the conditional distribution.

3. The method according to claim 2, wherein encoding the image features of the image to be processed based on a deconvolution network to generate a posterior distribution corresponding to a derivative image of the image to be processed comprises:

extracting potential information containing image features from an image to be processed;

and encoding the potential information based on the deconvolution network to generate a posterior distribution corresponding to the derivative image.

4. The method of claim 3, wherein encoding the latent information based on the deconvolution network to generate a posterior distribution corresponding to the derivative image comprises:

fully connecting the potential information in the deconvolution network to generate a first feature;

resampling the first feature to generate a second feature;

and performing deconvolution-based encoding on the second feature to generate a posterior distribution corresponding to the derivative image.

5. The method of claim 2, wherein decoding the posterior distribution based on a convolutional network to generate a conditional distribution corresponding to the posterior distribution comprises:

sampling the posterior distribution by means of reparameterization to obtain sampling information;

and inputting the sampling information into a decoder formed by a convolutional network for decoding to generate a conditional distribution corresponding to the posterior distribution.

6. The method of claim 5, wherein inputting the sampling information into a decoder formed by a convolutional network for decoding to generate a conditional distribution corresponding to the posterior distribution comprises:

inputting the sampling information into a convolutional layer in the decoder for convolution processing to generate a third feature, wherein the decoder comprises at least two convolutional layers;

and inputting the third feature into a fully-connected layer in the decoder for mapping processing to generate a conditional distribution corresponding to the posterior distribution.

7. The method of claim 2, further comprising:

acquiring an image sample to be trained;

inputting image features contained in the image samples into an encoder constructed based on a deconvolution network, and generating posterior distribution containing the image sample features;

inputting the posterior distribution into a decoder constructed based on a convolutional network to generate a conditional distribution corresponding to the posterior distribution;

calculating a loss value corresponding to the training based on the conditional distribution, the posterior distribution and a preset loss function;

and updating parameters in the encoder and parameters in the decoder based on the loss values, and generating the encoder for encoding and the decoder for decoding.

8. The method according to claim 1, wherein generating a derivative image associated with the image to be processed based on the image to be processed, and determining a first security factor based on a degree of association between the derivative image and the image to be processed comprises:

processing an image to be processed based on full connection and deconvolution in a generator to generate a derivative image corresponding to the image to be processed;

and comparing the derivative image with the image to be processed through a discriminator to generate a first security factor corresponding to the similarity distribution between the derivative image and the image to be processed.

9. The method of claim 8, wherein performing full-connection- and deconvolution-based processing on the image to be processed in a generator to generate a derivative image corresponding to the image to be processed comprises:

extracting image features from the image to be processed;

performing full-connection processing on the image features in a generator to generate fully-connected image features;

and performing deconvolution processing on the fully-connected image features based on a set convolution kernel and stride to generate a derivative image corresponding to the image to be processed.

10. The method according to claim 8, wherein comparing the derivative image with the image to be processed by a discriminator to generate a first security factor corresponding to a similarity distribution between the derivative image and the image to be processed comprises:

determining a similarity distribution between the derivative image and the image to be processed based on a convolutional neural network in the discriminator;

and generating a first security factor corresponding to the similarity distribution based on the similarity distribution.

11. The method of claim 8, further comprising:

acquiring a noisy image sample and a clear target image sample;

cleaning the noisy image sample and the target image sample to obtain a cleaned noisy image sample and a cleaned target image sample;

training a set generator based on the cleaned noisy image sample and the cleaned target image sample to obtain a trained generator, wherein the generator is used for generating an image generation result corresponding to the noisy image sample;

and training a set discriminator based on the image generation result output by the generator and the cleaned target image sample to obtain a trained discriminator, wherein the discriminator is used for comparing the image generation result with its corresponding target image and outputting a discrimination result indicating whether the image generation result is consistent with the target image sample.

12. The method according to claim 1, wherein the encryption parameters corresponding to the image to be processed comprise at least one of the following: a serial number of the acquisition device that acquired the image to be processed, a signature version used for signing the image to be processed, a timestamp, a counter, and a random character string corresponding to the generated encrypted information of the image to be processed;

the method further comprises the following steps:

and splicing the encryption parameters corresponding to the image to be processed based on a set splicing rule to generate the second security factor.

13. An image encryption apparatus characterized by comprising:

the acquisition unit is used for acquiring an image to be processed that is to be encrypted;

the derivation unit is used for generating a derivative image associated with the image to be processed based on the image to be processed and determining a first security factor based on the degree of association between the derivative image and the image to be processed;

the factor unit is used for generating an image security factor based on the first security factor and a second security factor generated based on the encryption parameter of the image to be processed;

and the encryption unit is used for encrypting the image to be processed based on the image security factor and generating encryption information corresponding to the image to be processed.

14. A computer-readable medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the image encryption method according to any one of claims 1 to 12.

15. An electronic device, comprising:

one or more processors;

storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the image encryption method according to any one of claims 1 to 12.

Technical Field

The present application relates to the field of computer technologies, and in particular, to an image encryption method, an image encryption device, a computer-readable medium, and an electronic device.

Background

In many image processing scenarios, images need to be encrypted to ensure the security and confidentiality of image information. In the related art, an image is encrypted with an existing public encryption method to obtain an encrypted image, and the encrypted image is then transmitted or verified. An encrypted image obtained in this way often has low security and is easily cracked to obtain the information in it, or may be modified. Especially in application scenarios with high requirements on image security, such as face-brushing payment, such an encryption mode cannot ensure the security of the face-brushing image, so the user identity information used for payment verification in the face-brushing image may be leaked, threatening the information security of the user and reducing the security and confidentiality of the image information.

Disclosure of Invention

The embodiments of the application provide an image encryption method, an image encryption apparatus, a computer-readable medium and an electronic device, so that the information of an image to be processed is, at least to a certain extent, more difficult to crack and tamper with during transmission, improving the security and confidentiality of the image to be processed.

Other features and advantages of the present application will be apparent from the following detailed description, or may be learned by practice of the application.

According to an aspect of an embodiment of the present application, there is provided an image encryption method, including: acquiring an image to be processed that is to be encrypted; generating a derivative image associated with the image to be processed based on the image to be processed, and determining a first security factor based on the degree of association between the derivative image and the image to be processed; generating an image security factor based on the first security factor and a second security factor generated based on the encryption parameters of the image to be processed; and encrypting the image to be processed based on the image security factor to generate encrypted information corresponding to the image to be processed.

According to an aspect of an embodiment of the present application, there is provided an image encryption apparatus including: an acquisition unit, used for acquiring an image to be processed that is to be encrypted; a derivation unit, used for generating a derivative image associated with the image to be processed based on the image to be processed and determining a first security factor based on the degree of association between the derivative image and the image to be processed; a factor unit, used for generating an image security factor based on the first security factor and a second security factor generated based on the encryption parameters of the image to be processed; and an encryption unit, used for encrypting the image to be processed based on the image security factor to generate encrypted information corresponding to the image to be processed.

In some embodiments of the present application, based on the foregoing scheme, the derivation unit comprises: a posterior unit, used for encoding the image features of the image to be processed based on a deconvolution network to generate a posterior distribution corresponding to a derivative image of the image to be processed; a conditional unit, used for decoding the posterior distribution based on a convolutional network to generate a conditional distribution corresponding to the posterior distribution, wherein the conditional distribution is used for representing the degree of association between the posterior distribution and the image to be processed; and a first security factor unit, used for determining the first security factor based on the conditional distribution.

In some embodiments of the present application, based on the foregoing scheme, the posterior unit includes: a potential information unit, used for extracting potential information containing image features from the image to be processed; and an encoding unit, used for encoding the potential information based on the deconvolution network to generate the posterior distribution corresponding to the derivative image.

In some embodiments of the present application, based on the foregoing scheme, the encoding unit is configured to: fully connect the potential information in the deconvolution network to generate a first feature; resample the first feature to generate a second feature; and perform deconvolution-based encoding on the second feature to generate a posterior distribution corresponding to the derivative image.

In some embodiments of the present application, based on the foregoing scheme, the conditional unit includes: a sampling unit, used for sampling the posterior distribution to obtain sampling information; and a decoding unit, used for inputting the sampling information into a decoder formed by a convolutional network for decoding to generate the conditional distribution corresponding to the posterior distribution.

In some embodiments of the present application, based on the foregoing scheme, the sampling unit is configured to sample the posterior distribution by means of reparameterization to obtain the sampling information.

In some embodiments of the present application, based on the foregoing scheme, the decoding unit is configured to: input the sampling information into a convolutional layer in the decoder for convolution processing to generate a third feature, wherein the decoder comprises at least two convolutional layers; and input the third feature into a fully-connected layer in the decoder for mapping processing to generate a conditional distribution corresponding to the posterior distribution.

In some embodiments of the present application, based on the foregoing scheme, the derivation unit comprises: a first generation unit, used for performing full-connection- and deconvolution-based processing on the image to be processed in a generator to generate a derivative image corresponding to the image to be processed; and a second generation unit, used for comparing the derivative image with the image to be processed through a discriminator to generate a first security factor corresponding to the similarity distribution between the derivative image and the image to be processed.

In some embodiments of the present application, based on the foregoing scheme, the first generation unit includes: an extraction unit, used for extracting image features from the image to be processed; a third generation unit, used for performing full-connection processing on the image features in a generator to generate fully-connected image features; and a fourth generation unit, used for performing deconvolution processing on the fully-connected image features based on a set convolution kernel and stride to generate a derivative image corresponding to the image to be processed.

In some embodiments of the present application, based on the foregoing scheme, the image encryption apparatus is further configured to: perform standardization processing on the fully-connected image features to generate standardized image features; and map the standardized image features based on a preset leaky rectified linear unit (Leaky ReLU) to obtain the corrected fully-connected image features.

In some embodiments of the application, based on the foregoing scheme, the fourth generation unit is configured to perform deconvolution processing on the fully-connected image features at least twice, based on the convolution kernels and strides corresponding to the respective deconvolution layers, to generate a derivative image corresponding to the image to be processed.

In some embodiments of the present application, based on the foregoing scheme, the extraction unit is configured to: convert the image to be processed into a grayscale image; determine an image data set corresponding to the grayscale image based on the Bernoulli distribution corresponding to the grayscale image; and perform standardization processing on the image data set to generate the image features.

In some embodiments of the present application, based on the foregoing scheme, the second generation unit is configured to: determine a similarity distribution between the derivative image and the image to be processed based on a convolutional neural network in the discriminator; and generate a first security factor corresponding to the similarity distribution based on the similarity distribution.

In some embodiments of the present application, based on the foregoing scheme, the encryption parameters corresponding to the image to be processed include at least one of: a serial number of the acquisition device that acquired the image to be processed, a signature version used for signing the image to be processed, a timestamp, a counter, and a random character string corresponding to the generated encrypted information of the image to be processed; the image encryption apparatus is further configured to: splice the encryption parameters corresponding to the image to be processed based on a set splicing rule to generate the second security factor.

In some embodiments of the present application, the image encryption apparatus is further configured to: acquire an image sample to be trained; input the image features contained in the image sample into an encoder constructed based on a deconvolution network to generate a posterior distribution containing the image sample features; input the posterior distribution into a decoder constructed based on a convolutional network to generate a conditional distribution corresponding to the posterior distribution; calculate a loss value corresponding to the training based on the conditional distribution, the posterior distribution and a preset loss function; and update the parameters in the encoder and the parameters in the decoder based on the loss value, generating the encoder for encoding and the decoder for decoding.

In some embodiments of the present application, the image encryption apparatus is further configured to: convert the image sample into a grayscale image; determine an image data set corresponding to the grayscale image based on the Bernoulli distribution corresponding to the grayscale image; and perform binarization processing on the image data set to generate the image features.

In some embodiments of the present application, based on the foregoing scheme, the encryption unit is configured to: encrypt the image to be processed through an asymmetric encryption algorithm based on the image security factor, to generate encrypted information corresponding to the image to be processed.

In some embodiments of the present application, the image encryption apparatus is further configured to: after generating the encrypted information corresponding to the face-brushing image, send the encrypted information to a server so that the server verifies the transaction information of the user based on the encrypted information; and display, on an interface, deduction information sent by the server, wherein the deduction information is generated by the server based on the verification result of the transaction information.

In some embodiments of the present application, the image encryption apparatus is further configured to: when verification passing information returned by the server is not acquired, continue to acquire the face-brushing image of the user, generate new encryption information based on the face-brushing image, and send it to the server until the verification passing information returned by the server is acquired.

According to an aspect of embodiments of the present application, there is provided a computer-readable medium on which a computer program is stored, the computer program, when executed by a processor, implementing the image encryption method as described in the above embodiments.

According to an aspect of an embodiment of the present application, there is provided an electronic device including: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the image encryption method as described in the above embodiments.

According to an aspect of embodiments herein, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the image encryption method provided in the various alternative implementations described above.

In the technical solutions provided by some embodiments of the present application, a derivative image corresponding to an acquired image to be processed is generated based on that image, a first security factor is determined based on the degree of association between the derivative image and the image to be processed, and an image security factor is jointly generated from the first security factor and a second security factor generated from the encryption parameters of the image to be processed; the image to be processed is then encrypted based on the image security factor to generate encrypted information corresponding to the image to be processed. By generating the security factor from the image features of the image to be processed, the information of the image to be processed is more difficult to crack and tamper with during transmission, improving the security and confidentiality of the image to be processed.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:

FIG. 1 shows a schematic diagram of an exemplary system architecture to which aspects of embodiments of the present application may be applied;

FIG. 2 schematically illustrates a system architecture diagram in an application scenario of face-brushing payments according to an embodiment of the present application;

FIG. 3 schematically shows a flow diagram of an image encryption method according to an embodiment of the present application;

FIG. 4 schematically illustrates a schematic diagram of generating an image security factor according to an embodiment of the present application;

FIG. 5 schematically shows a flow diagram for training an encoder and decoder according to an embodiment of the present application;

FIG. 6 schematically shows a flow diagram for generating image features according to an embodiment of the present application;

FIG. 7 schematically illustrates a block diagram of neural network model training, according to an embodiment of the present application;

FIG. 8 schematically shows a structural diagram of a neural network model according to an embodiment of the present application;

FIG. 9 schematically shows a data processing diagram of a neural network model according to an embodiment of the present application;

FIG. 10 schematically shows a data processing diagram of an encoder according to an embodiment of the present application;

FIG. 11 schematically shows a data processing diagram of a decoder according to an embodiment of the present application;

FIG. 12 schematically shows a flow chart for generating a posterior distribution according to an embodiment of the present application;

FIG. 13 schematically shows a flow diagram of generating a condition distribution according to an embodiment of the present application;

FIG. 14 schematically illustrates a schematic diagram of generating an image security factor according to an embodiment of the present application;

FIG. 15 schematically shows a schematic diagram of a generator according to an embodiment of the present application;

FIG. 16 schematically shows a schematic diagram of an arbiter according to one embodiment of the present application;

FIG. 17 schematically illustrates a schematic diagram of face-brushing data processing according to an embodiment of the present application;

FIG. 18 schematically illustrates a schematic diagram of an interface for a successful face-brushing payment according to one embodiment of the present application;

FIG. 19 schematically illustrates a schematic of an interface for a face-brushing payment failure, according to an embodiment of the present application;

FIG. 20 schematically illustrates a block diagram of an image encryption apparatus according to an embodiment of the present application;

FIG. 21 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.

Detailed Description

Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.

Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the application. One skilled in the relevant art will recognize, however, that the subject matter of the present application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the application.

The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.

The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.

Cloud computing refers to a delivery and use mode of IT infrastructure, namely obtaining required resources in an on-demand, easily-extensible manner through a network; generalized cloud computing refers to a delivery and use mode of services, namely obtaining required services in an on-demand, easily-extensible manner through a network. Such services may be IT and software, internet-related, or other services. Cloud computing is a product of the development and fusion of traditional computing and network technologies such as grid computing, distributed computing, parallel computing, utility computing, network storage, virtualization, and load balancing. With the diversification of the internet, real-time data streams, and connected devices, and the growing demands of search services, social networks, mobile commerce, open collaboration, and the like, cloud computing has developed rapidly. Unlike earlier parallel and distributed computing, the emergence of cloud computing conceptually promotes a revolutionary change in the entire internet model and enterprise management model.

Cloud security is a generic term for the security software, hardware, users, organizations, and secure cloud platforms of cloud-based business model applications. Cloud security integrates emerging technologies and concepts such as parallel processing, grid computing, and unknown-virus behavior judgment: abnormal monitoring of software behavior in the network is achieved through a large number of meshed clients, the latest information on Trojans and malicious programs in the internet is obtained and sent to the server for automatic analysis and processing, and the virus and Trojan solutions are then distributed to each client. The main research directions of cloud security include: 1. cloud computing security, which mainly studies how to guarantee the security of the cloud and the applications on it, including cloud computer system security, secure storage and isolation of user data, user access authentication, information transmission security, network attack protection, compliance auditing, and the like; 2. cloudification of security infrastructure, which mainly studies how to use cloud computing to build and integrate security infrastructure resources and optimize security protection mechanisms, including constructing ultra-large-scale security event and information acquisition and processing platforms through cloud computing technology, realizing the acquisition and correlation analysis of massive information, and improving the network-wide handling and risk control capabilities for security events; 3. cloud security services, which mainly study the various security services provided to users based on cloud computing platforms, such as anti-virus services.

In the embodiments of the application, after the image to be processed is obtained, it is encrypted and the resulting encrypted information is uploaded to a cloud platform, where the encrypted information is verified; corresponding processing, such as payment deduction, is performed after the verification passes. The security and confidentiality of the image to be processed are guaranteed in this way.

Artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, giving machines the functions of perception, reasoning and decision making. Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, and mechatronics. Artificial intelligence software technology mainly includes computer vision technology, speech processing technology, natural language processing technology, and machine learning/deep learning.

Computer Vision (CV) technology is a science that studies how to make a machine "see": it uses cameras and computers instead of human eyes to perform machine vision tasks such as identifying, tracking, and measuring targets, and performs further image processing so that the processed image is more suitable for human observation or for transmission to an instrument for detection. As a scientific discipline, computer vision studies related theories and techniques in an attempt to build artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, simultaneous localization and mapping, and other technologies, as well as common biometric technologies such as face recognition and fingerprint recognition. Machine Learning (ML) is a multi-domain interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It specializes in studying how a computer can simulate or realize human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve its performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and it is applied in all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and demonstration learning.

With the research and progress of artificial intelligence technology, the artificial intelligence technology is developed and applied in a plurality of fields, such as common smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, automatic driving, unmanned aerial vehicles, robots, smart medical care, smart customer service, and the like.

The scheme provided by the embodiment of the application relates to the technologies of computer vision, machine learning and the like of artificial intelligence, and is specifically explained by the following embodiments:

fig. 1 shows a schematic diagram of an exemplary system architecture to which the technical solution of the embodiments of the present application can be applied.

As shown in fig. 1, the system architecture may include a terminal device, a network 104, and a server 105.

In an embodiment of the present application, the terminal device in this embodiment may be one or more of the smart phone 101, the tablet computer 102, and the portable computer 103 shown in fig. 1, and may of course be a desktop computer, and the like. In addition, a face acquisition device such as the camera 106 may be used.

In one embodiment of the present application, the network 104 is used to provide the medium of a communication link between the terminal device and the server 105. Network 104 may include various connection types, such as wired communication links, wireless communication links, and so forth.

It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, server 105 may be a server cluster comprised of multiple servers, or the like.

A user may use a terminal device to interact with the server 105 over the network 104 to receive or send messages and the like. The server 105 may be a server that provides various services. For example, a user uses the terminal device 103 (or the terminal device 101 or 102) to generate a derivative image corresponding to an acquired image to be processed, determines a first security factor based on the degree of association between the derivative image and the image to be processed, and jointly generates an image security factor from the first security factor and a second security factor generated from the encryption parameters of the image to be processed; the image to be processed is encrypted based on the image security factor, and the resulting encrypted information corresponding to the image to be processed is sent to the server 105.

It should be noted that the image encryption method provided in the embodiments of the present application is generally executed by a terminal device, and accordingly, the image encryption apparatus is generally disposed in the terminal device. The image encryption apparatus in the embodiments of the application may be a camera device for scanning human faces.

As shown in fig. 2, during operation the scanning terminal 201 captures an image to be processed and generates a corresponding derivative image; a first security factor is determined based on the degree of association between the derivative image and the image to be processed, and an image security factor is jointly generated from the first security factor and a second security factor generated from the encryption parameters of the image to be processed. The image to be processed is encrypted based on the image security factor, and the resulting encrypted information corresponding to the image to be processed is sent to the server 202.

According to this scheme, a corresponding derivative image is generated based on the acquired image to be processed, a first security factor is determined based on the degree of association between the derivative image and the image to be processed, and an image security factor is jointly generated from the first security factor and a second security factor generated from the encryption parameters of the image to be processed; the image to be processed is encrypted based on the image security factor to generate encrypted information corresponding to the image to be processed. By generating the security factor from the image features of the image to be processed, the information of the image to be processed is more difficult to crack and tamper with during transmission, improving the security and confidentiality of the image to be processed.

The implementation details of the technical solution of the embodiment of the present application are set forth in detail below:

fig. 3 shows a flowchart of an image encryption method according to an embodiment of the present application, which may be performed by a terminal device, such as the terminal devices shown in fig. 1. Referring to fig. 3, the image encryption method at least includes steps S310 to S340, which are described in detail as follows:

step S310, acquiring the to-be-processed image to be encrypted.

In an embodiment of the present application, the image to be processed may be an image that needs to be encrypted, such as a face image, a payment code, an authentication code image, and the like. The mode of acquiring the to-be-processed image in this embodiment may be real-time acquisition, such as shooting, or downloading from a network, a database, or a cloud.

Step S320, generating a derivative image associated with the image to be processed based on the image to be processed, and determining a first security factor based on the degree of association between the derivative image and the image to be processed.

In an embodiment of the present application, a derivative image associated with the image to be processed is generated based on the image to be processed, where the derivative image in this embodiment represents an image that includes the image features of the image to be processed and is similar to the image to be processed. After the derivative image is determined, a first security factor is determined based on the degree of association between the derivative image and the image to be processed.

Step S330, generating an image security factor based on the first security factor corresponding to the conditional distribution and the second security factor generated based on the encryption parameters of the image to be processed.

As shown in fig. 4, in the present embodiment, after the posterior distribution and the corresponding conditional distribution are generated, the first security factor is generated based on the conditional distribution, and a second security factor is generated based on the encryption parameters of the image to be processed. The first security factor and the second security factor are then combined to generate the image security factor.

Specifically, the encryption parameters corresponding to the image to be processed in this embodiment may be the size of the image to be processed, an encryption timestamp, an identifier of the device that acquired the image to be processed, and the like; the second security factor may be generated by splicing these parameters.

In an embodiment of the present application, the encryption parameters corresponding to the image to be processed include at least one of the following: a serial number of the acquisition device that acquired the image to be processed, a signature version used for signing the image to be processed, a timestamp, a counter, and a random character string corresponding to the generated encrypted information of the image to be processed. For example: magic_num: the original signature string format differs between applications and only needs to be agreed upon with the application party of the service backend; device_info: the serial number of the acquisition device; sign_version: the signature version; timestamp: a timestamp; counter: a counter; random: a random string; payload: the payload; and the like.

In an embodiment of the present application, the encryption parameters corresponding to the image to be processed are spliced based on a set splicing rule to generate the second security factor, that is:

{magic_num}{device_info}{sign_version}{timestamp}{counter}{random}
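This splicing rule can be illustrated with a minimal Python sketch. The parameter values, the way the timestamp and random string are generated, and the hash-based combination with the first security factor are illustrative assumptions; the embodiment only fixes the field order and states that the two factors are jointly used.

```python
import hashlib
import secrets
import time

def build_second_security_factor(magic_num: str, device_info: str,
                                 sign_version: str, counter: int) -> str:
    """Splice the encryption parameters in the agreed field order."""
    timestamp = str(int(time.time()))   # encryption timestamp
    random_str = secrets.token_hex(8)   # random character string
    return f"{magic_num}{device_info}{sign_version}{timestamp}{counter}{random_str}"

def combine_security_factors(first_factor: str, second_factor: str) -> bytes:
    """Jointly generate the image security factor from both factors.

    Hashing the concatenation is our assumption; the embodiment only
    states that the two factors are combined.
    """
    return hashlib.sha256((first_factor + second_factor).encode("utf-8")).digest()

# Usage with illustrative values:
factor = build_second_security_factor("MG01", "SN-0001", "v1", counter=42)
image_security_factor = combine_security_factors("0.9713", factor)
```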

In this embodiment, the first security factor is generated based on attributes of the image to be processed, and the second security factor is generated based on the encryption parameters corresponding to the image to be processed; the image security factor is generated based on the first security factor and the second security factor, and the image to be processed is encrypted based on the image security factor. This comprehensive encryption mode improves the security of the image to be processed.

Step S340, encrypting the image to be processed based on the image security factor, and generating the encrypted information corresponding to the image to be processed.

In an embodiment of the present application, after the image security factor is generated, the image to be processed is encrypted based on the image security factor to generate the encrypted information corresponding to the image to be processed. Specifically, the encryption method used in this embodiment may be symmetric encryption, asymmetric encryption, or the like.

In the embodiment of the application, the image to be processed is encrypted with the image security factor in an asymmetric manner to generate the encrypted information corresponding to the image to be processed, so that the encrypted information finally used for transmission has higher security, preventing the image information in the image to be processed from being leaked or maliciously tampered with during transmission, processing, and verification.
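As a concrete illustration of such an asymmetric mode, the sketch below derives a content key from the image security factor, encrypts the image bytes with AES-GCM, and wraps the key with the server's RSA public key. The hybrid RSA-OAEP/AES-GCM construction and all names here are our assumptions for illustration; the embodiment only specifies that an asymmetric algorithm is used.

```python
import hashlib
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_image(image_bytes: bytes, image_security_factor: bytes,
                  server_public_key) -> dict:
    # Derive a 256-bit content key from the image security factor
    # (assumption: the factor seeds the symmetric content key).
    aes_key = hashlib.sha256(image_security_factor).digest()
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, image_bytes, None)

    # Wrap the content key with the server's public key; RSA-OAEP stands
    # in for the unspecified asymmetric algorithm of the embodiment.
    wrapped_key = server_public_key.encrypt(
        aes_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    return {"nonce": nonce, "ciphertext": ciphertext, "wrapped_key": wrapped_key}

# Usage with a throwaway key pair and a dummy payload:
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
encrypted_info = encrypt_image(b"raw image bytes", b"factor",
                               private_key.public_key())
```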

In an embodiment of the present application, the step S320 includes steps S350 to S370:

Step S350, encoding the image features of the image to be processed based on the deconvolution network to generate a posterior distribution corresponding to the derivative image of the image to be processed.

In an embodiment of the present application, an encoder formed by a deconvolution network is preset, and is configured to encode the image features of the image to be processed based on the deconvolution network in the encoder and generate a posterior distribution corresponding to a derivative image of the image to be processed.

Specifically, the derivative image in this embodiment represents an image generated by the encoder based on image features, and the image features in this embodiment represent potential information of the image to be processed, such as the spectral distribution and pixel distribution of each pixel in the image. The posterior distribution in this embodiment represents the distribution of the pixel features corresponding to the image features in the derivative image compared with the image features in the image to be processed, and the pixel features in the derivative image can be represented based on the posterior distribution.

Step S360, decoding the posterior distribution based on the convolutional network to generate a conditional distribution corresponding to the posterior distribution, wherein the conditional distribution is used for expressing the degree of association between the posterior distribution and the image to be processed.

In an embodiment of the present application, a decoder formed by a convolutional network is preset, and is configured to decode the corresponding features in the posterior distribution based on the convolutional network in the decoder and generate a conditional distribution corresponding to the posterior distribution, so that the conditional distribution represents the degree of association between the derivative image corresponding to the posterior distribution and the image to be processed.

Specifically, the conditional distribution in the embodiment of the present application corresponds to the degree of association between the derivative image and the image to be processed, where the degree of association may be obtained from the similarity between the two images, specifically the similarity of each corresponding pixel between the derivative image and the image to be processed.

Step S370, determining the first security factor based on the conditional distribution.

After the conditional distribution is generated, it is quantized to generate the first security factor.

In an embodiment of the present application, the image encryption method further includes steps S510 to S550, which are described in detail as follows:

step S510, an image sample to be trained is obtained.

In one embodiment of the present application, the image samples to be trained include noisy images and clear images. Illustratively, when the image samples are face data, a certain amount of valid face data (clear images with good quality) is collected as target sample data, and the same amount of poor-quality face data is collected as noise sample data.

In an embodiment of the present application, the process of acquiring the image sample to be trained in step S510 specifically includes steps S5101 to S5103, which are described in detail as follows:

step S5101, converting the image sample into a gray image;

step S5102, determining an image data set corresponding to the gray level image based on Bernoulli distribution corresponding to the gray level image;

in step S5103, binarization processing is performed on the image data set to generate image features.

In one embodiment of the present application, meaningful features and attributes are acquired, processed, and extracted from the image to be processed using signal and data processing techniques. Specifically, the image to be processed is converted into a grayscale image, each pixel is then modeled with a Bernoulli distribution, and binarization processing is performed on the data set to obtain the image features corresponding to the image to be processed.
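A minimal sketch of this feature-extraction step, under the assumption that each normalized grayscale pixel is treated as the parameter of a Bernoulli distribution that is then sampled to binarize the image; this sampling-based reading of the embodiment is ours, not a step the text spells out.

```python
import numpy as np
from PIL import Image

def extract_binary_features(path: str) -> np.ndarray:
    """Grayscale conversion, Bernoulli modeling, binarization (a sketch)."""
    # Convert the image sample into a grayscale image normalized to [0, 1].
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0
    # Model each pixel with a Bernoulli distribution whose parameter is the
    # normalized intensity, and sample it to binarize the data set.
    return np.random.binomial(1, gray).astype(np.float32)
```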

In addition, in the embodiment of the application, after the noisy image samples and the target image samples are acquired, they are cleaned so that invalid images in the samples are removed, where an invalid image may be an occluded image without a face region; the cleaned noisy image samples and cleaned target image samples are thus obtained.

As shown in fig. 7, in addition to the data collection and feature engineering described above, the process of training the encoder and decoder in this embodiment also includes model selection, model training, and model evaluation, described in detail as follows.

As shown in fig. 8 and 9, the encoder constructed by the deconvolution network and the decoder constructed by the convolution network are included in the embodiment of the present application, and the posterior distribution including the features of the image sample is generated by inputting the features included in the image sample into the encoder constructed by the deconvolution network.

Step S520, inputting the image features included in the image sample into an encoder constructed based on the deconvolution network, and generating a posterior distribution including the image sample features.

Illustratively, the deconvolution network in the embodiments of the present application includes at least two deconvolution layers and a compression layer. Optionally, in the encoder in this embodiment, the first layer is an input layer; the second layer is a fully-connected layer whose activation function is relu; the third layer is a resampling layer that reconstructs the input features from the second layer; the fourth layer is a deconvolution layer with 64 convolution kernels of size 3x3, a stride of 2, SAME padding, and relu activation; the fifth layer is a deconvolution layer with 32 convolution kernels of size 3x3, a stride of 2, SAME padding, and relu activation; the sixth layer is a deconvolution layer with 16 convolution kernels of size 3x3, a stride of 2, SAME padding, and relu activation; the seventh layer is a deconvolution layer with 1 convolution kernel of size 3x3, a stride of 1, SAME padding, and a linear activation function f(x).
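The layer list above maps naturally onto a Keras model. In this sketch, the 28x28 grayscale input and 16-dimensional latent space are illustrative assumptions, and the final dense layer plays the role of the classification layer that emits the mean and log-variance of the posterior q(z|x).

```python
import tensorflow as tf
from tensorflow.keras import layers

latent_dim = 16  # illustrative latent size

encoder = tf.keras.Sequential([
    layers.Input(shape=(28, 28, 1)),              # first layer: input
    layers.Flatten(),
    layers.Dense(7 * 7 * 32, activation="relu"),  # second layer: fully connected, relu
    layers.Reshape((7, 7, 32)),                   # third layer: resampling/reshape
    layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(1, 3, strides=1, padding="same"),  # linear activation
    layers.Flatten(),
    layers.Dense(2 * latent_dim),  # mean and log-variance of q(z|x)
])
```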

As shown in FIG. 10, the image features are fed into the encoder, the deconvolution layers output an intermediate encoding result, convolution parameters are then applied to perform at least two convolution operations, and the classification layer outputs the posterior distribution corresponding to the image sample features. The encoder in this embodiment takes the latent code as input and outputs the posterior probability q(z|x) used to observe the conditional distribution p(x|z), where x and z represent the observed values of an image and the latent (potential) information of the image, respectively.

Step S530, inputting the posterior distribution into a decoder constructed based on the convolutional network, and generating a conditional distribution corresponding to the posterior distribution.

Illustratively, the decoder formed by the convolutional network in the embodiment of the present application includes three convolutional layers and a fully-connected network layer. Specifically, the first layer is an input layer; the second layer is a convolutional layer with 16 convolution kernels of size 3x3, a stride of 2, SAME padding, and relu activation; the third layer is a convolutional layer with 32 convolution kernels of size 3x3, a stride of 2, SAME padding, and relu activation; the fourth layer is a convolutional layer with 64 convolution kernels of size 3x3, a stride of 2, SAME padding, and relu activation; the fifth layer is a fully-connected layer.
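A matching Keras sketch of the decoder follows. The dense-and-reshape step that lifts the sampled latent vector to a spatial map is our assumption, since the text does not specify how the sampled information enters the first convolution; the output logits parameterize p(x|z).

```python
import tensorflow as tf
from tensorflow.keras import layers

latent_dim = 16  # must match the encoder sketch above

decoder = tf.keras.Sequential([
    layers.Input(shape=(latent_dim,)),           # first layer: input (sampled z)
    layers.Dense(7 * 7 * 1, activation="relu"),  # lift z to a spatial map (assumption)
    layers.Reshape((7, 7, 1)),
    layers.Conv2D(16, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2D(32, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2D(64, 3, strides=2, padding="same", activation="relu"),
    layers.Flatten(),
    layers.Dense(28 * 28),  # fifth layer: fully connected, logits of p(x|z)
])
```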

As shown in fig. 11, the sampled posterior information is fed into the decoder, the convolutional layers output intermediate convolution results, and the fully-connected layer outputs the conditional distribution. The decoder in this embodiment takes the posterior distribution q(z|x) obtained from the observed values as input and outputs the conditional distribution p(x|z) corresponding to the posterior distribution.

Step S540, calculating a loss value corresponding to the training based on the conditional distribution, the posterior distribution and a preset loss function.

In an embodiment of the present application, after the conditional distribution and the posterior distribution are generated, the loss value corresponding to the training is calculated from them with a preset loss function. In this embodiment, the model is trained by maximizing the marginal log-likelihood based on the set loss function ELBO (evidence lower bound); the specific loss function is:

ELBO = E_{q(z|x)}[ log( p(x, z) / q(z|x) ) ]

where x and z represent the observed values of an image and the latent information of the image, respectively, p(x, z) represents the joint distribution of the image and the latent information (the product of the conditional distribution p(x|z) and the prior p(z)), and q(z|x) represents the posterior distribution.

In this embodiment, the ratio between the conditional distribution and the posterior distribution is calculated first, and the mean of the logarithm of this ratio is then taken as the loss function during model training. The smaller the value obtained by the loss function (i.e., the negated ELBO), the smaller the difference between the conditional distribution and the posterior distribution, that is, the more similar the generated derivative image is to the image to be processed, thereby improving the accuracy of the model.
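As a hedged illustration of how such a loss could be evaluated, the sketch below computes a single-sample Monte Carlo estimate of the negative ELBO, assuming a diagonal-Gaussian posterior, a standard-normal prior, and a Bernoulli likelihood for the reconstruction; none of these parameterizations is mandated by the embodiment.

```python
import numpy as np
import tensorflow as tf

def log_normal_pdf(sample, mean, logvar):
    # Log-density of a diagonal Gaussian, summed over latent dimensions.
    log2pi = tf.math.log(2.0 * np.pi)
    return tf.reduce_sum(
        -0.5 * ((sample - mean) ** 2 * tf.exp(-logvar) + logvar + log2pi), axis=1)

def negative_elbo(x, x_logit, z, mean, logvar):
    # log p(x|z): Bernoulli likelihood of the reconstruction (x is a 4-D image batch).
    cross_ent = tf.nn.sigmoid_cross_entropy_with_logits(logits=x_logit, labels=x)
    logpx_z = -tf.reduce_sum(cross_ent, axis=[1, 2, 3])
    logpz = log_normal_pdf(z, 0.0, 0.0)          # standard-normal prior
    logqz_x = log_normal_pdf(z, mean, logvar)    # posterior q(z|x)
    # ELBO = E_q[ log p(x, z) - log q(z|x) ]; the negation is minimized.
    return -tf.reduce_mean(logpx_z + logpz - logqz_x)
```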

In step S550, the parameters in the encoder and the parameters in the decoder are updated based on the loss values, and an encoder for encoding and a decoder for decoding are generated.

In this embodiment, iterative training is performed on the sample data based on the loss function. During each iteration, the image is passed to the encoder to obtain a set of mean and log-variance parameters of the approximate posterior q(z|x); a sample is then drawn from q(z|x) by applying the reparameterization trick; finally, the reparameterized sample is passed to the decoder to obtain the corresponding conditional distribution p(x|z). The parameters in the encoder and the decoder are then updated based on the loss value obtained from the posterior probability and the conditional distribution, generating the encoder for encoding and the decoder for decoding.
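A sketch of one such training iteration, reusing negative_elbo from the previous sketch; the convention of splitting the flattened encoder output in half into mean and log-variance is an assumption, and the encoder and decoder shapes must be wired consistently for this to run.

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(1e-4)
flatten = tf.keras.layers.Flatten()

def reparameterize(mean, logvar):
    # z = mu + sigma * eps with eps ~ N(0, I), keeping the sample differentiable.
    eps = tf.random.normal(shape=tf.shape(mean))
    return mean + tf.exp(0.5 * logvar) * eps

@tf.function
def train_step(encoder, decoder, x):
    with tf.GradientTape() as tape:
        # Assumption: the encoder output, flattened, concatenates [mean, logvar].
        stats = flatten(encoder(x))
        mean, logvar = tf.split(stats, num_or_size_splits=2, axis=1)
        z = reparameterize(mean, logvar)
        x_logit = decoder(z)
        loss = negative_elbo(x, x_logit, z, mean, logvar)  # from the sketch above
    variables = encoder.trainable_variables + decoder.trainable_variables
    grads = tape.gradient(loss, variables)
    optimizer.apply_gradients(zip(grads, variables))
    return loss
```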

In an embodiment of the present application, encoding image features of an image to be processed based on a deconvolution network, and generating a posterior distribution corresponding to a derivative image of the image to be processed includes the following steps S1210 to S1220, which are described in detail as follows:

in step S1210, potential information including image features is extracted from the image to be processed.

The potential information of the image to be processed in this embodiment includes a spectrogram, a feature map, and the like of the image; spectral analysis may be performed on the image to be processed to obtain the spectral information of the image, or feature extraction may be performed on the image to be processed to obtain the feature information of the image.

Step S1220, encoding the potential information based on the deconvolution network, and generating a posterior distribution corresponding to the derivative image of the image to be processed.

In an embodiment of the application, after the potential information corresponding to the image features is obtained, the potential information is encoded based on a pre-trained encoder composed of a deconvolution network, and a posterior distribution corresponding to a derivative image of the image to be processed is generated.

In an embodiment of the present application, as shown in fig. 12, the process of encoding the potential information based on the deconvolution network in step S1220 and generating the posterior distribution corresponding to the derived image of the image to be processed includes the following steps S1221 to S1223, which are described in detail as follows:

step S1221, in the deconvolution network, fully connecting the potential information to generate a first characteristic;

step S1222, resampling the first feature to generate a second feature;

and S1223, performing deconvolution-based coding on the second features to generate posterior distribution corresponding to the derivative image of the image to be processed.

In an embodiment of the present application, based on the model structure of the encoder, the potential information is first fully connected to generate the first feature, and in the third (resampling) layer of the encoder, the input features from the second layer are reconstructed to generate the second feature. The fourth layer then applies 64 convolution kernels of size 3x3 with a stride of 2, SAME padding, and a relu activation function; the fifth layer is a deconvolution layer with 32 convolution kernels of size 3x3, a stride of 2, SAME padding, and a relu activation function; the sixth layer is a deconvolution layer with 16 convolution kernels of size 3x3, a stride of 2, SAME padding, and a relu activation function; the seventh layer is a deconvolution layer with 1 convolution kernel of size 3x3, a stride of 1, SAME padding, and a linear activation function f(x), generating the posterior distribution corresponding to the derivative image of the image to be processed.

In an embodiment of the present application, the process of decoding the posterior distribution based on the convolutional network in step S320 to generate the conditional distribution corresponding to the posterior distribution includes steps S231 to S232, which are described in detail as follows:

in step S231, the posterior distribution is sampled to obtain sampling information.

In this embodiment, the posterior distribution may be sampled in a reparameterization manner to obtain sampling information. The sampling information in this embodiment is used to represent the image feature distribution in the posterior distribution.
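A small numpy illustration of reparameterized sampling, assuming the posterior is parameterized by a per-dimension mean and log-variance:

```python
import numpy as np

def sample_posterior(mean, logvar, rng=None):
    # Reparameterized draw from a diagonal Gaussian posterior: z = mu + sigma * eps,
    # eps ~ N(0, I). The randomness lives in eps alone, so the sample remains a
    # deterministic function of mean and logvar.
    rng = rng or np.random.default_rng()
    eps = rng.standard_normal(np.shape(mean))
    return np.asarray(mean) + np.exp(0.5 * np.asarray(logvar)) * eps
```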

In an embodiment of the present application, as shown in fig. 13, the process in step S232 of inputting the sampling information into a decoder formed by a convolutional network for decoding and generating the conditional distribution corresponding to the posterior distribution includes steps S2321 to S2322, described in detail as follows:

in step S2321, the sampling information is input into the convolutional layers of the decoder for convolution processing, generating a third feature; wherein the decoder comprises at least two convolutional layers;

in step S2322, the third feature is input into the fully-connected layer of the decoder for mapping processing, generating the conditional distribution corresponding to the posterior distribution.

In an embodiment of the present application, based on the convolutional layers and the fully-connected layer included in the decoder, the sampling information is first input into the convolutional layers of the decoder for convolution processing to generate the third feature; the decoder comprises at least two convolutional layers. Illustratively, the second layer in the decoder uses 16 convolution kernels of size 3x3 with a stride of 2, SAME padding, and a relu activation function; the third layer uses 32 convolution kernels of size 3x3 with a stride of 2, SAME padding, and a relu activation function; the fourth layer is a convolutional layer with 64 convolution kernels of size 3x3, a stride of 2, SAME padding, and a relu activation function. Finally, the third feature is input into the fifth, fully-connected layer of the decoder for mapping processing, generating the conditional distribution corresponding to the posterior distribution.

In addition, the process of generating a derivative image associated with the image to be processed based on the image to be processed in step S320 and determining a first security factor based on the degree of association between the derivative image and the image to be processed specifically includes steps S380 to S390, which are described in detail as follows:

in step S380, the generator performs processing based on full connection and deconvolution on the image to be processed, generating a derivative image corresponding to the image to be processed.

In an embodiment of the present application, a generator composed of a fully-connected layer and deconvolution layers is preset, and is used to perform processing based on full connection and deconvolution on the image to be processed to generate the derivative image corresponding to the image to be processed. In this embodiment, full-connection processing is performed on the image features of the image to be processed, deconvolution processing is performed on the result of the full connection, and the derivative image corresponding to the image to be processed is finally generated.

In an embodiment of the present application, the image to be processed in this embodiment may be a general environment image, a group photo, a self-portrait, or the like, and may also be a face image, for example, a face brushing image during face brushing payment.

In one embodiment of the present application, the number of deconvolution operations may be one, two, or more. Through multiple deconvolution operations, a more accurate derivative image can be obtained.

In step S390, the derivative image and the image to be processed are compared by the discriminator to generate a first security factor corresponding to the similarity distribution between the derivative image and the image to be processed.

As shown in fig. 14, in the present embodiment, after the derivative image is generated by the generator, the derivative image and the image to be processed are compared by the discriminator, that is, the similarity between each pixel of the derivative image and the corresponding pixel of the image to be processed is calculated, and the first security factor is then generated based on the similarity distribution formed by the similarities of all the pixels.

In an embodiment of the present application, the similarity between the derivative image and the image to be processed may be calculated by computing cosine values between the pixel values of corresponding pixels, finally generating a similarity distribution. When the first security factor is generated from the similarity distribution, it may be formed row by row from the distribution, and a rank, a norm, an eigenvalue, or an eigenvector corresponding to the similarity distribution may be used as the first security factor.
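One possible, non-authoritative realization of this step is sketched below: the matrix of cosine similarities between row pairs of the two images serves as the similarity distribution, from which a rank, norm, or eigenvalues can be taken. The row-pair pairing and the choice of statistic are assumptions made for illustration.

```python
import numpy as np

def first_security_factor(derived, original, stat="norm"):
    # Row-wise cosine similarities between the two images form the similarity
    # distribution; a matrix statistic of that distribution becomes the factor.
    A = derived.reshape(derived.shape[0], -1).astype(np.float64)
    B = original.reshape(original.shape[0], -1).astype(np.float64)
    A /= np.linalg.norm(A, axis=1, keepdims=True) + 1e-12
    B /= np.linalg.norm(B, axis=1, keepdims=True) + 1e-12
    S = A @ B.T                        # cosine similarity for every row pair
    if stat == "norm":
        return np.linalg.norm(S)       # Frobenius norm of the distribution
    if stat == "rank":
        return np.linalg.matrix_rank(S)
    if stat == "eig":
        return np.linalg.eigvals(S)    # eigenvalues (S is square for same-size images)
    raise ValueError(f"unknown statistic: {stat}")
```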

According to the image encryption method and device of the present application, the similarity distribution generated by the generator and the discriminator is used as the first security factor of the image to be processed, so that different first security factors can be extracted for different images to be processed; therefore, when the image to be processed is encrypted based on such security factors, the security and the privacy of the image to be processed can be improved.

In one embodiment of the present application, the image processing method based on the generative adversarial network further includes the following steps S610 to S640, which are described in detail as follows:

step S610, a noisy image sample with noise and a clear target image sample are obtained.

In one embodiment of the present application, sample acquisition includes the acquisition of noisy images and the acquisition of clear images. Illustratively, when the image samples are face data, a certain amount of valid face data, that is, clear images of good quality, is collected as target sample data; meanwhile, an equal amount of poor-quality face data is collected as noise sample data.

And step S620, cleaning the noisy image sample and the target image sample to obtain the cleaned noisy image sample and the cleaned target image sample.

In an embodiment of the present application, after the noisy image samples and the target image samples are acquired, they are cleaned to remove invalid images from the samples, where an invalid image may be, for example, an occluded image without a face region; the cleaned noisy image samples and target image samples are thereby obtained.

Step S630, training the set generator based on the cleaned noisy image sample and the target image sample to obtain the generator after training, where the generator is used to generate an image generation result corresponding to the noisy image sample.

In an embodiment of the present application, a generator and a discriminator are preset. After the cleaned noisy image samples and target image samples are obtained, the set generator is trained based on these samples, so that an image generation result corresponding to the noisy image samples is produced by the trained generator.

In this embodiment, the process of training the generator and the discriminator includes data collection, feature engineering, model selection, model training, and model evaluation.

In step S630, training the set generator based on the cleaned noisy image sample and the target image sample to obtain the generator after training, specifically including steps S631 to S633, which are described in detail as follows:

step S631, inputting the cleaned noisy image sample into a generator formed by full connection and deconvolution operation, and outputting an image generation result;

step S632, determining a loss value of the current training based on the cross entropy loss function and the image generation result;

step S633, updating the parameters in the generator based on the preset optimizer and the loss value, and generating the generator after training.

As shown in fig. 15, the generator in the present embodiment is composed of a fully connected layer and a deconvolution layer.

The generator in the embodiment of the present application may be constructed from a fully-connected layer followed by a preset number of convolution transpose layers, that is, deconvolution layers. Illustratively, the generator in the present embodiment may include the following structure: the first layer is a fully-connected layer with a relu activation function; the second layer is a batch normalization layer; the third layer uses a leaky rectified linear unit (leaky ReLU) as the activation function; the fourth layer is a deconvolution layer with 128 convolution kernels of size 5x5, a stride of 1, SAME padding, a relu activation function, and no bias; the fifth layer is a deconvolution layer with 64 convolution kernels of size 5x5, a stride of 2, SAME padding, a relu activation function, and no bias; the sixth layer is a deconvolution layer with 1 convolution kernel of size 5x5, a stride of 2, SAME padding, a tanh (hyperbolic tangent) activation function, and no bias.
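A minimal sketch of a generator with this layer layout, assuming TensorFlow/Keras; the noise dimension and the reshaping step inserted before the deconvolutions are illustrative assumptions.

```python
import tensorflow as tf

def build_generator(noise_dim=100):
    return tf.keras.Sequential([
        tf.keras.Input(shape=(noise_dim,)),
        tf.keras.layers.Dense(7 * 7 * 256, activation='relu'),  # layer 1: fully connected, relu
        tf.keras.layers.BatchNormalization(),                   # layer 2: batch normalization
        tf.keras.layers.LeakyReLU(),                            # layer 3: leaky ReLU
        tf.keras.layers.Reshape((7, 7, 256)),                   # assumed shaping step before the deconvolutions
        tf.keras.layers.Conv2DTranspose(128, 5, strides=1, padding='same', activation='relu', use_bias=False),  # layer 4
        tf.keras.layers.Conv2DTranspose(64, 5, strides=2, padding='same', activation='relu', use_bias=False),   # layer 5
        tf.keras.layers.Conv2DTranspose(1, 5, strides=2, padding='same', activation='tanh', use_bias=False),    # layer 6
    ])
```

The tanh output keeps generated pixels in [-1, 1], which matches the standardization of the image features into that interval described later in this document.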

Based on the above-mentioned generator structure, in the embodiment of the present application, the noisy image sample is input into the generator and the image generation result is output. A loss value of the current training is then determined based on a cross-entropy loss function and the image generation result, and the parameters in the generator are updated based on a preset optimizer and the loss value, generating the trained generator, so that a derivative image corresponding to the image to be processed can subsequently be generated by the generator.

And step S640, training a set discriminator based on the image generation result output by the generator and the cleaned target image sample to obtain the trained discriminator, wherein the discriminator is used to compare the image generation result with the corresponding target image sample and output a discrimination result indicating whether the two are consistent.

In the embodiment of the application, a discriminator is preset and used to compare the image generation result output by the generator with the corresponding target image sample, so as to judge the accuracy of the image generation result. The training process of the discriminator in this embodiment is as follows:

in step S640, training the set discriminator based on the image generation result output from the generator and the target image sample after cleaning to obtain the discriminator after training, the method includes steps S541 to S543 as follows:

in step S541, inputting the image generation result output by the generator and the cleaned target image sample into a discriminator constructed based on a convolutional neural network, and outputting a discrimination result;

in step S542, determining a loss value of the current training based on a cross-entropy loss function and the discrimination result;

in step S543, parameters in the discriminator are updated based on the preset optimizer and the loss value, and the discriminator after training is generated.

As shown in fig. 16, the discriminator in this embodiment is a convolutional network based picture classifier, which may include a preset number of convolutional layers and a preset number of fully-connected network layers.

For example, in the discriminator of the present application, the first layer is an input layer; the second layer is a convolutional layer with 64 convolution kernels of size 5x5, a stride of 2, SAME padding, and a relu activation function; the third layer uses a leaky rectified linear unit (leaky ReLU) as the activation function; the fourth layer is a dropout layer to prevent overfitting; the fifth layer is a convolutional layer with 128 convolution kernels of size 5x5, a stride of 2, SAME padding, and a relu activation function; the sixth layer applies an activation function; the seventh layer is a fully-connected layer.
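A matching sketch of the discriminator, under the same TensorFlow/Keras assumption; the 28x28 input and the dropout rate are placeholders.

```python
import tensorflow as tf

def build_discriminator():
    return tf.keras.Sequential([
        tf.keras.Input(shape=(28, 28, 1)),                                              # layer 1: input
        tf.keras.layers.Conv2D(64, 5, strides=2, padding='same', activation='relu'),    # layer 2
        tf.keras.layers.LeakyReLU(),                                                    # layer 3: leaky ReLU
        tf.keras.layers.Dropout(0.3),                                                   # layer 4: dropout
        tf.keras.layers.Conv2D(128, 5, strides=2, padding='same', activation='relu'),   # layer 5
        tf.keras.layers.LeakyReLU(),                                                    # layer 6: activation
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1),                                                       # layer 7: fully connected score
    ])
```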

The discriminator obtained by training in the embodiment of the application is used to judge the authenticity of a picture: the model is trained to output a positive value for a real picture and a negative value for a forged picture. Specifically, during training, the loss value of the current round is determined based on a cross-entropy loss function and the discrimination result, and the parameters in the discriminator are updated based on a preset optimizer and the loss value, generating the trained discriminator.

In an embodiment of the present application, the process in step S380 of performing full-connection- and deconvolution-based processing on the image to be processed in the generator to generate a derivative image corresponding to the image to be processed includes steps S381 to S383, which are described in detail as follows:

step S381, extracting image features from the image to be processed;

in an embodiment of the present application, when processing an image to be processed, image features in the image to be processed are processed, and therefore, in this embodiment, image features are extracted from the image to be processed first. The image in this embodiment may be a pixel feature, a vector matrix feature, or the like of the image.

The process of extracting the image features from the image to be processed in step S381 specifically includes steps S3811 to S3813, which are described in detail as follows:

step S3811, converting the image to be processed into a gray image;

step S3812, determining an image data set corresponding to the gray level image based on the Bernoulli distribution corresponding to the gray level image;

step S3813 is to perform normalization processing on the image data set to generate an image feature.

In one embodiment of the present application, meaningful features and attributes are acquired, processed, and extracted from the image to be processed using signal data processing techniques. Specifically, the image to be processed is converted into a gray-scale image, each pixel is then modeled using a Bernoulli distribution, and the data set is standardized into the interval [-1, 1], thereby obtaining the image features corresponding to the image to be processed.
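A hedged numpy sketch of this preprocessing, assuming 8-bit pixel values and sampling from the per-pixel Bernoulli distribution as the binarization step:

```python
import numpy as np

def extract_image_features(img_rgb, rng=None):
    # Assumed realization of steps S3811-S3813: grayscale conversion, Bernoulli
    # modeling of each pixel, then standardization into [-1, 1].
    rng = rng or np.random.default_rng()
    gray = np.asarray(img_rgb, dtype=np.float64).mean(axis=-1)   # S3811: to grayscale
    p = gray / 255.0                                             # Bernoulli parameter per pixel
    sample = (rng.random(p.shape) < p).astype(np.float32)        # S3812: Bernoulli draw
    return sample * 2.0 - 1.0                                    # S3813: scale into [-1, 1]
```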

In step S382, the generator performs full-connection processing on the image features to generate fully-connected image features.

In an embodiment of the present application, based on the fully-connected layer and the deconvolution layer included in the generator described in the above embodiment, the generator in the embodiment of the present application performs fully-connected processing on the image features of the image to be processed, and generates fully-connected image features.

The process in step S382 of performing full-connection processing on the image features in the generator to generate fully-connected image features includes steps S3821 to S3822:

in step S3821, performing standardization processing on the fully-connected image features to generate standardized image features;

in step S3822, mapping the standardized image features based on a preset leaky rectified linear unit to obtain the corrected fully-connected image features.

After the fully-connected image features are obtained, they are subjected to standardization processing to generate standardized image features, which are then mapped based on a preset leaky rectified linear unit to obtain the corrected fully-connected image features. In this way, the fully-connected image features obtained in this embodiment are more accurate and comprehensive, ensuring that the features of the image to be processed are fully covered.

And step S383, performing deconvolution processing on the fully-connected image features based on the set convolution kernels and strides to generate a derivative image corresponding to the image to be processed.

In an embodiment of the application, after the fully-connected image features are obtained, deconvolution processing is performed on them at least twice based on the convolution kernels and strides corresponding to the respective deconvolution layers, generating the derivative image corresponding to the image to be processed.

For example, the deconvolution processing in the present embodiment may include three deconvolution operations. In the first deconvolution operation, 128 convolution kernels of size 5x5 are used with a stride of 1, SAME padding, a relu activation function, and no bias; in the second, 64 convolution kernels of size 5x5 are used with a stride of 2, SAME padding, a relu activation function, and no bias; in the third, 1 convolution kernel of size 5x5 is used with a stride of 2, SAME padding, a tanh (hyperbolic tangent) activation function, and no bias.

In step S390, the process of comparing the derivative image with the image to be processed by the discriminator to generate a first security factor corresponding to the similarity distribution between the derivative image and the image to be processed specifically includes:

in one embodiment of the present application, after the derivative image is generated, the derivative image and the image to be processed are compared by a discriminator to determine the authenticity of the generated extended image. Specifically, determining similarity distribution between the derived image and the image to be processed based on a convolutional neural network in the discriminator; and generating a first safety factor corresponding to the similarity distribution based on the similarity distribution, namely generating a countermeasure network safety factor by deep convolution.

The image to be processed comprises a face brushing image collected during face brushing payment, and the method further comprises the following steps S1410 to S1420:

step S1410, after generating the encrypted information corresponding to the face brushing image, sending the encrypted information to the server so that the server verifies the transaction information of the user based on the encrypted information;

in step S1420, the deduction information sent by the server is displayed on the interface, and the deduction information is generated by the server based on the verification result of the transaction information.

As shown in fig. 17, in an embodiment of the present application, after the terminal device 1201 generates the encrypted information corresponding to the face brushing data based on the image security factor 1203, the encrypted information is sent to the server 1202, so that the transaction information of the user is verified by the server based on the encrypted information. And then, the server generates deduction information based on the verification result of the transaction information and sends the deduction information to the terminal equipment, so that the terminal displays the deduction information sent by the server on an interface.

Specifically, after the server acquires the encrypted information, it verifies the information by encoding and decoding the account information entered when the payment account was first registered, such as the face feature information, to generate verification information corresponding to the face feature information. The encrypted information is then compared with the verification information; if the two are consistent, the transaction information of the user is verified, and the subsequent deduction processing continues.

As shown in fig. 18, after the terminal device acquires the deduction information transmitted by the server, collection information, such as the name of the payee, the collection amount, and the like, is displayed.

As shown in fig. 19, after the terminal receives the information of the face brushing payment failure, the face brushing payment failure is displayed in the interface, and the following touch operation options are displayed, such as face brushing again, password payment or payment cancellation, and the like.

In this embodiment, after the encrypted information corresponding to the face brushing image is generated, the encrypted information is sent to the server, and the encrypted information is verified by the server. When verification passing information returned by the server is not acquired, the face brushing image of the user is continuously acquired, new encryption information is generated based on the face brushing image and is sent to the server until the verification passing information returned by the server is acquired.

Embodiments of the apparatus of the present application are described below, which may be used to perform the image encryption methods of the above-described embodiments of the present application. It will be appreciated that the apparatus may be a computer program (comprising program code) running on a computer device, for example application software, and may be used to perform the corresponding steps in the methods provided by the embodiments of the present application. For details not disclosed in the apparatus embodiments of the present application, please refer to the embodiments of the image encryption method described above.

Fig. 20 shows a block diagram of an image encryption apparatus according to an embodiment of the present application.

Referring to fig. 20, an image encryption apparatus 2000 according to an embodiment of the present application includes: an acquiring unit 2010 configured to acquire an image to be processed that is to be encrypted; a derivation unit 2020 configured to generate a derivative image associated with the image to be processed based on the image to be processed, and determine a first security factor based on the degree of association between the derivative image and the image to be processed; a factor unit 2030 configured to generate an image security factor based on the first security factor and a second security factor generated based on an encryption parameter of the image to be processed; and an encrypting unit 2040 configured to encrypt the image to be processed based on the image security factor and generate encrypted information corresponding to the image to be processed.

In some embodiments of the present application, based on the foregoing scheme, the derivation unit 2020 comprises: the posterior unit is used for coding the image characteristics of the image to be processed based on a deconvolution network to generate posterior distribution corresponding to a derivative image of the image to be processed; the condition unit is used for decoding the posterior distribution based on a convolutional network to generate a condition distribution corresponding to the posterior distribution, wherein the condition distribution is used for representing the correlation degree between the posterior distribution and the image to be processed; a first safety factor unit to determine the first safety factor based on the conditional distribution.

In some embodiments of the present application, based on the foregoing scheme, the posterior unit includes: the potential information unit is used for extracting potential information containing image features from the image to be processed; and the coding unit is used for coding the potential information based on the deconvolution network to generate the posterior distribution corresponding to the derivative image.

In some embodiments of the present application, based on the foregoing scheme, the encoding unit is configured to: fully connecting the potential information in the deconvolution network to generate a first feature; resampling the first features to generate second features; and carrying out deconvolution-based coding on the second features to generate a posterior distribution corresponding to the derivative image.

In some embodiments of the present application, based on the foregoing scheme, the condition unit includes: the sampling unit is used for sampling the posterior distribution to obtain sampling information; and the decoding unit is used for inputting the sampling information into a decoder formed by a convolutional network for decoding and generating the condition distribution corresponding to the posterior distribution.

In some embodiments of the present application, based on the foregoing scheme, the sampling unit is configured to: and sampling the posterior distribution based on a reparameterization mode to obtain sampling information.

In some embodiments of the present application, based on the foregoing scheme, the decoding unit is configured to: inputting the sampling information into a convolutional layer in the decoder for convolution processing to generate a third characteristic; wherein the decoder comprises at least two convolutional layers; and inputting the third characteristic into a full-connection layer in the decoder for mapping processing to generate a condition distribution corresponding to the posterior distribution.

In some embodiments of the present application, based on the foregoing scheme, the derivation unit 2020 comprises: the device comprises a first generation unit, a second generation unit and a third generation unit, wherein the first generation unit is used for carrying out processing based on full connection and deconvolution on an image to be processed in a generator to generate a derivative image corresponding to the image to be processed; and the second generation unit is used for comparing the derivative image with the image to be processed through a discriminator to generate a first safety factor corresponding to the similarity distribution between the derivative image and the image to be processed.

In some embodiments of the present application, based on the foregoing scheme, the first generating unit includes: the extraction unit is used for extracting image features from the image to be processed; a third generating unit, configured to perform full-join processing on the image features in a generator to generate full-join image features; and the fourth generation unit is used for carrying out deconvolution processing on the fully-connected image characteristics based on the set convolution kernel and the set step length to generate a derivative image corresponding to the image to be processed.

In some embodiments of the present application, based on the foregoing scheme, the image encryption apparatus 2000 is further configured to: perform standardization processing on the fully-connected image features to generate standardized image features; and map the standardized image features based on a preset leaky rectified linear unit to obtain the corrected fully-connected image features.

In some embodiments of the application, based on the foregoing scheme, the fourth generating unit is configured to perform deconvolution processing on the fully-connected image features at least twice based on the convolution kernels and strides corresponding to the respective deconvolution layers, generating a derivative image corresponding to the image to be processed.

In some embodiments of the present application, based on the foregoing scheme, the extraction unit is configured to: converting the image to be processed into a gray image; determining an image data set corresponding to the gray level image based on the Bernoulli distribution corresponding to the gray level image; and carrying out standardization processing on the image data set to generate the image characteristics.

In some embodiments of the present application, based on the foregoing scheme, the second generating unit is configured to: determining similarity distribution between the derived image and the image to be processed based on a convolutional neural network in the discriminator; and generating a first safety factor corresponding to the similarity distribution based on the similarity distribution.

In some embodiments of the present application, based on the foregoing scheme, the encryption parameters corresponding to the image to be processed include at least one of: the serial number of the acquisition equipment that captured the image to be processed, the signature version used for signing the image to be processed, a timestamp, a counter, and a random character string corresponding to the encrypted information of the image to be processed; the image encryption apparatus 2000 is further configured to: splice the encryption parameters corresponding to the image to be processed based on a set splicing rule to generate the second security factor.
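A minimal sketch of such splicing, assuming a fixed field order joined by a '|' delimiter; the actual splicing rule is whatever "set splicing rule" the embodiment configures, and the helper name is hypothetical.

```python
import secrets
import time

def second_security_factor(device_serial: str, signature_version: str, counter: int) -> str:
    # Assumed splicing rule: fields joined by '|' in a fixed order.
    timestamp = str(int(time.time()))        # timestamp parameter
    random_string = secrets.token_hex(8)     # random character string parameter
    return "|".join([device_serial, signature_version, timestamp, str(counter), random_string])
```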

In some embodiments of the present application, the image encryption apparatus 2000 is further configured to: acquiring an image sample to be trained; inputting image features contained in the image samples into an encoder constructed based on a deconvolution network, and generating posterior distribution containing the image sample features; inputting the posterior distribution into a decoder constructed based on a convolution network to generate a condition distribution corresponding to the posterior distribution; calculating a loss value corresponding to the training based on the condition distribution, the posterior distribution and a preset loss function; and updating parameters in the encoder and parameters in the decoder based on the loss values, and generating the encoder for encoding and the decoder for decoding.

In some embodiments of the present application, the image encryption apparatus 2000 is further configured to: converting the image sample into a gray level image; determining an image data set corresponding to the gray level image based on the Bernoulli distribution corresponding to the gray level image; and carrying out binarization processing on the image data set to generate the image characteristics.

In some embodiments of the present application, based on the foregoing scheme, the encryption unit 2040 is configured to: and based on the image security factor, encrypting the image to be processed through an asymmetric encryption algorithm to generate encrypted information corresponding to the image to be processed.
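As a hedged sketch of such asymmetric encryption, the example below binds the image to its security factor through a SHA-256 digest and encrypts that digest with RSA-OAEP from the cryptography library; binding via a digest is an assumption made here because RSA cannot encrypt a full image directly, and it is not necessarily the exact scheme of this embodiment.

```python
import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

def encrypt_image_info(image_bytes: bytes, image_security_factor: str, public_key) -> bytes:
    # Assumed scheme: hash the security factor together with the image, then
    # encrypt the digest asymmetrically so only the private-key holder can verify it.
    digest = hashlib.sha256(image_security_factor.encode() + image_bytes).digest()
    return public_key.encrypt(
        digest,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )

# Usage sketch with a freshly generated key pair (in practice the server holds the keys).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ciphertext = encrypt_image_info(b"...image bytes...", "example-factor", private_key.public_key())
```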

In some embodiments of the present application, the image encryption apparatus 2000 is further configured to: after generating the encrypted information corresponding to the face brushing image, sending the encrypted information to a server so that the server verifies the transaction information of the user based on the encrypted information; and displaying deduction information sent by a server on an interface, wherein the deduction information is information generated by the server based on a verification result of the transaction information.

In some embodiments of the present application, the image encryption apparatus 2000 is further configured to: when verification passing information returned by the server is not acquired, the face brushing image of the user is continuously acquired, new encryption information is generated based on the face brushing image and is sent to the server until the verification passing information returned by the server is acquired.

FIG. 21 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.

It should be noted that the computer system 2100 of the electronic device shown in fig. 21 is only an example, and should not impose any limitation on the functions or scope of use of the embodiments of the present application.

As shown in fig. 21, the computer system 2100 includes a Central Processing Unit (CPU)2101, which can perform various appropriate actions and processes, such as performing the methods described in the above embodiments, according to a program stored in a Read-Only Memory (ROM) 2102 or a program loaded from a storage portion 2108 into a Random Access Memory (RAM) 2103. In the RAM 2103, various programs and data necessary for system operation are also stored. The CPU 2101, ROM 2102 and RAM 2103 are connected to each other via a bus 2104. An Input/Output (I/O) interface 2105 is also connected to bus 2104.

The following components are connected to the I/O interface 2105: an input portion 2106 including a keyboard, a mouse, and the like; an output portion 2107 including a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD) display, a speaker, and the like; a storage portion 2108 including a hard disk and the like; and a communication portion 2109 including a network interface card such as a LAN (Local Area Network) card or a modem. The communication portion 2109 performs communication processing via a network such as the Internet. A drive 2110 is also connected to the I/O interface 2105 as necessary. A removable medium 2111, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 2110 as necessary, so that a computer program read therefrom is installed into the storage portion 2108 as needed.

In particular, according to embodiments of the application, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 2109, and/or installed from the removable medium 2111. When the computer program is executed by the Central Processing Unit (CPU) 2101, various functions defined in the system of the present application are executed.

It should be noted that the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with a computer program embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. The computer program embodied on the computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

The units described in the embodiments of the present application may be implemented by software or by hardware, and the described units may also be disposed in a processor. The names of these units do not constitute a limitation on the units themselves in any case.

According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided in the various alternative implementations described above.

As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method described in the above embodiments.

It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to embodiments of the application, the features and functionality of two or more modules or units described above may be embodied in one module or unit; conversely, the features and functions of one module or unit described above may be further divided and embodied by a plurality of modules or units.

Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which can be a personal computer, a server, a touch terminal, or a network device, etc.) to execute the method according to the embodiments of the present application.

Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains.

It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
