Certificate copy information identification method, server and storage medium

Document No.: 1379186    Publication date: 2020-08-14

Note: This technique, "Certificate copy information identification method, server and storage medium", was designed and created by 叶颖琦, 蒋栋, 李龙, 李翔, 杜晨冰, 万正勇, 沈志勇 and 高宏 on 2020-04-21. Its main content is as follows: The invention discloses a method for identifying information of a certificate copy, applied to a server. The method comprises: receiving a first image uploaded by a client; performing tilt correction on the first image to obtain a second image; cutting the second image to obtain at least one third image; inputting each third image into a tensor extraction model and outputting the corresponding field tensor; analyzing the field tensor corresponding to each first field to obtain a corresponding second field as the identification result of each field in the first image; judging whether an identified first field is shielded by a watermark in the second image; if so, finding a check field in the second image whose attribute is consistent with that of the first field, acquiring the second field corresponding to the preset check field, and checking whether the second field corresponding to the first field shielded by the watermark is the same as the second field corresponding to the preset check field; if they are the same, the identification result of the first field shielded by the watermark is correct. The invention can improve the accuracy of copy information extraction.

1. A certificate copy information identification method is applied to a server and is characterized by comprising the following steps:

a receiving step: receiving a first image containing a certificate copy uploaded by a client, and performing tilt correction on the first image according to a predetermined correction rule to obtain a second image;

a processing step: cutting the second image to obtain at least one third image containing a first field, inputting the third image into a pre-trained tensor extraction model, and outputting a field tensor corresponding to the first field;

an analyzing step: analyzing the field tensor to obtain a second field, and taking the second field as an identification result of the first field in the first image; and

a checking step: judging whether the first field is shielded by a watermark in the second image according to a preset watermark identification rule; if so, searching the second image for a preset check field whose attribute is consistent with that of the first field, acquiring a second field corresponding to the preset check field, and judging whether the second field corresponding to the first field shielded by the watermark is the same as the second field corresponding to the preset check field; and if so, judging that the identification result of the first field shielded by the watermark is correct.

2. The method of identifying certificate copy information as in claim 1, wherein the correction rule comprises:

acquiring a first straight line segment with the length less than or equal to a first preset length in the first image;

determining all second straight line segments with inclination angles smaller than or equal to a first preset angle from the first straight line segments;

respectively calculating the difference value of the y coordinates of the center points of every two second straight line segments, and classifying the second straight line segments corresponding to the difference value smaller than or equal to a preset threshold value into one class;

fitting each type of second straight line segment by using a least square method to obtain a fitting straight line corresponding to each type of second straight line segment; and

and calculating the slope of each fitting straight line, and the median of all the slopes and the mean of the slopes, determining the smaller of the median and the mean as the slope of an inclined line segment in the first image, and adjusting the inclination angle of the first image according to the determined slope.

3. The method for identifying information on a document copy of claim 1, wherein the cutting of the second image is by a preset field cutting algorithm, the field cutting algorithm comprising:

creating a mapping relation between a preset field template image and a cutting frame in a database in advance;

respectively inputting the second image and a preset field template image into a sliding window model, and taking the preset field template image as a sliding window, wherein the size of the sliding window is consistent with that of the preset field template image;

traversing and searching the position of the first field in the second image by using the sliding window, calculating the similarity value between the preset field template image and the image area covered by the sliding window on the second image, and marking the image area corresponding to the image area with the maximum similarity value as a marked area; and

and finding a cutting frame corresponding to the preset field template image from the database according to the mapping relation, shifting the cutting frame to the right by a preset size relative to a marking area to serve as an area where the first field is located, and taking an image obtained by cutting the area where the first field is located as the third image.

4. The method for recognizing the information of the document copy according to claim 1, wherein the inputting the third image into a pre-trained tensor extraction model to output a field tensor corresponding to the first field comprises:

extracting a first feature vector of the third image by using a convolution layer and a pooling layer of the tensor extraction model;

extracting a second feature vector of the third image based on the first feature vector and a DenseNet structure and a full connection layer of the tensor extraction model; and

and uniformly dividing the second feature vector into w parts, taking each divided feature vector as a row tensor, and sequentially stacking the row tensors to obtain an n × w probability distribution matrix as the field tensor, wherein n represents the number of character classes of the first field and w represents the maximum character length of the first field.

5. The method for identifying information on a document copy of claim 4, wherein the analyzing step of analyzing the field tensor to obtain the second field utilizes a tensor analysis algorithm, and the tensor analysis algorithm includes:

traversing the probability value of each column in the probability distribution matrix corresponding to the field tensor, determining the position of the maximum value of the probability value in each column, and taking the character corresponding to the position as the field code corresponding to the column;

repeating the operation until the field tensor is completely analyzed to obtain the second field; and

and taking the second field as the identification result of the first field in the first image.

6. The method of identifying credential copy information as in claim 1, wherein the watermark identification rules comprise:

inputting the second image and a pre-created watermark template image into a sliding window model respectively, and taking the watermark template image as a sliding window, wherein the size of the sliding window is consistent with that of the watermark template image; and

and traversing and searching the position of the watermark in the second image through the sliding window, calculating the similarity value between the watermark template image and the image area covered by the sliding window on the second image, and selecting the image area corresponding to the image area with the maximum similarity value as the area shielded by the watermark, namely the first field shielded by the watermark.

7. A server, comprising a memory and a processor, the memory having stored thereon a credential copy information identification program that when executed by the processor performs the steps of:

a receiving step: receiving a first image containing a certificate copy uploaded by a client, and performing tilt correction on the first image according to a predetermined correction rule to obtain a second image;

a processing step: cutting the second image to obtain at least one third image containing a first field, inputting the third image into a pre-trained tensor extraction model, and outputting a field tensor corresponding to the first field;

an analyzing step: analyzing the field tensor to obtain a second field, and taking the second field as an identification result of the first field in the first image; and

a checking step: judging whether the first field is shielded by a watermark in the second image according to a preset watermark identification rule; if so, searching the second image for a preset check field whose attribute is consistent with that of the first field, acquiring a second field corresponding to the preset check field, and judging whether the second field corresponding to the first field shielded by the watermark is the same as the second field corresponding to the preset check field; and if so, judging that the identification result of the first field shielded by the watermark is correct.

8. The server of claim 7, wherein the correction rule comprises:

acquiring a first straight line segment with the length less than or equal to a first preset length in the first image;

determining all second straight line segments with inclination angles smaller than or equal to a first preset angle from the first straight line segments;

respectively calculating the difference value of the y coordinates of the center points of every two second straight line segments, and classifying the second straight line segments corresponding to the difference value smaller than or equal to a preset threshold value into one class;

fitting each type of second straight line segment by using a least square method to obtain a fitting straight line corresponding to each type of second straight line segment; and

and calculating the slope of each fitting straight line, and the median of all the slopes and the mean of the slopes, determining the smaller of the median and the mean as the slope of an inclined line segment in the first image, and adjusting the inclination angle of the first image according to the determined slope.

9. The server of claim 8, wherein the cutting of the second image is with a preset field cutting algorithm comprising:

creating a mapping relation between a preset field template image and a cutting frame in a database in advance;

respectively inputting the second image and a preset field template image into a sliding window model, and taking the preset field template image as a sliding window, wherein the size of the sliding window is consistent with that of the preset field template image;

traversing and searching the position of the first field in the second image by using the sliding window, calculating the similarity value between the preset field template image and the image area covered by the sliding window on the second image, and marking the image area corresponding to the image area with the maximum similarity value as a marked area; and

and finding a cutting frame corresponding to the preset field template image from the database according to the mapping relation, shifting the cutting frame to the right by a preset size relative to a marking area to serve as an area where the first field is located, and taking an image obtained by cutting the area where the first field is located as the third image.

10. A computer-readable storage medium having stored thereon a credential copy information identification program executable by one or more processors to implement the steps of the credential copy information identification method as claimed in any one of claims 1 to 6.

Technical Field

The invention relates to the technical field of data processing, in particular to a certificate copy information identification method, a server and a storage medium.

Background

OCR recognition techniques are commonly used to extract certificate-related information, such as the document information in certificate archives, historical archives and the like. However, when the object to be extracted is a certificate copy, the copy is blurred and its resolution is low compared with the original certificate, and it often carries a watermark, so the OCR recognition accuracy is not ideal. Therefore, how to improve the accuracy of copy information extraction has become a technical problem that needs to be solved urgently.

Disclosure of Invention

The invention mainly aims to provide a certificate copy information identification method, a server and a storage medium, aiming at solving the problem of how to improve the accuracy of copy information extraction.

In order to achieve the above object, the present invention provides a method for identifying information of a document copy, which is applied to a server, and the method comprises:

a receiving step: receiving a first image containing a certificate copy uploaded by a client, and performing tilt correction on the first image according to a predetermined correction rule to obtain a second image;

a processing step: cutting the second image to obtain at least one third image containing a first field, inputting the third image into a pre-trained tensor extraction model, and outputting a field tensor corresponding to the first field;

an analyzing step: analyzing the field tensor to obtain a second field, and taking the second field as an identification result of the first field in the first image; and

a checking step: judging whether the first field is shielded by a watermark in the second image according to a preset watermark identification rule; if so, searching the second image for a preset check field whose attribute is consistent with that of the first field, acquiring a second field corresponding to the preset check field, and judging whether the second field corresponding to the first field shielded by the watermark is the same as the second field corresponding to the preset check field; and if so, judging that the identification result of the first field shielded by the watermark is correct.

Preferably, the correction rule comprises:

acquiring a first straight line segment with the length less than or equal to a first preset length in the first image;

determining all second straight line segments with inclination angles smaller than or equal to a first preset angle from the first straight line segments;

respectively calculating the difference value of the y coordinates of the center points of every two second straight line segments, and classifying the second straight line segments corresponding to the difference value smaller than or equal to a preset threshold value into one class;

fitting each type of second straight line segment by using a least square method to obtain a fitting straight line corresponding to each type of second straight line segment; and

and calculating the slope of each fitting straight line, and the median of all the slopes and the mean of the slopes, determining the smaller of the median and the mean as the slope of an inclined line segment in the first image, and adjusting the inclination angle of the first image according to the determined slope.

Preferably, the cutting the second image is performed by using a preset field cutting algorithm, and the field cutting algorithm includes:

creating a mapping relation between a preset field template image and a cutting frame in a database in advance;

respectively inputting the second image and a preset field template image into a sliding window model, and taking the preset field template image as a sliding window, wherein the size of the sliding window is consistent with that of the preset field template image;

traversing and searching the position of the first field in the second image by using the sliding window, calculating the similarity value between the preset field template image and the image area covered by the sliding window on the second image, and marking the image area corresponding to the image area with the maximum similarity value as a marked area; and

and finding a cutting frame corresponding to the preset field template image from the database according to the mapping relation, shifting the cutting frame to the right by a preset size relative to a marking area to serve as an area where the first field is located, and taking an image obtained by cutting the area where the first field is located as the third image.

Preferably, the inputting the third image into a pre-trained tensor extraction model to output a field tensor corresponding to the first field includes:

extracting a first feature vector of the third image by using a convolution layer and a pooling layer of the tensor extraction model;

extracting a second feature vector of the third image based on the first feature vector and a DenseNet structure and a full connection layer of the tensor extraction model; and

and uniformly dividing the second feature vector into w parts, taking each divided feature vector as a row tensor, and sequentially stacking the row tensors to obtain an n × w probability distribution matrix as the field tensor, wherein n represents the number of character classes of the first field and w represents the maximum character length of the first field.

Preferably, the analyzing the field tensor to obtain the second field in the analyzing step utilizes a tensor analysis algorithm, and the tensor analysis algorithm includes:

traversing the probability value of each column in the probability distribution matrix corresponding to the field tensor, determining the position of the maximum value of the probability value in each column, and taking the character corresponding to the position as the field code corresponding to the column;

repeating the operation until the field tensor is completely analyzed to obtain the second field; and

and taking the second field as the identification result of the first field in the first image.

Preferably, the watermark identification rule includes:

inputting the second image and a pre-created watermark template image into a sliding window model respectively, and taking the watermark template image as a sliding window, wherein the size of the sliding window is consistent with that of the watermark template image; and

and traversing and searching the position of the watermark in the second image through the sliding window, calculating the similarity value between the watermark template image and the image area covered by the sliding window on the second image, and selecting the image area corresponding to the image area with the maximum similarity value as the area shielded by the watermark, namely the first field shielded by the watermark.

In order to achieve the above object, the present invention further provides a server, including a memory and a processor, where the memory stores a certificate copy information identification program, and the certificate copy information identification program, when executed by the processor, implements the following steps:

a receiving step: receiving a first image containing a certificate copy uploaded by a client, and performing tilt correction on the first image according to a predetermined correction rule to obtain a second image;

a processing step: cutting the second image to obtain at least one third image containing a first field, inputting the third image into a pre-trained tensor extraction model, and outputting a field tensor corresponding to the first field;

an analyzing step: analyzing the field tensor to obtain a second field, and taking the second field as an identification result of the first field in the first image; and

a checking step: judging whether the first field is shielded by a watermark in the second image according to a preset watermark identification rule; if so, searching the second image for a preset check field whose attribute is consistent with that of the first field, acquiring a second field corresponding to the preset check field, and judging whether the second field corresponding to the first field shielded by the watermark is the same as the second field corresponding to the preset check field; and if so, judging that the identification result of the first field shielded by the watermark is correct.

Preferably, the correction rule comprises:

acquiring a first straight line segment with the length less than or equal to a first preset length in the first image;

determining all second straight line segments with inclination angles smaller than or equal to a first preset angle from the first straight line segments;

respectively calculating the difference value of the y coordinates of the center points of every two second straight line segments, and classifying the second straight line segments corresponding to the difference value smaller than or equal to a preset threshold value into one class;

fitting each type of second straight line segment by using a least square method to obtain a fitting straight line corresponding to each type of second straight line segment; and

and calculating the slope of each fitting straight line, and the median of all the slopes and the mean of the slopes, determining the smaller of the median and the mean as the slope of an inclined line segment in the first image, and adjusting the inclination angle of the first image according to the determined slope.

Preferably, the cutting the second image is performed by using a preset field cutting algorithm, and the field cutting algorithm includes:

creating a mapping relation between a preset field template image and a cutting frame in a database in advance;

respectively inputting the second image and a preset field template image into a sliding window model, and taking the preset field template image as a sliding window, wherein the size of the sliding window is consistent with that of the preset field template image;

traversing and searching the position of the first field in the second image by using the sliding window, calculating the similarity value between the preset field template image and the image area covered by the sliding window on the second image, and marking the image area corresponding to the image area with the maximum similarity value as a marked area; and

and finding a cutting frame corresponding to the preset field template image from the database according to the mapping relation, shifting the cutting frame to the right by a preset size relative to a marking area to serve as an area where the first field is located, and taking an image obtained by cutting the area where the first field is located as the third image.

To achieve the above object, the present invention further provides a computer readable storage medium having stored thereon a credential copy information identification program executable by one or more processors to implement the steps of the credential copy information identification method as described above.

The certificate copy information identification method, server and storage medium provided by the invention receive a first image uploaded by a client, perform tilt correction on the first image to obtain a second image, cut the second image to obtain at least one third image, input each third image into a tensor extraction model to output a corresponding field tensor, and analyze the field tensor corresponding to each first field to obtain a corresponding second field as the identification result of each field in the first image. Whether an identified first field is shielded by a watermark in the second image is then judged; if so, a check field whose attribute is consistent with that of the first field is found in the second image, the second field corresponding to the preset check field is acquired, and whether the second field corresponding to the first field shielded by the watermark is the same as the second field corresponding to the preset check field is calculated; if they are the same, the identification result of the first field shielded by the watermark is correct. The invention can improve the accuracy of copy information extraction.

Drawings

FIG. 1 is a diagram of an application environment of a server according to a preferred embodiment of the present invention;

FIG. 2 is a schematic diagram of program modules of a preferred embodiment of the credential copy information identification process of FIG. 1;

FIG. 3 is a flowchart illustrating a method for identifying information of a document copy according to a preferred embodiment of the present invention.

The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.

Detailed Description

In order to make the objects, technical embodiments and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

It should be noted that the description relating to "first", "second", etc. in the present invention is for descriptive purposes only and is not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical embodiments of the present invention may be combined with each other, but it must be based on the realization of those skilled in the art, and when the combination of the technical embodiments contradicts each other or cannot be realized, such combination of the technical embodiments should be considered to be absent and not within the protection scope of the present invention.

The invention provides a server 1.

The server 1 includes, but is not limited to, a memory 11, a processor 12, and a network interface 13.

The memory 11 includes at least one type of readable storage medium, which includes a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, and the like. The memory 11 may in some embodiments be an internal storage unit of the server 1, for example a hard disk of the server 1. The memory 11 may also be an external storage device of the server 1 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, provided on the server 1.

Further, the memory 11 may also include both an internal storage unit of the server 1 and an external storage device. The memory 11 can be used not only to store application software installed in the server 1 and various types of data such as a code of the certificate copy information recognition program 10, but also to temporarily store data that has been output or is to be output.

The processor 12, which in some embodiments may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor or other data processing chip, runs program code or processes data stored in the memory 11, for example executing the certificate copy information identification program 10.

The network interface 13 may optionally comprise a standard wired interface, a wireless interface (e.g. WI-FI interface), typically used for establishing a communication connection between the server 1 and other electronic devices.

The client can be a desktop computer, a notebook, a tablet computer, a mobile phone, and the like.

The network may be the internet, a cloud network, a wireless fidelity (Wi-Fi) network, a Personal Area Network (PAN), a Local Area Network (LAN), and/or a Metropolitan Area Network (MAN). Various devices in the network environment may be configured to connect to the communication network according to various wired and wireless communication protocols. Examples of such wired and wireless communication protocols may include, but are not limited to, at least one of: transmission control protocol and internet protocol (TCP/IP), User Datagram Protocol (UDP), hypertext transfer protocol (HTTP), File Transfer Protocol (FTP), ZigBee, EDGE, IEEE 802.11, optical fidelity (Li-Fi), 802.16, IEEE 802.11s, IEEE 802.11g, multi-hop communications, wireless access points (APs), device-to-device communication, cellular communication protocols, and/or the Bluetooth communication protocol, or a combination thereof.

Optionally, the server 1 may further comprise a user interface. The user interface may comprise a display and an input unit such as a keyboard, and may optionally further comprise a standard wired interface and a wireless interface. In some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is used for displaying information processed in the server 1 and for displaying a visualized user interface.

While fig. 1 shows only a server 1 having components 11-13 and a credential copy information identification program 10, those skilled in the art will appreciate that the configuration shown in fig. 1 is not limiting of server 1 and may include fewer or more components than shown, or some components in combination, or a different arrangement of components.

In this embodiment, the credential copy information identification program 10 of fig. 1, when executed by the processor 12, implements the steps of:

a receiving step: receiving a first image containing a certificate copy uploaded by a client, and performing tilt correction on the first image according to a predetermined correction rule to obtain a second image;

a processing step: cutting the second image to obtain at least one third image containing a first field, inputting the third image into a pre-trained tensor extraction model, and outputting a field tensor corresponding to the first field;

an analyzing step: analyzing the field tensor to obtain a second field, and taking the second field as an identification result of the first field in the first image; and

a checking step: judging whether the first field is shielded by a watermark in the second image according to a preset watermark identification rule; if so, searching the second image for a preset check field whose attribute is consistent with that of the first field, acquiring a second field corresponding to the preset check field, and judging whether the second field corresponding to the first field shielded by the watermark is the same as the second field corresponding to the preset check field; and if so, judging that the identification result of the first field shielded by the watermark is correct.

For a detailed description of the above steps, please refer to the following description of fig. 2 regarding a schematic diagram of program modules of an embodiment of the identification program 10 for identification of credential copy information and fig. 3 regarding a schematic diagram of a method flow of an embodiment of a method for identification of credential copy information.

Referring to FIG. 2, a schematic diagram of the program modules of an embodiment of the certificate copy information identification program 10 of FIG. 1 is shown. The certificate copy information identification program 10 is divided into a plurality of modules, which are stored in the memory 11 and executed by the processor 12 to carry out the present invention. A module, as referred to herein, is a series of computer program instruction segments capable of performing a specified function.

In this embodiment, the certificate copy information identification program 10 includes a receiving module 110, a processing module 120, an analyzing module 130, and a verifying module 140.

The receiving module 110 is configured to receive a first image containing a certificate copy uploaded by a client, and perform tilt correction on the first image according to a predetermined correction rule to obtain a second image.

In this embodiment, the first image may be an identity card image or another certificate image to be recognized. It may be an original image captured by the client, for example by its camera, or by another capture terminal that has shooting and data transmission functions, and it may have an inclination angle. If a first image with an inclination angle is input directly into the subsequent tensor extraction model, the accuracy of image identification may be reduced. Therefore, in this embodiment, before tensor extraction is performed on the first image, tilt correction is applied to it according to the predetermined correction rule to obtain the second image.

The correction rule comprises:

acquiring first straight line segments with a length smaller than or equal to a first preset length (for example, 0.3 cm) in the first image; specifically, the first straight line segments may be obtained with the probabilistic Hough transform;

determining all second straight line segments with inclination angles smaller than or equal to a first preset angle (for example, 5°) from the first straight line segments;

respectively calculating the difference value of the y coordinates of the central points of every two second straight-line segments, and classifying the second straight-line segments corresponding to the difference value smaller than or equal to a preset threshold (for example, 0.6cm) into one class;

fitting each type of second straight line segment by using a least square method to obtain a fitting straight line corresponding to each type of second straight line segment; and

and calculating the slope of each fitting straight line, and the median of all the slopes and the mean of the slopes, determining the smaller of the median and the mean as the slope of an inclined line segment in the first image, and adjusting the inclination angle of the first image according to the determined slope.

For example, three second straight line segments are obtained from the first image: a (length 0.1 cm, inclination angle 4°, center point y coordinate 0.1), b (length 0.2 cm, inclination angle 3°, center point y coordinate 0.2) and c (length 0.3 cm, inclination angle 2°, center point y coordinate 0.3). The pairwise differences between their center point y coordinates are all less than 0.6, so a, b and c fall into the same class. The best function match (namely the fitting straight line) for the second straight line segments of the same class is then found with the least square method by minimizing the sum of squared errors, and the inclination angle of the first image is adjusted according to the smaller of the median and the mean of the fitted slopes. The corrected second image helps to improve the accuracy of the subsequent tensor extraction model when extracting the tensor of the first field.

When the slope corresponding to the smaller value is positive, the corresponding inclination angle is acute or right; the angle corresponding to the slope is subtracted from 90 degrees to obtain the angle to be corrected, and the first image is rotated counterclockwise by the angle to be corrected to obtain the second image;

and when the slope corresponding to the smaller value is negative, the corresponding inclination angle is obtuse; 90 degrees is subtracted from the angle corresponding to the slope to obtain the angle to be corrected, and the first image is rotated clockwise by the angle to be corrected to obtain the second image.
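As an illustration only, the correction rule above may be sketched as follows in Python, assuming OpenCV is available; the pixel thresholds, the probabilistic Hough parameters and the greedy grouping by center-point y coordinate are illustrative choices rather than values prescribed by this embodiment.

```python
import cv2
import numpy as np

def estimate_skew_slope(first_image, max_len=30, max_angle=5.0, y_gap=20):
    """Estimate the skew slope of the first image from short, nearly horizontal segments."""
    gray = cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    # probabilistic Hough transform yields candidate first straight line segments
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=30,
                            minLineLength=5, maxLineGap=3)
    if lines is None:
        return 0.0

    segments = []
    for x1, y1, x2, y2 in lines[:, 0]:
        length = np.hypot(x2 - x1, y2 - y1)
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        # keep second straight line segments: short enough and nearly horizontal
        if length <= max_len and angle <= max_angle:
            segments.append((x1, y1, x2, y2, (y1 + y2) / 2.0))
    if not segments:
        return 0.0

    # group segments whose center-point y coordinates differ by at most y_gap
    segments.sort(key=lambda s: s[4])
    groups, current = [], [segments[0]]
    for seg in segments[1:]:
        if seg[4] - current[-1][4] <= y_gap:
            current.append(seg)
        else:
            groups.append(current)
            current = [seg]
    groups.append(current)

    # least-squares fit per class, then take the smaller of the median and mean slope
    slopes = []
    for group in groups:
        xs = np.array([v for s in group for v in (s[0], s[2])], dtype=float)
        ys = np.array([v for s in group for v in (s[1], s[3])], dtype=float)
        if len(set(xs)) > 1:
            slopes.append(np.polyfit(xs, ys, 1)[0])
    return min(np.median(slopes), np.mean(slopes)) if slopes else 0.0
```

The returned slope would then be converted to the angle to be corrected and the first image rotated clockwise or counterclockwise as described above.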

The processing module 120 is configured to cut the second image to obtain at least one third image including a first field, and input the third image into a pre-trained tensor extraction model to output a field tensor corresponding to the first field.

In order to eliminate the influence of illumination and reflection interference, which the client may suffer during shooting, on subsequent data processing, in this embodiment the third image needs to be preprocessed before being input into the tensor extraction model. The preprocessing comprises Gaussian filtering, mean filtering, Gamma correction, histogram equalization and the like.
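A minimal sketch of this preprocessing, assuming OpenCV; the kernel sizes and the gamma value are illustrative and not prescribed by the embodiment:

```python
import cv2
import numpy as np

def preprocess_field_image(third_image, gamma=1.2):
    """Suppress lighting/reflection interference in a cut field image before tensor extraction."""
    gray = cv2.cvtColor(third_image, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (3, 3), 0)                      # Gaussian filtering
    gray = cv2.blur(gray, (3, 3))                                 # mean filtering
    lut = ((np.arange(256) / 255.0) ** (1.0 / gamma) * 255).astype(np.uint8)
    gray = cv2.LUT(gray, lut)                                     # Gamma correction
    return cv2.equalizeHist(gray)                                 # histogram equalization
```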

Taking the id card image as an example, because the content and the structure of each first field in the id card image are different, and the position coordinate of each first field in the second image is predetermined, in this embodiment, the second image is cut to obtain at least one third image, each third image corresponds to one first field (for example, a field corresponding to an id card attribute such as sex, address, id card number, etc.), each third image is respectively input to a pre-trained tensor extraction model, and a corresponding field tensor is output.

Wherein the tensor (tensor) is a multidimensional form of data storage.

The cutting of the second image is performed by using a preset field cutting algorithm, where the field cutting algorithm includes:

creating a mapping relation between a preset field template image and a cutting frame in a database in advance;

respectively inputting the second image and a preset field template image into a sliding window model, and taking the preset field template image as a sliding window, wherein the size of the sliding window is consistent with that of the preset field template image;

traversing and searching the position of the first field in the second image by using the sliding window, calculating the similarity value between the preset field template image and the image area covered by the sliding window on the second image, and marking the image area with the maximum similarity value as a marked area; and

and finding a cutting frame corresponding to the preset field template image from the database according to the mapping relation, shifting the cutting frame to the right by a preset size relative to a marking area to serve as an area where the first field is located, and taking an image obtained by cutting the area where the first field is located as the third image.
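The sliding-window search described above is essentially template matching, so it can be sketched with OpenCV's matchTemplate. The crop-box mapping below is a hypothetical stand-in for the database record, and the offsets are illustrative:

```python
import cv2

# hypothetical mapping: field template image -> crop box offsets (dx, dy, width, height)
# relative to the marked area; in the embodiment this mapping lives in a database
FIELD_CROP_BOXES = {
    "id_number_label.png": (60, -4, 360, 44),
}

def cut_field(second_image, template_name):
    """Find the field label with a sliding window and cut the third image to its right."""
    template = cv2.imread(template_name, cv2.IMREAD_GRAYSCALE)
    gray = cv2.cvtColor(second_image, cv2.COLOR_BGR2GRAY)
    # slide the template over the second image and score every covered area
    scores = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(scores)        # top-left corner of the marked area
    dx, dy, w, h = FIELD_CROP_BOXES[template_name]
    # shift the crop box to the right of the marked area by the preset offset
    return second_image[y + dy : y + dy + h, x + dx : x + dx + w]
```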

In another embodiment, the preset field cutting algorithm may also employ a horizontal projection algorithm. The second image is binarized to form a black-and-white image, and a plurality of horizontal lines are constructed on the binarized image with the horizontal projection algorithm: a horizontal line that passes through black pixels (namely, pixels of a text field) is recorded as 1, and a horizontal line that passes through only white pixels (namely, a blank line) is recorded as 0. Finally, the position coordinates in the second image are determined according to the width and the length of each third image, and each third image is cut out.
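A rough sketch of this horizontal projection variant, assuming dark text on a light background; the Otsu binarization and the minimum row height are illustrative assumptions:

```python
import cv2

def split_rows_by_projection(second_image, min_row_height=8):
    """Cut the second image into text rows using a horizontal projection profile."""
    gray = cv2.cvtColor(second_image, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # 1 for a horizontal line that crosses text pixels, 0 for a blank line
    profile = (binary.sum(axis=1) > 0).astype(int)
    rows, start = [], None
    for y, has_text in enumerate(profile):
        if has_text and start is None:
            start = y                                  # a text block begins
        elif not has_text and start is not None:
            if y - start >= min_row_height:
                rows.append(second_image[start:y])     # cut one row region
            start = None
    return rows
```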

In this embodiment, the tensor extraction model is obtained by training a deep convolutional neural network model; specifically, the deep convolutional neural network is iteratively trained and optimized through back-propagation of gradients on a sample set composed of a preset number of third images.

The tensor extraction model adopts a 121-layer DenseNet as the main network to extract the image features of the third image, which has the advantages of a deep network and rich features. It removes the LSTM structure of a traditional recognition network (no context information needs to be extracted from the content of an identity card), so the number of network parameters is small, inference is fast, and parallel training on multiple CPUs or GPUs is supported. Because of the small number of parameters, a good training result can be achieved on a small training set (20,000 samples), with a recognition accuracy of 99%. The preset tensor extraction model comprises 1 convolutional layer, 1 pooling layer, 121 DenseNet layers, 1 batch normalization layer, 1 activation layer and 1 fully connected layer.

In this embodiment, a convolution kernel size of 3 × 3, 5 × 5, or 7 × 7 may be chosen.

The pooling layer uses a filter of size 3 × 3 to reduce the size of the image features, ignoring unimportant features (e.g., noise and interference) and preserving important features (e.g., the strokes of characters).

The advantage of DenseNet is that the dense connections between layers effectively prevent vanishing gradients. Adopting a 121-layer DenseNet allows high-level semantic image features to be extracted, and the extracted features are better suited to the character recognition task, which improves the accuracy of subsequent character recognition.

The batch normalization layer is used to shift the data into the active region of the activation function, preventing overfitting and accelerating network convergence. Without the batch normalization layer, the training accuracy is high but the test accuracy is low, i.e., overfitting occurs; after the batch normalization layer is applied, the test accuracy of the network improves markedly, giving a good anti-overfitting effect.

The activation layer introduces nonlinearity so that the network fits the data better. The ReLU function is adopted as the activation function because its structure is simple and it converges quickly.
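For illustration, a rough PyTorch sketch of such an architecture using torchvision's DenseNet-121 as the backbone is given below. The 1024-channel width comes from DenseNet-121; the global pooling and head dimensions are illustrative assumptions, and n_classes and max_len stand for the per-field n and w values discussed further down.

```python
import torch.nn as nn
from torchvision.models import densenet121

class TensorExtractionModel(nn.Module):
    """Convolution/pooling stem + 121-layer DenseNet + BN + ReLU + fully connected layer."""

    def __init__(self, n_classes, max_len):
        super().__init__()
        # densenet121().features already contains the stem convolution and pooling layers
        self.backbone = densenet121(weights=None).features
        self.bn = nn.BatchNorm2d(1024)                    # batch normalization layer
        self.act = nn.ReLU(inplace=True)                  # activation layer (ReLU)
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(1024, n_classes * max_len)    # fully connected layer

    def forward(self, x):
        feat = self.gap(self.act(self.bn(self.backbone(x)))).flatten(1)
        # second feature vector of length n_classes * max_len, split downstream into w parts
        return self.fc(feat)
```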

Further, the inputting each third image into a pre-trained tensor extraction model, and the outputting the corresponding field tensor includes:

extracting a first feature vector of the third image by using a convolution layer and a pooling layer of the tensor extraction model;

extracting a second feature vector of the third image based on the first feature vector and a DenseNet structure and a full connection layer of the tensor extraction model; and

and uniformly dividing the second feature vector into w parts, taking each divided feature vector as a row tensor, and sequentially stacking the row tensors to obtain an n × w probability distribution matrix as the field tensor, wherein n represents the number of character classes of the first field and w represents the maximum character length of the first field. Because the maximum character length differs between the fields of an identity card image, the output feature vector may not be evenly divisible; in that case an additional fully connected layer is inserted between the DenseNet layers and the final fully connected layer to obtain a feature vector whose length matches the maximum character length of each field, so that it can be evenly divided.
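The split-and-stack step that turns the second feature vector into the n × w field tensor might look as follows for a single sample; the column-wise softmax is an assumption consistent with the parsing step described later.

```python
import torch

def to_field_tensor(second_feature_vector, n_classes, max_len):
    """Evenly divide the vector into max_len row tensors and stack them into an n x w matrix."""
    rows = second_feature_vector.view(max_len, n_classes)   # w row tensors of length n
    matrix = torch.stack(list(rows), dim=1)                 # stack as columns -> shape (n, w)
    return matrix.softmax(dim=0)                            # each column is a distribution over n classes
```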

Taking the identity card as an example, because the content and structure of each field of the identity card are different, 9 different deep convolutional network models are constructed for the 10 fields to be identified, based on the same backbone network with different fully connected classification layers; the two fields of address and issuing authority share one model. These models differ in the values of n and w.

For the values of w: the maximum length of gender is 1; the maximum length of the identity card number is 18; the maximum length of nationality is 1 (the nationality is identified as a whole, not character by character); for the date of birth (year, month, day), the maximum length of the year is 4, the maximum length of the month is 1 (the month is identified as a whole, not digit by digit) and the maximum length of the day is 1 (identified in the same way as the month); the maximum length of the name is 5 (the last three positions may be blank); the maximum length of the address and of the issuing authority is 12, because at most 12 characters can be written in one line of the address or issuing authority on the identity card; and the maximum length of the valid date is 8. The valid-date field is special in that digits and symbols are mixed with Chinese characters, and two digits or symbols occupy the same width as one Chinese character, so the width of one Chinese character is used as the division unit, giving a maximum length of 8, i.e., the valid date contains at most 16 digits/symbols or 8 Chinese characters.

For the values of n: the number of classes in the classification layer of the models for name, address and issuing authority is the number of common Chinese characters plus one blank symbol; the number of classes for nationality is 57, since there are 57 nationality classes on the identity card, namely the 56 officially recognized ethnic groups plus the Chuanqing people; the number of classes for gender is 2, male and female; for the date of birth, the numbers of classes for year, month and day are 10, 12 and 31 respectively (10 Arabic numerals for the year digits, 12 months, and 31 days); the number of classes for the identity card number is 12, namely the 10 Arabic numerals plus the capital letter X and a blank symbol; and the number of classes for the valid date is 104, because the valid date takes one Chinese character or two digits/symbols as a classification unit, and there are 104 possible cases in total.
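Put together, the per-field w and n values listed above amount to a small configuration table, for example as below. The field keys are illustrative English names, and N_COMMON stands for the size of the common-Chinese-character set plus the blank symbol, which the description does not quantify.

```python
# w = maximum character length, n = number of character classes, as enumerated above
N_COMMON = None  # common Chinese characters + blank symbol (size not given in the description)

FIELD_CONFIG = {
    "name":              {"w": 5,  "n": N_COMMON},
    "gender":            {"w": 1,  "n": 2},
    "nationality":       {"w": 1,  "n": 57},
    "birth_year":        {"w": 4,  "n": 10},
    "birth_month":       {"w": 1,  "n": 12},
    "birth_day":         {"w": 1,  "n": 31},
    "id_number":         {"w": 18, "n": 12},
    "address":           {"w": 12, "n": N_COMMON},
    "issuing_authority": {"w": 12, "n": N_COMMON},  # shares one model with "address"
    "valid_date":        {"w": 8,  "n": 104},
}
```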

Such targeted model construction avoids some trivial, common errors and improves identification accuracy. For example, gender has only the two values male and female; a general character recognition model might output other recognition results (such as "another"), whereas the per-field model design of this scheme avoids that problem. Meanwhile, although the number of models is large, the processing speed is not affected in actual use once model loading and initialization are complete, because each model is small and the models can be processed in parallel.

During training of the tensor extraction model, an Adam optimizer is used with a learning rate of 0.0001. When the accuracy of the model on the test set exceeds 0.95, the learning rate is multiplied by 0.5 each round until it falls below 0.00001, after which it is no longer decayed. Training uses a cross-entropy loss function with the batch size set to 64 (the batch size is the number of samples fed to the network per training step); the forward pass produces a loss value, the network weights are updated through the back-propagation algorithm in the direction that reduces the loss, and training continues until the model converges.
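A hedged PyTorch sketch of that training schedule is shown below. The data loaders (batch size 64) and the `evaluate` helper (test-set accuracy) are assumed, and the logits are reshaped to (batch, n, w) so that the cross-entropy loss is applied per character position.

```python
import torch
import torch.nn as nn

def train_field_model(model, train_loader, test_loader, evaluate, n_classes, max_len,
                      epochs=50, device="cuda"):
    """Adam with lr 1e-4 and cross-entropy loss; halve the lr each round once test
    accuracy exceeds 0.95, but never below 1e-5."""
    model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        model.train()
        for images, labels in train_loader:           # labels: (batch, max_len) class indices
            images, labels = images.to(device), labels.to(device)
            logits = model(images).view(-1, n_classes, max_len)
            loss = criterion(logits, labels)           # cross-entropy loss
            optimizer.zero_grad()
            loss.backward()                            # back-propagate
            optimizer.step()                           # update weights to reduce the loss
        if evaluate(model, test_loader, device) > 0.95:
            for group in optimizer.param_groups:
                group["lr"] = max(group["lr"] * 0.5, 1e-5)
```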

The analyzing module 130 is configured to analyze the field tensor to obtain a second field, and use the second field as an identification result of the first field in the first image.

In this embodiment, the analyzing the field tensor to obtain the second field in the analyzing step uses a tensor analysis algorithm, where the tensor analysis algorithm includes:

traversing the probability value of each column in the probability distribution matrix corresponding to the field tensor, determining the position of the maximum value of the probability value in each column, and taking the character corresponding to the position as the field code corresponding to the column;

repeating the operation until the field tensor is completely analyzed to obtain the second field; and

and taking the second field as the identification result of the first field in the first image.

For example: in the n × w probability distribution matrix output by the tensor extraction model, the probabilities of the elements in each column sum to 1 (i.e., the probabilities of all elements in column j sum to 1). By traversing the probability values of each column of the probability distribution matrix corresponding to the field tensor, the position i of the maximum probability value in each column is determined, and the finally recognized character is obtained from the character-position mapping. For instance, for the birth-year field, the 10 digits 0-9 need to be recognized and the maximum length is 4, so a probability distribution matrix with 10 rows and 4 columns is obtained. For the 10 elements of column 0 (columns are zero-indexed by convention), the position of the maximum probability value is found; if the maximum is at the 2nd element, that position is mapped to the character it represents. The identification result of the first field in the first image is obtained in this way.
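The column-wise argmax parsing can be sketched as below; `charset` is a hypothetical sequence mapping row index to character, and the blank-symbol handling is an assumption.

```python
import numpy as np

def decode_field_tensor(prob_matrix, charset, blank=""):
    """Decode an n x w probability matrix column by column via the per-column maximum."""
    chars = []
    for col in range(prob_matrix.shape[1]):           # traverse each column (character position)
        i = int(np.argmax(prob_matrix[:, col]))       # row index of the maximum probability
        chars.append(charset[i])                      # map the position to its character
    return "".join(c for c in chars if c != blank)    # drop blank positions

# e.g. for the birth-year field: a 10 x 4 matrix decoded with charset "0123456789"
```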

The checking module 140 is configured to determine whether the first field is shielded by a watermark in the second image according to a preset watermark identification rule; if so, search the second image for a preset check field whose attribute is consistent with that of the first field, obtain the second field corresponding to the preset check field, and determine whether the second field corresponding to the first field shielded by the watermark is the same as the second field corresponding to the preset check field; and if so, determine that the identification result of the first field shielded by the watermark is correct.

In this embodiment, since the watermark on a certificate copy usually covers the identity card number, the gender, the date of birth or the address, the identification result of a first field shielded by the watermark (that is, its second field) needs to be verified to confirm its accuracy. The particularity of the identity card can be exploited for this: for example, the 7th to 14th digits of the identity card number are the 8-digit birth date, so they and the birth date field (year, month and day) on the card can be checked against each other; likewise, the 17th digit of the identity card number is a gender information bit, so it can be checked against the gender field on the card.

Based on this particularity of the identity card and the above examples, whether the second field corresponding to the first field shielded by the watermark is the same as the second field corresponding to the check field is calculated. If they are the same, the identification result of the first field shielded by the watermark is correct; if not, the identification result of the first field shielded by the watermark is corrected, taking the second field corresponding to the preset check field as the reference.
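A minimal sketch of these cross-checks follows. The digit positions follow the standard layout of the 18-digit resident identity card number (digits 7-14 are the birth date; the 17th digit is odd for male and even for female); the field names and string formats are illustrative.

```python
def birth_date_matches(id_number: str, recognized_birth_date: str) -> bool:
    """Digits 7-14 of the identity card number carry the birth date as YYYYMMDD."""
    return id_number[6:14] == recognized_birth_date

def gender_matches(id_number: str, recognized_gender: str) -> bool:
    """The 17th digit of the identity card number is odd for male and even for female."""
    expected = "male" if int(id_number[16]) % 2 == 1 else "female"
    return expected == recognized_gender

# If, say, the birth date field is shielded by the watermark, its recognized second field
# can be validated (and, when it differs, corrected) against these check fields.
```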

Further, the watermark identification rule includes:

inputting the second image and a pre-created watermark template image into a sliding window model respectively, and taking the watermark template image as a sliding window, wherein the size of the sliding window is consistent with that of the watermark template image; and

and traversing and searching the position of the watermark in the second image through the sliding window, calculating the similarity value between the watermark template image and the image area covered by the sliding window on the second image, and selecting the image area with the maximum similarity value as the area shielded by the watermark, namely the area containing the first field shielded by the watermark.

In addition, the invention also provides a certificate copy information identification method. Fig. 3 is a schematic method flow diagram of an embodiment of the identification method of the document copy information of the present invention. When the processor 12 of the server 1 executes the certificate copy information identification program 10 stored in the memory 11, the following steps of the certificate copy information identification method are implemented:

s110, receiving a first image containing a certificate copy uploaded by a client, and performing tilt correction on the first image according to a predetermined correction rule to obtain a second image.

In this embodiment, the first image may be an identity card image or another certificate image to be recognized. It may be an original image captured by the client, for example by its camera, or by another capture terminal that has shooting and data transmission functions, and it may have an inclination angle. If a first image with an inclination angle is input directly into the subsequent tensor extraction model, the accuracy of image identification may be reduced. Therefore, in this embodiment, before tensor extraction is performed on the first image, tilt correction is applied to it according to the predetermined correction rule to obtain the second image.

The correction rule comprises:

acquiring first straight line segments with a length smaller than or equal to a first preset length (for example, 0.3 cm) in the first image; specifically, the first straight line segments may be obtained with the probabilistic Hough transform;

determining all second straight line segments with inclination angles smaller than or equal to a first preset angle (for example, 5°) from the first straight line segments;

respectively calculating the difference value of the y coordinates of the central points of every two second straight-line segments, and classifying the second straight-line segments corresponding to the difference value smaller than or equal to a preset threshold (for example, 0.6cm) into one class;

fitting each type of second straight line segment by using a least square method to obtain a fitting straight line corresponding to each type of second straight line segment; and

and calculating the slope of each fitting straight line, and the median of all the slopes and the mean of the slopes, determining the smaller of the median and the mean as the slope of an inclined line segment in the first image, and adjusting the inclination angle of the first image according to the determined slope.

For example, three second straight line segments are obtained from the first image: a (length 0.1 cm, inclination angle 4°, center point y coordinate 0.1), b (length 0.2 cm, inclination angle 3°, center point y coordinate 0.2) and c (length 0.3 cm, inclination angle 2°, center point y coordinate 0.3). The pairwise differences between their center point y coordinates are all less than 0.6, so a, b and c fall into the same class. The best function match (namely the fitting straight line) for the second straight line segments of the same class is then found with the least square method by minimizing the sum of squared errors, and the inclination angle of the first image is adjusted according to the smaller of the median and the mean of the fitted slopes. The corrected second image helps to improve the accuracy of the subsequent tensor extraction model when extracting the tensor of the first field.

When the slope corresponding to the smaller value is positive, the corresponding inclination angle is acute or right; the angle corresponding to the slope is subtracted from 90 degrees to obtain the angle to be corrected, and the first image is rotated counterclockwise by the angle to be corrected to obtain the second image;

and when the slope corresponding to the smaller value is negative, the corresponding inclination angle is obtuse; 90 degrees is subtracted from the angle corresponding to the slope to obtain the angle to be corrected, and the first image is rotated clockwise by the angle to be corrected to obtain the second image.

S120, cutting the second image to obtain at least one third image containing a first field, inputting each third image into the pre-trained tensor extraction model, and outputting the field tensor corresponding to the first field.

To eliminate the influence on subsequent data processing caused by illumination reflection that the client may suffer during shooting, in this embodiment each third image needs to be preprocessed before being input into the tensor extraction model. The preprocessing includes Gaussian filtering, mean filtering, Gamma correction, histogram equalization and the like.
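A minimal sketch of this preprocessing, assuming OpenCV; the kernel sizes and the gamma value are illustrative assumptions rather than values specified by this embodiment.

```python
import cv2
import numpy as np

def preprocess(third_image_gray, gamma=1.2):
    img = cv2.GaussianBlur(third_image_gray, (3, 3), 0)   # Gaussian filtering
    img = cv2.blur(img, (3, 3))                           # mean filtering
    # Gamma correction via a lookup table to compensate for uneven illumination
    table = np.array([((i / 255.0) ** (1.0 / gamma)) * 255 for i in range(256)],
                     dtype=np.uint8)
    img = cv2.LUT(img, table)
    return cv2.equalizeHist(img)                          # histogram equalization
```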

Taking the ID card image as an example: because the content and structure of each first field in the ID card image are different, and the position coordinates of each first field in the second image are predetermined, in this embodiment the second image is cut to obtain at least one third image. Each third image corresponds to one first field (for example, a field corresponding to an ID card attribute such as sex, address or ID card number). Each third image is then input separately into the pre-trained tensor extraction model, and the corresponding field tensor is output.

Here, a tensor is a multidimensional form of data storage.

The cutting of the second image is performed by using a preset field cutting algorithm, where the field cutting algorithm includes:

creating a mapping relation between a preset field template image and a cutting frame in a database in advance;

respectively inputting the second image and a preset field template image into a sliding window model, and taking the preset field template image as a sliding window, wherein the size of the sliding window is consistent with that of the preset field template image;

traversing the second image with the sliding window to search for the position of the first field, calculating the similarity value between the preset field template image and the image area covered by the sliding window on the second image, and marking the image area with the maximum similarity value as the marked area; and

finding the cutting frame corresponding to the preset field template image from the database according to the mapping relation, shifting the cutting frame to the right of the marked area by a preset size to serve as the area where the first field is located, and taking the image obtained by cutting out this area as the third image.
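A minimal sketch of this field cutting algorithm, assuming OpenCV. cv2.matchTemplate effectively slides a window the size of the preset field template over the second image and scores the similarity of every covered region, matching the sliding-window description above; the CUT_FRAMES mapping is a hypothetical stand-in for the database mapping created in advance.

```python
import cv2

# Hypothetical mapping: template name -> (x_offset, width, height) of the cutting frame
CUT_FRAMES = {"address_label": (10, 420, 40)}

def cut_field(second_image_gray, template_gray, template_name):
    # Similarity of the template to every window position on the second image
    scores = cv2.matchTemplate(second_image_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(scores)      # position with maximum similarity
    x, y = max_loc                                # top-left corner of the marked area
    dx, w, h = CUT_FRAMES[template_name]
    th, tw = template_gray.shape[:2]
    # Shift the cutting frame to the right of the marked area by the preset size
    x0 = x + tw + dx
    return second_image_gray[y:y + h, x0:x0 + w]  # the third image containing the field
```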

In another embodiment, the preset field cutting algorithm may also be a horizontal projection algorithm. The second image is binarized to form a black-and-white image, and a plurality of horizontal lines are constructed on the binarized image. A horizontal line that passes through black pixel points (i.e. pixel points of a text field) is recorded as 1, and a horizontal line that passes through only white pixel points (i.e. pixel points of a blank line) is recorded as 0. Finally, the position coordinates in the second image are determined according to the width and length of each third image, and each third image is cut out.
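A minimal sketch of this horizontal projection alternative, assuming OpenCV and NumPy; Otsu binarization is used here as an illustrative choice of threshold.

```python
import cv2
import numpy as np

def split_rows(second_image_gray):
    # Binarize: text pixels become black (0), background white (255)
    _, bw = cv2.threshold(second_image_gray, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Mark each horizontal line 1 if it crosses any black (text) pixel, else 0
    row_marks = (np.min(bw, axis=1) == 0).astype(np.uint8)
    rows, start = [], None
    for y, mark in enumerate(row_marks):
        if mark and start is None:
            start = y
        elif not mark and start is not None:
            rows.append(second_image_gray[start:y, :])   # one text row = one third image
            start = None
    if start is not None:
        rows.append(second_image_gray[start:, :])
    return rows
```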

In this embodiment, the tensor extraction model is obtained by training a deep convolutional neural network model: based on a sample set composed of a preset number of third images, the deep convolutional neural network model is iteratively optimized by back-propagation of gradients to obtain the tensor extraction model.

The tensor extraction model adopts the 121-layer DenseNet as the backbone network to extract the image features of the third image, which gives a deep network and rich features. The LSTM structure of a traditional recognition network is removed (no context information needs to be extracted from the content of an identity card), so the network has few parameters, fast inference, and supports parallel training on multiple CPUs or GPUs. Because of the small number of parameters, a good training effect can be achieved on a small training set (20,000 samples), with a recognition accuracy of 99%. The preset tensor extraction model comprises 1 convolutional layer, 1 pooling layer, 121 DenseNet layers, 1 batch normalization layer, 1 activation layer and 1 fully connected layer.

In this embodiment, a convolution kernel size of 3 × 3, 5 × 5, or 7 × 7 may be chosen.

The pooling layer filter is sized 3 × 3 and is used to reduce the size of the image features, ignoring unimportant features (e.g. noise and interference) while preserving important ones (e.g. the strokes of characters).

The advantages of DenseNet are that the dense connections between layers effectively prevent vanishing gradients, and the 121-layer DenseNet can extract high-level semantic image features that are better suited to the character recognition task, thereby improving the precision of subsequent character recognition.

The batch normalization layer adjusts the data into the active region of the activation function, preventing overfitting and accelerating network convergence. Without the batch normalization layer, the training precision is high but the testing precision is low, i.e. overfitting occurs; after the batch normalization layer is applied, the testing precision of the network improves noticeably, giving a good anti-overfitting effect.

The activation layer introduces nonlinearity so that the network has better fitting capability; the ReLU function is adopted as the activation function, as it is structurally simple and converges quickly.

Further, inputting each third image into the pre-trained tensor extraction model and outputting the corresponding field tensor includes:

extracting a first feature vector of the third image by using a convolution layer and a pooling layer of the tensor extraction model;

extracting a second feature vector of the third image from the first feature vector using the DenseNet structure and the fully connected layer of the tensor extraction model; and

uniformly dividing the second feature vector into w parts, taking each divided feature vector as one column, and sequentially stacking these columns to obtain an n × w probability distribution matrix as the field tensor, where n represents the number of character classes of the first field and w represents the maximum character length of the first field. Because the maximum character length differs between fields of the identity card image, the output feature vector may not be uniformly divisible; in that case a fully connected layer is added between the DenseNet layers and the final fully connected layer to obtain a feature vector adapted to the maximum character length of each field, so that it can be divided uniformly.
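A minimal sketch of a field-specific model producing the n × w field tensor, using PyTorch with the torchvision DenseNet-121 backbone as a stand-in for the 121-layer DenseNet described above; the adaptive pooling and the Linear layer sized n × w approximate the extra fully connected layer that adapts the feature vector to w equal parts.

```python
import torch
import torch.nn as nn
from torchvision.models import densenet121

class FieldTensorModel(nn.Module):
    def __init__(self, n_classes, max_len):
        super().__init__()
        self.n, self.w = n_classes, max_len
        self.backbone = densenet121().features           # conv + pooling + dense blocks
        self.head = nn.Sequential(
            nn.BatchNorm2d(1024), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(1024, n_classes * max_len),         # feature vector of length n*w
        )

    def forward(self, third_image):
        feats = self.backbone(third_image)                # image features of the third image
        return self.head(feats).view(-1, self.n, self.w)  # raw class scores, shape (B, n, w)

    def field_tensor(self, third_image):
        # Probability distribution matrix: each of the w columns sums to 1 over n classes
        return torch.softmax(self.forward(third_image), dim=1)

# e.g. the birth-year field: n = 10 digits, w = 4 characters
model = FieldTensorModel(n_classes=10, max_len=4)
```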

Taking the identity card as an example: because the content and structure of each field differ, 9 different deep convolutional network models are constructed for the 10 fields to be recognized, based on the same backbone network but different fully connected classification layers tailored to the identity card, with the address and issuing authority fields sharing one model. These models differ in the values of n and w.

For the values of w: the maximum length of gender is 1; the maximum length of the ID number is 18; the maximum length of nationality is 1 (nationality is recognized as a whole rather than character by character); in the birth date (year, month, day), the maximum length of the year is 4, of the month 1 (months are recognized as a whole rather than digit by digit) and of the day 1 (as for the month); the maximum length of the name is 5 (the last three positions may be blank); the maximum length of the address and of the issuing authority is 12, because at most 12 characters can be written in one line of the address or issuing authority on the identity card; the maximum length of the effective date is 8. The effective date field is special: English digits and symbols are mixed with Chinese characters, and two English digits or symbols occupy the width of one Chinese character, so dividing by the width of one Chinese character gives a maximum length of 8, i.e. the effective date contains at most 16 English digits or symbols, or 8 Chinese characters.

For the values of n: the number of classes in the classification layer of the name, address and issuing authority recognition models is the number of common Chinese characters plus one blank symbol; the number of classes for nationality is 57, since there are 57 nationality categories on the identity card (the 56 recognized ethnic groups plus the Chuanqing people); the number of classes for gender is 2 (male and female); for the birth date, the numbers of classes for year, month and day are 10, 12 and 31 respectively (the 10 Arabic numerals for the year, 12 months and 31 dates); the number of classes for the ID number is 12, namely the 10 Arabic numerals plus the capital letter X and a blank symbol; the number of classes for the effective date is 104, because the effective date takes one Chinese character or two English digits/symbols as a classification unit and there are 104 possible cases in total.
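The per-field values of n and w listed above can be summarized as a configuration, sketched below; COMMON_CHINESE is a hypothetical placeholder, since the exact size of the common-Chinese-character set plus the blank symbol is not stated in this embodiment.

```python
COMMON_CHINESE = 3756  # hypothetical count of common Chinese characters + blank symbol

FIELD_CONFIG = {                 # field: (n = class count, w = maximum length)
    "name":              (COMMON_CHINESE, 5),
    "gender":            (2, 1),
    "nationality":       (57, 1),
    "birth_year":        (10, 4),
    "birth_month":       (12, 1),
    "birth_day":         (31, 1),
    "address":           (COMMON_CHINESE, 12),
    "id_number":         (12, 18),
    "issuing_authority": (COMMON_CHINESE, 12),
    "effective_date":    (104, 8),
}
```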

This targeted model construction avoids some low-level common-sense errors and improves recognition accuracy. For example, gender can only be male (男) or female (女); with a general-purpose character recognition model, recognition results other than male or female may occur (such as the visually similar character 另, "another"), whereas the model design of this scheme avoids the problem. Meanwhile, although the number of models is large, the processing speed in actual use is not affected once the models have been loaded and initialized, because each model is small and they can be processed in parallel.

During training of the tensor extraction model, the Adam optimizer is used with a learning rate of 0.0001; once the accuracy of the model on the test set exceeds 0.95, the learning rate is decayed by a factor of 0.5 each epoch until it falls below 0.00001, after which it is no longer decayed. Training uses a cross-entropy loss function and a batch size of 64 (the number of samples fed to the network in each batch). During training, loss values are produced by forward propagation, the network weights are updated in the direction that reduces the loss through the back-propagation algorithm, and training continues until the model converges.
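A minimal training sketch following these settings (Adam, initial learning rate 0.0001, 0.5× decay per epoch once test accuracy exceeds 0.95 until the rate falls below 0.00001, cross-entropy loss, batch size 64); the data loaders and the evaluate() helper are hypothetical placeholders.

```python
import torch
import torch.nn as nn

def train(model, train_loader, test_loader, epochs=50, device="cpu"):
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()
    for epoch in range(epochs):
        model.train()
        for images, labels in train_loader:               # batches of 64 samples
            images, labels = images.to(device), labels.to(device)
            logits = model(images)                        # (B, n, w) class scores
            loss = criterion(logits, labels)              # labels: (B, w) character indices
            optimizer.zero_grad()
            loss.backward()                               # back-propagate to update weights
            optimizer.step()
        accuracy = evaluate(model, test_loader)           # hypothetical evaluation helper
        if accuracy > 0.95 and optimizer.param_groups[0]["lr"] >= 1e-5:
            optimizer.param_groups[0]["lr"] *= 0.5        # decay by 0.5x per epoch
```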

S130, analyzing the field tensor to obtain a second field, and taking the second field as the recognition result of the first field in the first image.

In this embodiment, the analysis of the field tensor to obtain the second field uses a tensor analysis algorithm, which includes:

traversing the probability value of each column in the probability distribution matrix corresponding to the field tensor, determining the position of the maximum value of the probability value in each column, and taking the character corresponding to the position as the field code corresponding to the column;

repeating the operation until the field tensor is completely analyzed to obtain the second field; and

and taking the second field as the identification result of the first field in the first image.

For example, in the n × w probability distribution matrix output by the tensor extraction model, the probabilities of the elements in each column sum to 1 (i.e. the probabilities of all the elements in column j sum to 1). The position i of the maximum probability value in each column is determined by traversing the probability values of that column, and the recognized characters are then obtained from the character-position mapping. For instance, for the birth-year field, the 10 digits 0-9 must be recognized and the maximum length is 4, so a probability distribution matrix of 10 rows and 4 columns is obtained. For the 10 elements of column 0 (in computer indexing, the 1st column is by convention column 0), the position of the maximum probability value is found; if the maximum is the 2nd element, it is mapped to the character that the 2nd element represents, and in this way the recognition result of the first field in the first image is obtained.
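A minimal sketch of this tensor analysis algorithm, assuming NumPy; the character table for the birth-year field is used as the illustrative example, and the random matrix only stands in for a real field tensor.

```python
import numpy as np

def parse_field_tensor(field_tensor, charset):
    # field_tensor: (n, w) probability matrix whose columns each sum to 1
    second_field = []
    for col in range(field_tensor.shape[1]):
        i = int(np.argmax(field_tensor[:, col]))   # position of the maximum probability
        second_field.append(charset[i])            # map the position to its character
    return "".join(second_field)

# e.g. birth year: n = 10 digit classes, w = 4 columns
example_tensor = np.random.dirichlet(np.ones(10), size=4).T
year = parse_field_tensor(example_tensor, "0123456789")
```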

S140, judging whether the first field is shielded by the watermark in the second image according to a preset watermark identification rule, if so, searching a preset check field with the attribute consistent with that of the first field from the second image, acquiring a second field corresponding to the preset check field, judging whether the second field corresponding to the first field shielded by the watermark is the same as the second field corresponding to the preset check field, and if so, judging that the identification result of the first field shielded by the watermark is correct.

In this embodiment, since the watermark on a certificate copy usually covers the ID number, gender, date of birth or address, the recognition result of a first field masked by the watermark (i.e. its second field) needs to be verified in order to confirm its accuracy. The particularity of the identity card can be used for this: for example, the 7th to 14th digits of the ID number are the 8-digit birth date, so they and the birth date (year, month, day) printed on the card can verify each other; likewise, the 17th digit of the ID number is a gender information bit, which can be checked against the gender field on the card.

Based on this particularity of the identity card and the example above, it is determined whether the second field corresponding to the first field shielded by the watermark is the same as the second field corresponding to the check field. If so, the recognition result of the shielded first field is correct; if not, the recognition result is corrected using the second field corresponding to the preset check field as the reference.
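A minimal sketch of this cross-check. The field names and the assumption that the birth date is recognized as an eight-character YYYYMMDD string are illustrative, and the odd/even rule for the 17th (gender) digit is a general property of the Chinese ID number rather than something stated in this embodiment.

```python
def verify_masked_field(masked_field, results):
    """results: dict of recognized second fields, e.g. {"id_number": "...", ...}."""
    id_number = results["id_number"]
    if masked_field == "birth_date":
        expected = id_number[6:14]                    # YYYYMMDD from digits 7-14
        return results["birth_date"] == expected
    if masked_field == "gender":
        # 17th digit: odd = male, even = female
        expected = "male" if int(id_number[16]) % 2 == 1 else "female"
        return results["gender"] == expected
    return True  # no applicable check field for this attribute
```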

Further, the watermark position determination rule includes:

inputting the second image and a pre-created watermark template image into a sliding window model respectively, and taking the watermark template image as a sliding window, wherein the size of the sliding window is consistent with that of the watermark template image; and

and traversing the second image with the sliding window to search for the position of the watermark, calculating the similarity value between the watermark template image and the image area covered by the sliding window on the second image, and selecting the image area with the maximum similarity value as the area shielded by the watermark, i.e. the first field shielded by the watermark.
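A minimal sketch of this watermark position determination rule, which mirrors the sliding-window template matching used above for field cutting; the watermark template image is assumed to have been created in advance.

```python
import cv2

def locate_watermark(second_image_gray, watermark_template_gray):
    scores = cv2.matchTemplate(second_image_gray, watermark_template_gray,
                               cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)
    h, w = watermark_template_gray.shape[:2]
    # The region with the maximum similarity is taken as the area occluded by the watermark
    x, y = max_loc
    return (x, y, w, h), max_val
```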

In addition, the embodiment of the present invention further provides a computer-readable storage medium, which may be any one of or any combination of a hard disk, a multimedia card, an SD card, a flash memory card, an SMC, a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), a USB memory, and the like. The computer readable storage medium includes a certificate copy information identification program 10, and the specific implementation of the computer readable storage medium of the present invention is substantially the same as the specific implementation of the certificate copy information identification method and the server 1, and will not be described herein again.

It should be noted that the sequence of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, apparatus, article, or method that includes the element.

The above description of the embodiments of the present invention is for illustrative purposes only and does not represent the merits of the embodiments. Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical embodiments of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.

The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
