Method for making a human face caricature image, electronic device and storage medium

Document No.: 1720149    Publication date: 2019-12-17

Note: This technology, "Method for making a human face caricature image, electronic device and storage medium", was designed and created by 林忠亿 (Lin Zhongyi) on 2018-06-07. Its main content is as follows: a method for making a human face caricature image, an electronic device and a storage medium. The method comprises: extracting feature information from a face image; segmenting the face image into different feature parts according to the feature information; calculating the area of each segmented feature part; judging whether each segmented feature part is a protruding part; when segmented feature parts are protruding parts, arranging all the protruding parts in the face image in descending order of protrusion degree; and performing related processing on a preset number of the protruding parts ranked highest in protrusion degree. By implementing the invention, a vivid caricature image can be drawn from a face image.

1. A method for making a human face caricature image, applied to an electronic device, the method comprising the following steps:

extracting feature information from a face image;

segmenting the face image into different feature parts according to the feature information;

calculating an area of each segmented feature part;

judging whether each segmented feature part is a protruding part;

when a segmented feature part is a protruding part, arranging all the protruding parts in the face image in descending order of protrusion degree; and

performing related processing on a preset number of the protruding parts ranked highest in protrusion degree.

2. The method according to claim 1, wherein before extracting the feature information from the face image, the method further comprises:

acquiring a face image; and

preprocessing the face image.

3. The method for making a human face caricature image according to claim 2, wherein the preprocessing comprises normalization of geometric characteristics of the face image.

4. The method for making a human face caricature image according to claim 1 or 3, wherein the related processing includes enlargement processing, reduction processing, and distortion exaggeration processing.

5. The method for making a human face caricature image according to claim 1, wherein the area of the feature part is compared with an average value of areas of the corresponding feature part to judge whether the feature part is a protruding part;

when an absolute value of a difference between the area of the feature part and the average value is greater than or equal to a preset value, the feature part is taken as a protruding part; and

when the absolute value of the difference is smaller than the preset value, the feature part is taken as a non-protruding part.

6. The method for making a human face caricature image according to claim 5, wherein the average value of the areas of the feature parts is calculated from the large volume of face image data stored in a database of the electronic device.

7. The method for making a human face caricature image according to claim 1, wherein the protrusion degree is the ratio of the absolute value of the difference between the area of the feature part and the average value of the areas of the corresponding feature part to the area of the feature part.

8. The method for making a human face caricature image according to claim 1, further comprising:

outputting the processed face image and displaying it on a display screen of the electronic device.

9. An electronic device, comprising:

a processor; and

a memory having stored therein a plurality of program modules, the program modules being loaded by the processor to execute the method for making a human face caricature image according to any one of claims 1-8.

10. A storage medium having stored thereon at least one computer instruction, wherein the instruction is loaded and executed by a processor to perform the method for making a human face caricature image according to any one of claims 1-8.

Technical Field

The invention relates to the technical field of image processing, in particular to a method for making a human face cartoon image, an electronic device and a storage medium.

Background

In recent years, there have been many studies on automating artistic creation, including image-based processing methods in which a computer simulates the strokes of a painting tool (such as a pencil or an oil-painting brush) to generate pencil drawings, oil paintings, and the like. However, these methods apply no exaggeration; the resulting images tend to be realistic and lack the interest of a caricature.

Disclosure of Invention

In view of the above, there is a need for a method for making a human face caricature image, an electronic device and a storage medium that can draw a vivid caricature image from a face image.

A method for making a human face caricature image is applied to an electronic device and comprises the following steps:

extracting feature information from a face image;

segmenting the face image into different feature parts according to the feature information;

calculating an area of each segmented feature part;

judging whether each segmented feature part is a protruding part;

when a segmented feature part is a protruding part, arranging all the protruding parts in the face image in descending order of protrusion degree; and

performing related processing on a preset number of the protruding parts ranked highest in protrusion degree.

Further, before extracting the feature information in the face image, the method further comprises:

acquiring a face image;

and preprocessing the face image.

Further, the preprocessing includes normalization processing of geometric characteristics of the face image.

Further, the related processing includes enlargement processing, reduction processing, and distortion exaggeration processing.

Further, the area of the feature part is compared with the average value of the areas of the corresponding feature part to judge whether the feature part is a protruding part;

when the absolute value of the difference between the area of the feature part and the average value is greater than or equal to a preset value, the feature part is taken as a protruding part; and

when the absolute value of the difference is smaller than the preset value, the feature part is taken as a non-protruding part.

Further, the average value of the areas of the feature parts is calculated from the large volume of face image data stored in the database of the electronic device.

Further, the protrusion degree refers to a ratio of an absolute value of a difference between the area of the feature and an average value of the areas of the corresponding features to the area of the feature.

Further, the method further comprises:

outputting the processed face image and displaying it on a display screen of the electronic device.

An electronic device, the electronic device comprising:

A processor; and

A memory storing a plurality of program modules, the program modules being loaded by the processor to execute the above method for making a human face caricature image.

A storage medium having stored thereon at least one computer instruction, the instruction being loaded and executed by a processor to perform the method for making a human face caricature image as described above.

Compared with the prior art, the method for making a human face caricature image, the electronic device and the storage medium provided by the invention compare the area of each feature part in the face image with the average value of the areas of the corresponding part to determine whether the part is a protruding part. When a part is a protruding part, the protruding parts are arranged in descending order of protrusion degree, related processing is performed on a preset number of the protruding parts ranked highest in protrusion degree, and the processed face image is output to obtain the human face caricature image. A vivid and interesting caricature image can thus be drawn from the face image.

Drawings

In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.

Fig. 1 is an application environment diagram of a preferred embodiment of the system for making a human face cartoon image of the invention.

Fig. 2 is a functional block diagram of a preferred embodiment of the system for making a human face cartoon image of the invention.

FIG. 3 is a flowchart of a preferred embodiment of the method for making a cartoon image of a human face according to the present invention.

Fig. 4 is a schematic view of a face image.

Description of the main elements

Electronic device 1

Display screen 11

Network unit 12

Memory 13

Processor 14

Database 15

Human face cartoon image making system 10

Acquisition module 101

Segmentation module 102

Computing module 103

Comparison module 104

Processing module 105

Display module 106

The following detailed description will further illustrate the invention in conjunction with the above-described figures.

Detailed Description

In order that the above objects, features and advantages of the present invention can be more clearly understood, a detailed description of the present invention will be given below with reference to the accompanying drawings and specific embodiments. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.

In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention, and the described embodiments are merely a subset of the embodiments of the present invention, rather than a complete embodiment. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.

Referring to fig. 1, an application environment of a system 10 for making a human face caricature image according to an embodiment of the present invention is shown. The system 10 is applied to the electronic device 1. The electronic device 1 includes, but is not limited to, a display screen 11, a network unit 12, a memory 13, and a processor 14. The above elements are electrically connected with each other. It should be understood that the present embodiment only briefly illustrates the structure of the electronic device 1; although not shown, the electronic device 1 may also include other components for implementing its functions, such as circuitry, I/O interfaces, a battery, an operating system, and the like. In the present embodiment, the electronic device 1 may be, but is not limited to, a smartphone, a tablet PC, a desktop computer, or a kiosk.

In the present embodiment, the display screen 11 may have a touch function and may be, for example, a Liquid Crystal Display (LCD) screen or an Organic Light-Emitting Diode (OLED) screen. The display screen 11 is used for displaying content such as pictures.

In the present embodiment, the memory 13 is used for storing software programs and data installed in the electronic device 1. The memory 13 may be an internal storage unit of the electronic device 1, such as a hard disk or internal memory of the electronic device 1. In other embodiments, the memory 13 may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-Time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disk storage, magnetic disk storage, tape storage, or any other computer-readable medium that can be used to carry or store data. In the present embodiment, the memory 13 stores the system 10 for making a human face caricature image. The system 10 may compare the area of each feature part in the face image with the average value of the areas of the corresponding part to determine whether the part is a protruding part. When a part is a protruding part, the system 10 arranges the protruding parts in descending order of protrusion degree, performs related processing on a preset number of the protruding parts ranked highest in protrusion degree, and outputs the processed face image to obtain the human face caricature image.

In this embodiment, the processor 14 may be composed of an integrated circuit, for example a single packaged integrated circuit, or of a plurality of integrated circuits packaged with the same function or different functions, and may include one or more Central Processing Units (CPUs), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and the like. The processor 14 is the control unit of the electronic device 1; it connects the various components of the electronic device 1 through various interfaces and lines, and performs the various functions of the electronic device 1 and processes its data, such as the function of making a human face caricature image, by running or executing programs or modules stored in the memory 13 and calling data stored in the memory 13.

In this embodiment, the electronic device 1 further includes a database 15. The database 15 stores a large amount of face image data, from which an average value of the area of each part in a face image can be calculated. Since faces differ in absolute position and size across images, a standardized alignment operation is required for the large number of face images stored in the database 15. The standardized alignment operation means that each face image is translated, scaled and rotated appropriately so that all face images are as consistent as possible in size and orientation.

The calculation of the area of each part in a face image is described in detail below. In this scheme, the average area of a given part is the ratio of the sum of that part's areas over all stored face image data to the total number of face images. For example, if one million face images are stored in the database 15, the area of the eyes in each face image is calculated first, and the average eye area is the sum of the eye areas of the one million images divided by one million.
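As a rough illustration of this average-area calculation, the following Python sketch assumes the per-part areas have already been computed and loaded from the database 15 into memory; the record layout and field names are assumptions made for the example, not part of the patent.

```python
# A minimal sketch of the average-area computation described above.
# It assumes each stored face is represented as a dict mapping
# part name -> area; the names and values below are illustrative.
from collections import defaultdict

def average_part_areas(face_records):
    """Return {part_name: mean_area} over all stored face images."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for record in face_records:
        for part, area in record.items():
            totals[part] += area
            counts[part] += 1
    return {part: totals[part] / counts[part] for part in totals}

# Example: in practice one million records would be streamed from the
# database; two toy records illustrate sum(areas) / number_of_images.
records = [{"eyes": 520.0, "nose": 310.0}, {"eyes": 480.0, "nose": 290.0}]
print(average_part_areas(records))  # {'eyes': 500.0, 'nose': 300.0}
```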

Referring to fig. 2, the system 10 for making a human face caricature image may be divided into one or more modules, which are stored in the memory 13 and configured to be executed by one or more processors (in this embodiment, the processor 14) to implement the present invention. For example, the system 10 is divided into an acquisition module 101, a segmentation module 102, a calculation module 103, a comparison module 104, a processing module 105 and a display module 106. The modules referred to in the present invention are program segments that perform a specific function and are better suited than whole programs for describing the execution of software in the electronic device 1; the detailed functions of each module are described later with reference to the flowchart of fig. 3.

The acquiring module 101 is configured to acquire a face image.

In this embodiment, the acquiring module 101 may acquire a face image from a video recorded by a camera (not shown in the figure). When the video is processed, a face detection algorithm is used to examine each frame to determine whether a face appears in it. When a face appears in a frame, the acquiring module 101 may store that frame to obtain the face image. The acquiring module 101 may also acquire a face image from a still image captured by the camera. In this embodiment, the acquiring module 101 detects faces in both the video frames and the captured images by using a face detection algorithm.

The face detection algorithm may be one or more of the following algorithms in combination: a template-based face detection method, an artificial neural network-based face detection method, a model-based face detection method, a skin color-based face detection method, or a feature sub-face-based face detection method, etc.
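For illustration only, the following sketch shows how such frame-by-frame detection could be done with OpenCV's bundled Haar cascade (a template-style detector); the patent does not mandate any specific detector or library.

```python
# One possible realization of the frame-by-frame face detection described
# above, using OpenCV's bundled Haar cascade. Frames in which a face is
# found can be stored as the face image to be processed further.
import cv2

def frames_with_faces(video_path):
    """Yield (frame, face_boxes) for every video frame containing a face."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    capture = cv2.VideoCapture(video_path)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            yield frame, faces
    capture.release()
```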

The obtaining module 101 is further configured to pre-process the face image. In this embodiment, the preprocessing includes normalization processing of geometric characteristics of the face image. The normalization process of the geometric characteristics can normalize the face image to the same position, angle and size. Since the distance between the two eyes of a person is substantially the same for most people, the positions of the two eyes are usually used as the basis for geometric normalization of the face image.

Suppose the positions of the two eyes in the face image are El and Er respectively (as shown in fig. 4). Geometric normalization of the face image can then be achieved by the following steps (a code sketch follows these steps):

a) Rotate the face image so that the line connecting El and Er is horizontal. This ensures the consistency of the face orientation and reflects the rotation invariance of the face in the image plane;

b) Crop the face image according to a certain proportion. For example, let point O in the figure be the midpoint of the segment connecting El and Er, and let d be the distance between El and Er; by cropping a 2d × 2d region, point O can be fixed at (0.5d, d). This ensures the consistency of the face position and reflects the translation invariance of the face in the image plane;

c) Scale the cropped image down or up to obtain a standard image of uniform size. For example, if the prescribed image size is 128 × 128 pixels, i.e. half the image side corresponds to a fixed length of 64 pixels, the scaling factor is β = 2d/128. This ensures the consistency of the face size and reflects the scale invariance of the face in the image plane.
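A minimal sketch of steps a) to c) is given below, assuming the eye centers El and Er are already known (for example from a landmark detector) and following the 2d × 2d crop with O at (0.5d, d) and the 128 × 128 output size used in the example above; boundary handling is simplified.

```python
# Geometric normalization of a face image: rotate, crop 2d x 2d, rescale.
import numpy as np
import cv2

def normalize_face(image, el, er, out_size=128):
    el, er = np.asarray(el, dtype=float), np.asarray(er, dtype=float)
    # a) rotate so that the El-Er line becomes horizontal
    angle = float(np.degrees(np.arctan2(er[1] - el[1], er[0] - el[0])))
    cx, cy = (el + er) / 2.0                       # midpoint O of El-Er
    rot = cv2.getRotationMatrix2D((float(cx), float(cy)), angle, 1.0)
    rotated = cv2.warpAffine(image, rot, (image.shape[1], image.shape[0]))
    # b) crop a 2d x 2d window so that O sits at (0.5d, d), as in the text
    d = float(np.linalg.norm(er - el))
    x0, y0 = int(round(cx - 0.5 * d)), int(round(cy - d))
    crop = rotated[max(y0, 0):y0 + int(2 * d), max(x0, 0):x0 + int(2 * d)]
    # c) scale the crop to the prescribed size (beta = 2d / out_size)
    return cv2.resize(crop, (out_size, out_size))
```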

The obtaining module 101 may further extract feature information in the preprocessed face image.

In this embodiment, the obtaining module 101 extracts feature information in the preprocessed face image by using a face feature extraction algorithm. The characteristic information comprises information of eyes, nose, mouth, eyebrows, cheeks and the like.

In other embodiments, the characteristic information further includes information such as the cheekbones, the philtrum, moles, and the ears.

In this embodiment, the face feature extraction algorithm may be one or a combination of the following algorithms: gabor features, Histogram of Oriented Gradient (HOG), Local Binary Patterns (LBP), Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Independent Component Analysis (ICA), and the like.
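As one concrete instance of the listed descriptors, the sketch below computes HOG features of the normalized face with scikit-image; any of the other listed algorithms could be substituted, and this particular choice is only an assumption for illustration.

```python
# HOG feature extraction on the normalized 128 x 128 face image.
import cv2
from skimage.feature import hog

def extract_hog_features(normalized_face_bgr):
    gray = cv2.cvtColor(normalized_face_bgr, cv2.COLOR_BGR2GRAY)
    return hog(gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)
```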

In the present embodiment, the face detection algorithm and the face feature extraction algorithm are not limited to those listed above, and any algorithm suitable for detecting a face region and any algorithm for extracting face feature information may be applied thereto. In addition, the face detection algorithm and the face feature extraction algorithm in this embodiment are both prior art, and are not described in detail herein.

The segmentation module 102 is configured to segment the preprocessed face image into different feature parts according to the feature information. In this embodiment, the segmentation module 102 may segment the preprocessed face image into an eye part, a nose part, a mouth part, an eyebrow part, a cheek part, a cheekbone part, an ear part, and the like. The preprocessed face image may be segmented interactively; it may also be segmented in stages, that is, an initial segmentation followed by local refinement of the boundaries; or it may be segmented using an edge detection method.
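One possible way to realize this segmentation, assuming per-part landmark polygons are available (for example from a 68-point landmark model), is to rasterize each polygon into a binary mask; the part names and inputs below are illustrative assumptions, not the patent's prescribed method.

```python
# Split the normalized face into named part masks from landmark polygons.
import numpy as np
import cv2

def build_part_masks(image_shape, part_polygons):
    """part_polygons: {name: [(x, y), ...]} -> {name: uint8 mask}"""
    h, w = image_shape[:2]
    masks = {}
    for name, points in part_polygons.items():
        mask = np.zeros((h, w), dtype=np.uint8)
        cv2.fillPoly(mask, [np.asarray(points, dtype=np.int32)], 255)
        masks[name] = mask
    return masks
```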

The calculation module 103 calculates the area of each segmented feature part. In this embodiment, the calculation module 103 may count the pixels of the feature part in the preprocessed face image and then convert the count according to the corresponding scale calibration to obtain the area of the feature part.
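A sketch of this area computation: count the non-zero pixels of a part's mask and convert the count through a scale-calibration factor; the calibration value is an assumed input, not specified by the patent.

```python
# Area of a segmented part: pixel count times square units per pixel.
import cv2

def part_area(mask, units_per_pixel=1.0):
    pixel_count = cv2.countNonZero(mask)
    return pixel_count * units_per_pixel
```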

The comparison module 104 compares the area of the feature with the average value of the areas of the feature to determine whether the feature is a protruding portion.

In the present embodiment, the average value of the areas of the characteristic portions is stored in the database 15 in advance.

When the absolute value of the difference between the area of the feature portion and the average value of the areas of the feature portion is greater than or equal to a preset value, the comparison module 104 takes the feature portion as a protruding portion; when the absolute value of the difference between the area of the feature portion and the average value of the areas of the feature portion is smaller than the preset value, the comparison module 104 takes the feature portion as a non-protruding portion.

For example, when the absolute value of the difference between the area of the segmented eye portion and the average value of the areas of the eyes stored in the database 15 is greater than or equal to the preset value, the comparison module 104 takes the segmented eye portion as the salient portion.
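The comparison rule can be expressed in a few lines; the threshold value used below is purely illustrative.

```python
# A part is "protruding" when |area - average_area| reaches the preset value.
def is_protruding(area, average_area, preset_value):
    return abs(area - average_area) >= preset_value

# Example matching the eye case described above (values are made up).
print(is_protruding(area=620.0, average_area=500.0, preset_value=100.0))  # True
```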

The processing module 105 arranges the protruding parts in descending order of protrusion degree and selects a preset number of the protruding parts ranked highest.

In the present embodiment, the protrusion degree refers to a ratio of an absolute value of a difference between the area of the feature and an average value of the areas of the corresponding features to the area of the feature.
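The protrusion degree and the descending-order selection can be sketched as follows; the data layout is an assumption made for the example.

```python
# Protrusion degree as defined above (|area - average| / area), plus the
# descending sort and top-N selection performed by the processing module.
def protrusion_degree(area, average_area):
    return abs(area - average_area) / area

def top_protruding_parts(parts, preset_count=2):
    """parts: {name: (area, average_area)} restricted to protruding parts."""
    ranked = sorted(parts.items(),
                    key=lambda kv: protrusion_degree(*kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:preset_count]]
```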

The processing module 105 is further configured to perform related processing on the preset number of protruding parts ranked highest in protrusion degree, and to leave the other parts unprocessed.

In the present embodiment, the related processing includes enlargement processing, reduction processing, distortion exaggeration processing, and the like.
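As one simple, assumed form of the enlargement processing, the sketch below cuts out a part's bounding box, scales it up, and pastes it back centered on the same spot; a full caricature renderer would additionally blend edges and apply the reduction and distortion-exaggeration variants.

```python
# Naive enlargement of one part region (bounding box given as x, y, w, h).
import cv2

def enlarge_part(image, box, factor=1.5):
    x, y, w, h = box
    patch = image[y:y + h, x:x + w]
    big = cv2.resize(patch, (int(w * factor), int(h * factor)))
    bh, bw = big.shape[:2]
    cx, cy = x + w // 2, y + h // 2
    x0, y0 = max(cx - bw // 2, 0), max(cy - bh // 2, 0)
    x1, y1 = min(x0 + bw, image.shape[1]), min(y0 + bh, image.shape[0])
    image[y0:y1, x0:x1] = big[:y1 - y0, :x1 - x0]
    return image
```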

The display module 106 outputs the processed face image and displays the face image on the display screen 11.

As shown in fig. 3, a flowchart of a method for making a cartoon image of a human face according to a preferred embodiment of the present invention is shown. The order of the steps in the flow chart may be changed, and some steps may be omitted or combined according to different requirements.

In step S31, the obtaining module 101 is configured to obtain a face image.

In this embodiment, the acquiring module 101 may acquire a face image from a video recorded by a camera (not shown in the figure). When the video is processed, a face detection algorithm is used to examine each frame to determine whether a face appears in it. When a face appears in a frame, the acquiring module 101 may store that frame to obtain the face image. The acquiring module 101 may also acquire a face image from a still image captured by the camera. In this embodiment, the acquiring module 101 detects faces in both the video frames and the captured images by using a face detection algorithm.

The face detection algorithm may be one or more of the following algorithms in combination: a template-based face detection method, an artificial neural network-based face detection method, a model-based face detection method, a skin color-based face detection method, or a feature sub-face-based face detection method, etc.

In step S32, the obtaining module 101 is further configured to pre-process the face image. In this embodiment, the preprocessing includes normalization processing of geometric characteristics of the face image.

Suppose the positions of the two eyes in the face image are El and Er respectively (as shown in fig. 4). Geometric normalization of the face image can then be achieved by the following steps:

a) Rotate the face image so that the line connecting El and Er is horizontal. This ensures the consistency of the face orientation and reflects the rotation invariance of the face in the image plane;

b) Crop the face image according to a certain proportion. For example, let point O in the figure be the midpoint of the segment connecting El and Er, and let d be the distance between El and Er; by cropping a 2d × 2d region, point O can be fixed at (0.5d, d). This ensures the consistency of the face position and reflects the translation invariance of the face in the image plane;

c) Scale the cropped image down or up to obtain a standard image of uniform size. For example, if the prescribed image size is 128 × 128 pixels, i.e. half the image side corresponds to a fixed length of 64 pixels, the scaling factor is β = 2d/128. This ensures the consistency of the face size and reflects the scale invariance of the face in the image plane.

In step S33, the obtaining module 101 extracts feature information in the preprocessed face image.

In this embodiment, the obtaining module 101 extracts feature information in the preprocessed face image by using a face feature extraction algorithm. The characteristic information comprises information of eyes, nose, mouth, eyebrows, cheeks and the like.

In other embodiments, the characteristic information further includes information such as the cheekbones, the philtrum, moles, and the ears.

In this embodiment, the face feature extraction algorithm may be one or a combination of the following algorithms: Gabor features, Histogram of Oriented Gradient (HOG), Local Binary Patterns (LBP), Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Independent Component Analysis (ICA), and the like.

In step S34, the segmentation module 102 segments the preprocessed face image into different feature parts according to the feature information. In this embodiment, the segmentation module 102 may segment the preprocessed face image into an eye part, a nose part, a mouth part, an eyebrow part, a cheek part, a cheekbone part, an ear part, and the like. The preprocessed face image may be segmented interactively; it may also be segmented in stages, that is, an initial segmentation followed by local refinement of the boundaries; or it may be segmented using an edge detection method.

In step S35, the calculation module 103 calculates the area of each segmented feature part. In this embodiment, the calculation module 103 may count the pixels of the feature part in the preprocessed face image and then convert the count according to the corresponding scale calibration to obtain the area of the feature part.

In step S36, the comparison module 104 compares the area of the feature part with the average value of the areas of the corresponding feature part to determine whether the feature part is a protruding part.

When the absolute value of the difference between the area of the feature portion and the average value of the areas of the feature portion is greater than or equal to a preset value, the comparing module 104 takes the feature portion as a protruding portion, and the process proceeds to step S37; when the absolute value of the difference between the area of the feature portion and the average value of the areas of the feature portions is smaller than the preset value, the comparison module 104 takes the feature portion as a non-protruding portion, and the process is ended.

In step S37, the processing module 105 arranges the protruding parts in descending order of protrusion degree.

In this embodiment, the processing module 105 further selects a preset number of the protruding parts ranked highest in protrusion degree, for example, the two most protruding parts.

The degree of protrusion refers to the ratio of the absolute value of the difference between the area of the feature and the average of the areas of its corresponding features to the area of the feature.

In step S38, the processing module 105 performs related processing on the preset number of protruding parts ranked highest in protrusion degree.

In the present embodiment, the related processing includes enlargement processing, reduction processing, distortion exaggeration processing, and the like.

In step S39, the display module 106 outputs the processed face image and displays it on the display screen 11.

Through steps S31 to S39, the area of each feature part in the face image is compared with the average area of the corresponding part to determine whether the part is a protruding part. When a part is a protruding part, the protruding parts are arranged in descending order of protrusion degree, related processing is performed on a preset number of the protruding parts ranked highest, and the processed face image is output to obtain the human face caricature image.
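Putting the pieces together, the following sketch strings steps S31 to S39 into one pipeline, reusing the hypothetical helper functions sketched earlier in this description (all of them illustrative, not the mandated implementation); the part polygons are assumed to be given in the coordinates of the normalized face.

```python
# End-to-end sketch of steps S31-S39, composed from the earlier sketches
# (normalize_face, build_part_masks, part_area, is_protruding,
#  top_protruding_parts, enlarge_part). Inputs other than the frame are
# assumptions for illustration.
import cv2

def make_caricature(frame, eye_positions, part_polygons, average_areas,
                    preset_value=100.0, preset_count=2, factor=1.5):
    el, er = eye_positions
    face = normalize_face(frame, el, er)                              # S32
    masks = build_part_masks(face.shape, part_polygons)               # S34
    areas = {name: part_area(mask) for name, mask in masks.items()}   # S35
    protruding = {name: (areas[name], average_areas[name])            # S36
                  for name in areas
                  if is_protruding(areas[name], average_areas[name],
                                   preset_value)}
    for name in top_protruding_parts(protruding, preset_count):       # S37-S38
        face = enlarge_part(face, cv2.boundingRect(masks[name]), factor)
    return face                                                       # S39: output for display
```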

In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.

It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from its spirit or essential attributes. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means through software or hardware. The terms first, second, etc. are used to denote names and do not imply any particular order.

Although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the spirit and scope of the invention.
