Image processing method, image processing device, electronic equipment and storage medium

Document No.: 73232    Publication date: 2021-10-01

Reading note: This technology, "Image processing method, image processing device, electronic equipment and storage medium", was designed and created by 李卫星 on 2021-08-16. Its main content is as follows: the application discloses an image processing method, an image processing apparatus, an electronic device, and a storage medium, belonging to the technical field of image processing. The method comprises: acquiring a first image frame and a second image frame collected by a camera; performing image processing on a first face image in the first image frame through a first image processing parameter, and performing image processing on a second face image in the second image frame through a second image processing parameter; and carrying out image synthesis on the image-processed first and second image frames to obtain a target image frame. The first face image is the face image of a first person object corresponding to a first gender, and the second face image is the face image of a second person object corresponding to a second gender. By applying image processing parameters of different genders to the face images of the corresponding genders in different image frames, image-processing distortion of the image frames is avoided, and the user's satisfaction with the target image frame is thereby improved.

1. An image processing method, characterized in that the method comprises:

acquiring a first image frame and a second image frame acquired by a camera;

performing image processing on a first face image in the first image frame by using a first image processing parameter, and performing image processing on a second face image in the second image frame by using a second image processing parameter;

carrying out image synthesis on the first image frame and the second image frame after image processing to obtain a target image frame;

wherein the first facial image is a facial image of a first human subject, the first human subject is of a first gender, the second facial image is a facial image of a second human subject, the second human subject is of a second gender, and the first gender and the second gender are different.

2. The method of claim 1, wherein the acquiring the first image frame and the second image frame captured by the camera comprises:

acquiring the first image frame and the second image frame captured by the camera in a case where a performance parameter of the electronic device satisfies a first preset condition.

3. The method of claim 2, further comprising:

determining an image processing weight of each human object in the first image frame in a case where the performance parameter of the electronic device does not satisfy the first preset condition, wherein the image processing weight indicates the weight given to the face image of the human object as an image processing object;

calculating a target image processing parameter based on a preset image processing parameter and the image processing weight of each human object;

and performing image processing on the face image of each human object in the first image frame through the target image processing parameters to obtain a target image frame.

4. The method of claim 3, wherein determining the image processing weight for each human object in the first image frame comprises at least one of:

calculating a distance weight of each of the face images of the human subjects based on coordinates of the face image of each of the human subjects in the first image frame;

calculating an area weight of the face image of each of the human subjects based on the area of the face image of each of the human subjects;

and calculating the shooting angle weight of the face image of each human object based on the shooting angle data of the face image of each human object.

5. The method of claim 4, wherein calculating the distance weight for each of the human subject's facial images based on the coordinates of each of the human subject's facial images in the first image frame comprises:

calculating a first distance between coordinates of each of the human subjects' face images in the first image frame and center coordinates of the first image frame;

calculating a first difference value between a preset distance and the first distance;

calculating the ratio of the first difference value to the preset distance to obtain the distance weight of each human object;

the calculating an area weight of the face image of each of the human subjects based on the area of the face image of each of the human subjects includes:

calculating the ratio of the area of the face image of each human object to the area of the first image frame to obtain the area weight of each human object;

the calculating of the photographing angle weight of the face image of each of the human subjects based on the photographing angle data of the face image of each of the human subjects includes:

calculating, based on the eye coordinates in the photographing angle data of each human subject, a second distance between the coordinates of the left eye and the coordinates of the center of both eyes in the face image of each human subject, and a third distance between the coordinates of the right eye and the coordinates of the center of both eyes;

calculating the maximum of the second distance and the third distance, and a second difference between the second distance and the third distance;

and calculating a third difference value of the maximum value and the second difference value and a ratio of the third difference value to the maximum value to obtain the shooting angle weight.

6. The method of claim 3, wherein before performing image processing on the facial image of each of the human subjects in the first image frame by the target image processing parameters to obtain a target image frame, the method further comprises:

determining a target face image of a target human object satisfying a second preset condition in the first image frame;

wherein the second preset condition comprises at least one of: the area ratio of the face image of the human object in the first image frame is larger than a first preset threshold value, and the distance between the face image of the human object and the edge of the first image frame is larger than a second preset threshold value;

the performing image processing on the face image of each human object in the first image frame through the target image processing parameter to obtain a target image frame comprises:

and carrying out image processing on the target face image of the target human object through the target image processing parameters to obtain a target image frame.

7. The method according to claim 3, wherein the preset image processing parameters comprise a first image processing parameter and a second image processing parameter;

the calculating of the target image processing parameters based on preset image processing parameters and the image processing weight of each human object comprises:

calculating the sum of image processing weights of a first human object in the first image frame to obtain a first total weight;

calculating the sum of the image processing weights of the second human object in the first image frame to obtain a second total weight;

calculating an adjustment coefficient based on the first total weight and the second total weight;

calculating a first product of the adjustment coefficient and a first image processing sub-parameter to obtain a first target image processing parameter; wherein the first image processing sub-parameter is a parameter for performing image processing on first type facial features of the first gender and the second gender, the first type facial features are the same facial features of the first gender and the second gender, and image processing parameter values of the first type facial features are different;

calculating a second product of the adjustment coefficient and a second image processing sub-parameter to obtain a second target image processing parameter; the second image processing sub-parameter is a parameter for performing image processing on a second type of facial features of the first gender and the second gender, and the second type of facial features are different facial features of the first gender and the second gender;

determining a target image processing parameter based on the first target image processing parameter, the second target image processing parameter and a third image processing sub-parameter; the third image processing sub-parameter is a parameter for performing image processing on a third type of facial features of the first gender and the second gender, the third type of facial features are the same facial features of the first gender and the second gender, and image processing parameter values of the third type of facial features are the same.

8. An image processing apparatus, characterized in that the apparatus comprises:

the acquisition module is used for acquiring a first image frame and a second image frame acquired by a camera;

the first image processing module is used for carrying out image processing on a first face image in the first image frame through a first image processing parameter and carrying out image processing on a second face image in the second image frame through a second image processing parameter;

the synthesis module is used for carrying out image synthesis on the first image frame and the second image frame after image processing to obtain a target image frame;

wherein the first facial image is a facial image of a first human subject, the first human subject is of a first gender, the second facial image is a facial image of a second human subject, the second human subject is of a second gender, and the first gender and the second gender are different.

9. An electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, which program or instructions, when executed by the processor, implement the steps of the image processing method according to any one of claims 1 to 7.

10. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the image processing method according to any one of claims 1 to 7.

Technical Field

The present application belongs to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.

Background

With the development of electronic devices, most camera applications on electronic devices now provide an image processing function, which has become an important feature frequently used in users' daily lives.

At present, an image captured by a user often contains both female and male subjects, and when the facial images in such an image are processed, the facial image of a male subject is often distorted. As a result, the image obtained after image processing is often unsatisfactory to the user.

Disclosure of Invention

An object of the embodiments of the present application is to provide an image processing method, an image processing apparatus, an electronic device, and a storage medium, which can solve the problem that the image obtained after image processing is unsatisfactory to the user.

In order to solve the technical problem, the present application is implemented as follows:

in a first aspect, an embodiment of the present application provides an image processing method, including:

acquiring a first image frame and a second image frame acquired by a camera;

performing image processing on a first face image in the first image frame by using a first image processing parameter, and performing image processing on a second face image in the second image frame by using a second image processing parameter;

carrying out image synthesis on the first image frame and the second image frame after image processing to obtain a target image frame;

wherein the first facial image is a facial image of a first human subject, the first human subject is of a first gender, the second facial image is a facial image of a second human subject, the second human subject is of a second gender, and the first gender and the second gender are different.

In a second aspect, an embodiment of the present application provides an apparatus for image processing, including:

the acquisition module is used for acquiring a first image frame and a second image frame acquired by a camera;

the first image processing module is used for carrying out image processing on a first face image in the first image frame through a first image processing parameter and carrying out image processing on a second face image in the second image frame through a second image processing parameter;

the synthesis module is used for carrying out image synthesis on the first image frame and the second image frame after image processing to obtain a target image frame;

wherein the first facial image is a facial image of a first human subject, the first human subject is of a first gender, the second facial image is a facial image of a second human subject, the second human subject is of a second gender, and the first gender and the second gender are different.

In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.

In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.

In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the steps of the method according to the first aspect.

In the embodiment of the application, image processing is performed on the face image of the first human object of the first gender in the first image frame through the first image processing parameter, image processing is performed on the face image of the second human object of the second gender in the second image frame through the second image processing parameter, and the target image frame is obtained by synthesizing the two image-processed frames. Because the face images of human objects of different genders in different image frames are processed with different image processing parameters, distortion of either image frame is effectively avoided; a target image frame synthesized from two undistorted, processed frames is itself free of distortion, which improves the user's satisfaction with the target image frame.

Drawings

FIG. 1 is a schematic flowchart of an image processing method provided in an embodiment of the present application;

FIG. 2 is a schematic diagram of a first image frame provided by an embodiment of the present application;

FIG. 3 is a schematic diagram of a shooting angle of a face image of a human subject according to an embodiment of the present application;

FIG. 4 is a schematic diagram of another shooting angle of a face image of a human subject provided in an embodiment of the present application;

FIG. 5 is a schematic diagram of yet another shooting angle of a face image of a human subject provided in an embodiment of the present application;

FIG. 6 is a flowchart illustrating a scene embodiment of an image processing method according to an embodiment of the present application;

FIG. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;

FIG. 8 is a hardware configuration diagram of an electronic device implementing an embodiment of the present application.

Detailed Description

The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

The terms "first", "second", and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances, such that the embodiments of the application are capable of operating in sequences other than those illustrated or described herein. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates that the objects before and after it are in an "or" relationship.

The image processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.

FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application. The image processing method can be applied to an electronic device. As shown in FIG. 1, the image processing method may include:

Step 101: acquiring a first image frame and a second image frame acquired by a camera.

The first image frame and the second image frame may be adjacent image frames; the second image frame may be the image frame immediately preceding or immediately following the first image frame.

The first image frame and the second image frame may be two adjacent image frames collected by the camera during photo preview, or adjacent image frames collected by the camera during video preview. In other words, the electronic device may acquire the first image frame and the second image frame collected by the camera during photo preview or video preview; for convenience of description, the photo-preview scene is mainly taken as an example below.

Step 102: performing image processing on a first face image in the first image frame through the first image processing parameter, and performing image processing on a second face image in the second image frame through the second image processing parameter.

The first face image may be a face image of a first human subject, and the first human subject may be a human subject of a first gender. The second face image may be a face image of a second human subject, and the second human subject may be a human subject of a second gender, the first gender and the second gender being different. For example, the first gender may be female and the second gender male, or the first gender male and the second gender female.

The first image processing parameter may be an image processing parameter for faces of the first gender, with values best suited to the first gender; for example, it may be a set of female beauty parameters. Likewise, the second image processing parameter may be an image processing parameter for faces of the second gender, with values best suited to the second gender; for example, it may be a set of male beauty parameters. When the first image processing parameter is a female beauty parameter, it may include parameter categories such as whitening intensity, contrast intensity, and skin-smoothing intensity, and may further include a skin color parameter, a face-thinning parameter, an eye-size adjustment parameter, a tooth-whitening parameter, and a blush parameter. When the second image processing parameter is a male beauty parameter, it may additionally include a beard processing parameter on top of the female beauty parameters.

In step 102, after the first image frame and the second image frame collected by the camera are acquired, the first face image of the first human subject in the first image frame may be processed using the first image processing parameter, and the second face image of the second human subject in the second image frame may be processed using the second image processing parameter. In other words, the face images of human subjects of different genders are processed in different image frames, each with the image processing parameters corresponding to that gender. For example, if the first gender is female and the second gender is male, the female face image in the first image frame may be processed with the female parameters (the first image processing parameters), and the male face image in the second image frame may be processed with the male parameters (the second image processing parameters).
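As a minimal sketch of step 102, the per-frame, per-gender processing described above might look as follows; the parameter sets, the brightness-scaling stand-in for beautification, and the `(y0, y1, x0, x1)` box format are all illustrative assumptions, not the actual beauty pipeline:

```python
import numpy as np

# Hypothetical gender-specific parameter sets; the categories and values
# are illustrative stand-ins, not actual beauty parameters.
FEMALE_PARAMS = {"brightness": 1.10}
MALE_PARAMS = {"brightness": 1.03}

def process_face(frame, box, params):
    """Crude stand-in for face beautification: scale the brightness of
    the face region given by box = (y0, y1, x0, x1)."""
    out = frame.astype(np.float32)  # astype returns a copy; input untouched
    y0, y1, x0, x1 = box
    out[y0:y1, x0:x1] *= params["brightness"]
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)

def process_frames(frame1, female_boxes, frame2, male_boxes):
    """Step 102: each frame is processed with one gender's parameters only;
    female faces in the first frame, male faces in the second."""
    for box in female_boxes:
        frame1 = process_face(frame1, box, FEMALE_PARAMS)
    for box in male_boxes:
        frame2 = process_face(frame2, box, MALE_PARAMS)
    return frame1, frame2
```

Processing each gender in its own frame is what lets the later synthesis step pick the undistorted version of every face.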

It is understood that the parameter types and specific parameter values included in the first image processing parameter and the second image processing parameter can be set according to actual needs. The two parameter sets may include parameters of the same type with the same value; for example, a luminance value in the first image processing parameter may be the same as the luminance value in the second image processing parameter. Either set may also include a parameter type unique to it, namely an image processing parameter for a facial region specific to human subjects of one gender. For example, when the first image processing parameter is a female beauty parameter and the second image processing parameter is a male beauty parameter, the second image processing parameter may include a beard processing parameter that the female beauty parameter does not.

Step 103: performing image synthesis on the image-processed first image frame and second image frame to obtain a target image frame.

After the image processing is performed on the first face image in the first image frame and the image processing is performed on the second face image in the second image frame, the first image frame after the image processing and the second image frame after the image processing may be combined to obtain a target image frame in which the image processing is performed on the face image of the female by using the image processing parameters of the female and the image processing is performed on the face image of the male by using the image processing parameters of the male.

As an example, in order to improve the result of combining the image-processed first image frame and the image-processed second image frame, the processed first image frame may be taken as the reference, and the second face image in it may be replaced by the second face image from the processed second image frame to obtain the target image frame.

It can be understood that replacing the second face image in the first image frame with the image-processed second face image from the second image frame may be done by locating the second human subject with an existing edge-detection-based image segmentation algorithm and then performing the replacement. For example, the second human subject in the first image frame may be replaced with the second human subject from the second image frame using the Sobel edge algorithm, which is not described herein again.
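The synthesis step described above can be sketched as follows. The mask is supplied directly here, whereas a real implementation would derive it from an edge-based segmentation such as Sobel; the function name is illustrative:

```python
import numpy as np

def synthesize(frame1_processed, frame2_processed, male_face_mask):
    """Compose the target frame: take the (female-processed) first frame
    as the base and copy the male face pixels from the (male-processed)
    second frame wherever the boolean mask is set."""
    target = frame1_processed.copy()
    target[male_face_mask] = frame2_processed[male_face_mask]
    return target
```

Because only the masked pixels are copied, everything outside the male face region keeps the first frame's processing untouched.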

In the embodiment of the application, image processing is performed on the face image of the first human object of the first gender in the first image frame through the first image processing parameter, image processing is performed on the face image of the second human object of the second gender in the second image frame through the second image processing parameter, and the target image frame is obtained by synthesizing the two image-processed frames. Because the face images of human objects of different genders in different image frames are processed with different image processing parameters, distortion of either image frame is effectively avoided; a target image frame synthesized from two undistorted, processed frames is itself free of distortion, which improves the user's satisfaction with the target image frame.

In some embodiments, the specific implementation manner of step 101 may be as follows:

acquiring the first image frame and the second image frame captured by the camera in a case where a performance parameter of the electronic device satisfies a first preset condition.

The performance parameter of the electronic device may be the processor performance of the electronic device, which may be used to measure whether the electronic device is able to collect multiple image frames and, after image processing, combine different image frames into one. The first preset condition may be a preset processor-performance threshold: when the processor performance of the electronic device exceeds this threshold, the performance parameter of the electronic device is considered to satisfy the first preset condition. That is, when the electronic device acquires the first image frame, it is also able to collect other image frames besides the first image frame, for example the image frame immediately preceding or following it.

Before the electronic device acquires the first image frame and the second image frame, it may determine whether its performance parameter satisfies the first preset condition, and acquire the second image frame adjacent to the first image frame only when the condition is satisfied. In other words, the first image frame and the adjacent second image frame are acquired only if the processor performance of the electronic device is sufficient.

When the performance of the electronic device is weak, the device may be unable to collect multiple image frames during photo preview, or unable to apply different image processing parameters to the face images of human subjects of different genders in different image frames. Acquiring the first and second image frames only when the performance parameter of the electronic device satisfies the first preset condition therefore avoids the situation in which the two frames are acquired but cannot be processed separately with different parameters. This improves the success rate and efficiency of image processing, ensures the quality of the target image frame, and improves the user's satisfaction with it.
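The performance gate described above might be sketched as follows; the normalized score and the 0.7 threshold are assumptions, since the text does not specify how processor performance is quantified:

```python
def choose_processing_path(perf_score, threshold=0.7):
    """Gate on a hypothetical normalized processor-performance score:
    above the threshold, take the dual-frame path of steps 101-103;
    otherwise fall back to single-frame processing with a weighted
    target parameter (the low-performance branch described below)."""
    return "dual_frame" if perf_score > threshold else "single_frame_weighted"
```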

In some embodiments, the image processing method may further perform the steps of:

determining an image processing weight of each human object in a first image frame under the condition that the performance parameter of the electronic equipment does not meet a first preset condition;

calculating a target image processing parameter based on a preset image processing parameter and the image processing weight of each character object;

and performing image processing on the face image of each person object in the first image frame through the target image processing parameters to obtain a target image frame.

The image processing weight may indicate the weight given to the face image of a human subject as an image processing object. The image processing weight of each human subject may be determined from preset parameter values of that subject's face image, which may include at least one of: the coordinates of the face image in the first image frame, the area of the face image, and the shooting-angle data of the face image.

As one example, after determining whether the performance parameter of the electronic device satisfies the first preset condition, if the performance parameter of the electronic device does not satisfy the first preset condition, the image processing weight of each human object in the first image frame may be determined. For example, at least one of the coordinates of the face image of each human subject in the first image frame, the area of the face image of each human subject, and the photographing angle data of the face image of each human subject in the first image frame may be acquired, and the image processing weight of each human subject in the first image frame may be determined according to these preset parameter values.
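The three weights referred to above are given concrete formulas in claim 5; a minimal sketch of those formulas follows, with illustrative function names and a preset distance assumed to be supplied by the caller:

```python
import math

def distance_weight(face_center, frame_center, preset_distance):
    """Distance weight per claim 5: (preset_distance - d1) / preset_distance,
    where d1 is the distance from the face center to the frame center."""
    d1 = math.dist(face_center, frame_center)
    return (preset_distance - d1) / preset_distance

def area_weight(face_area, frame_area):
    """Area weight per claim 5: face-image area over frame area."""
    return face_area / frame_area

def shooting_angle_weight(left_eye, right_eye, eye_center):
    """Shooting-angle weight per claim 5: (max(d2, d3) - |d2 - d3|) / max(d2, d3),
    where d2, d3 are the left-/right-eye distances to the eye center.
    A frontal face (d2 == d3) scores 1.0; a turned face scores lower."""
    d2 = math.dist(left_eye, eye_center)
    d3 = math.dist(right_eye, eye_center)
    m = max(d2, d3)
    return (m - abs(d2 - d3)) / m
```

All three weights land in [0, 1] for faces inside the preset distance, which makes them directly comparable when combined into a single image processing weight.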

After determining the image processing weight of each human object in the first image frame, a target image processing parameter may be calculated based on the preset image processing parameter and the image processing weight of each human object.

The preset image processing parameters may be the first image processing parameter and the second image processing parameter. Based on the image processing weight of each human subject, the total image processing weight of the first human subjects corresponding to the first gender in the first image frame and the total image processing weight of the second human subjects corresponding to the second gender may be determined.

As an example, to improve the image processing effect for the first image frame when calculating the target image processing parameter from the preset parameters and the per-subject weights, the total image processing weight of the first human subjects may be compared with that of the second human subjects; the gender whose subjects carry the higher total weight is selected as the target gender, and the image processing parameter corresponding to that gender is used as the target image processing parameter. For example, when the total image processing weight of the first human subjects is greater than that of the second human subjects, the first gender is taken as the target gender, that is, the first image processing parameter is taken as the target image processing parameter.

As another example, the target image processing parameter may also be calculated from the sum of products of the image processing weight of each human subject and a preset image processing parameter corresponding to the sex of the human subject.
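The two strategies above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function names are invented, a "processing parameter" is simplified to a single scalar, and normalising the weighted sum by the total weight is an assumption (the text says only "sum of products"):

```python
def pick_by_dominant_gender(weights, genders, male_param, female_param):
    """Strategy 1: use the parameter of the gender with the higher total weight."""
    male_total = sum(w for w, g in zip(weights, genders) if g == "male")
    female_total = sum(w for w, g in zip(weights, genders) if g == "female")
    return male_param if male_total > female_total else female_param

def pick_by_weighted_sum(weights, genders, male_param, female_param):
    """Strategy 2: blend each person's gender-specific parameter by their
    image processing weight (sum of products; normalisation is assumed)."""
    total = sum(weights)
    mixed = sum(w * (male_param if g == "male" else female_param)
                for w, g in zip(weights, genders))
    return mixed / total
```

With one dominant male face, strategy 1 simply picks the male parameter, while strategy 2 yields a value between the two gender parameters, proportioned by the weights.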

After the target image processing parameters are obtained through calculation, the target image processing parameters may be used to perform image processing on the face image of each human object in the first image frame, so as to obtain a target image frame.

In the present embodiment, when the performance parameter of the electronic device does not satisfy the first preset condition, the face images of person objects of different genders cannot be processed in separate image frames with gender-specific image processing parameters. Suppose instead that the face image of every person object in the first image frame is processed with a single preset parameter, say the first image processing parameter. If the first image frame contains person objects of both genders, processing every face image with the first image processing parameter may distort the face images of the second person objects corresponding to the second gender.

In this way, under the condition that the performance parameter of the electronic equipment does not meet the first preset condition, a more appropriate image processing parameter can be calculated according to the image processing weight of each human object in the first image frame and the preset image processing parameter. Therefore, the situation that image processing distortion occurs to the face image of the person object which does not correspond to the gender of the image processing parameter when the single first image processing parameter or the single second image processing parameter is adopted to carry out image processing on the face image of each person object in the first image frame can be effectively avoided, and the satisfaction degree of a user on the target image frame can be improved.

In some embodiments, the determining the image processing weight for each human object in the first image frame in the above steps may include at least one of:

calculating a distance weight of the face image of each human object based on coordinates of the face image of each human object in the first image frame;

calculating an area weight of the face image of each human subject based on the area of the face image of each human subject;

based on the shooting angle data of the face image of each human subject, a shooting angle weight of the face image of each human subject is calculated.

The distance weight may be a weight occupied by a distance from a center point of the face image of each human subject to a center point of the first image frame, and may be calculated based on coordinates of the center point of the face image of each human subject and coordinates of the center point of the first image frame. The area weight may be a weight occupied by an area of a region occupied by the face image of each human subject in the first image frame. The photographing angle weight may be a weight occupied by the photographing angle data of the face image of each human subject, and alternatively, the photographing angle data may be determined by a position of eyes in the face image of each human subject.

As shown in FIG. 2, the center point of the face image of a person object may be represented by P_i, where i identifies each person object: P_1 may be the center of the face image of the first person object, P_2 the center of the face image of the second person object, ..., and P_n the center of the face image of the n-th person object. The center point of the first image frame may be represented by P_0, and the position of the face image of a person object in the first image frame may be determined according to the distance between P_i and P_0. The length of the region occupied by the face image of a person object may be represented by l and its height by h, the area being the product of l and h. As shown in figs. 3 to 5, the shooting angle of the face image of a person object may be a front face, a side face, and so on; optionally, the shooting angle data may be determined from the position of the eyes in the face image of each person object.

The smaller the distance from the center point of the face image of a person object to the center point of the first image frame, the higher the image processing weight of that person object in the first image frame, that is, the greater its weight as an image processing object. The larger the area of the face image of a person object, the higher its image processing weight. The more the shooting angle of the face image deviates from a frontal view, the lower the image processing weight, that is, the smaller the weight of the person object as an image processing object. As shown in figs. 3 to 5, the image processing weights in the first image frame satisfy: the face image in fig. 3 is weighted higher than that in fig. 4, and the face image in fig. 4 is weighted higher than that in fig. 5.

It is to be understood that the image processing weight of each human object in the first image frame may be determined by any one preset weight of a distance weight of each human object, an area weight of each human object, and a photographing angle weight of each human object, or may be determined by any two preset weights, or may be determined by any three preset weights.

As a specific example, the image processing weight of each human subject in the first image frame may be the sum of the distance weight, the area weight, and the photographing angle weight of each human subject.

As another specific example, in order to improve accuracy of the image processing weight of each human subject, the distance weight, the area weight, and the photographing angle weight of each human subject may correspond to different coefficients, and the image processing weight of each human subject in the first image frame may also be a sum of a product of the distance weight and the distance weight coefficient, a product of the area weight and the area weight coefficient, and a product of the photographing angle weight and the photographing angle weight coefficient of each human subject. The values of the distance weight coefficient, the area weight coefficient, and the shooting angle weight coefficient may be preset, and the specific values may be set according to actual conditions.

The calculation formula of the image processing weight for each human object may be as shown in formula (1).

W_i = α·W(D)_i + β·W(S)_i + γ·W(Ang)_i    (1)

Where W_i is the image processing weight of the i-th person object, W(D)_i is the distance weight of the i-th person object, W(S)_i is its area weight, and W(Ang)_i is its shooting angle weight; α, β and γ are the corresponding weight coefficients.

In this embodiment, the image processing weight of each human object in the first image frame may be determined according to at least one of a distance weight, an area weight, and a shooting angle weight, so that the determination of the image processing weight of each human object may refer to a plurality of different dimension weight values, the obtained image processing weight may be more accurate, and thus, the target image processing parameter calculated based on the image processing weight of each human object in the first image frame may be more accurate, so that the satisfaction of the user on the target image frame may be improved.
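Formula (1) can be sketched as follows. The function name is illustrative; the coefficient defaults use the example values α = 0.3, β = 0.4, γ = 0.3 suggested later in the text:

```python
def image_processing_weight(distance_w, area_w, angle_w,
                            alpha=0.3, beta=0.4, gamma=0.3):
    """Formula (1): W_i = alpha*W(D)_i + beta*W(S)_i + gamma*W(Ang)_i.
    The default coefficients are the example values from the text and
    can be tuned to the actual application."""
    return alpha * distance_w + beta * area_w + gamma * angle_w
```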

In some embodiments, the specific implementation manner of calculating the distance weight of the face image of each human subject based on the coordinates of the face image of each human subject in the first image frame in the above steps may be as follows:

calculating a first distance between the coordinates and center coordinates of the first image frame for coordinates of the face image of each human subject in the first image frame;

calculating a first difference value between the preset distance and the first distance;

calculating the ratio of the first difference value to a preset distance to obtain the distance weight of each character object;

the specific implementation manner of calculating the area weight of the face image of each human subject based on the area of the face image of each human subject in the above steps may be as follows:

calculating the ratio of the area of the face image of each character object to the area of the first image frame to obtain the area weight of each character object;

a specific implementation manner of calculating the shooting angle weight of the face image of each human subject based on the shooting angle data of the face image of each human subject in the above steps may be as follows:

calculating a second distance between the coordinates of the left eye and the coordinates of the centers of both eyes and a third distance between the coordinates of the right eye and the coordinates of the centers of both eyes in the face image of each human subject based on the coordinates of the eyes in the photographing angle data of each human subject;

calculating a maximum value of the second distance and the third distance and a second difference value of the second distance and the third distance;

and calculating a third difference value of the maximum value and the second difference value and a ratio of the third difference value to the maximum value to obtain the weight of the shooting angle.

As an example, in order to calculate the distance weight, the distance weight may be determined by first calculating a first distance between coordinates of the face image of each human object in the first image frame and center coordinates of the first image frame, and then calculating a first difference between the preset distance and the first distance and a ratio of the first difference to the preset distance to obtain the distance weight of each human object.

The preset distance may be the maximum possible distance between the center point of the face image of a person object in the first image frame and the center point of the first image frame, that is, the preset distance may be one half of the diagonal length of the first image frame.

As shown in fig. 2, the coordinates of the face image of each person object in the first image frame may be represented by the center point P_i of the face image, and the center coordinates of the first image frame by the center point P_0 of the first image frame. To calculate the first distance, the coordinate values of P_i and P_0 in a preset coordinate system may first be obtained, where the preset coordinate system may be a plane coordinate system established in the plane of the first image frame with any point of the first image frame as the coordinate origin.

As a specific example, the coordinate value of the center point of the face image may be P_i(x_i, y_i) and the coordinate value of the center point of the first image frame may be P_0(x_0, y_0). The first distance may then be calculated as shown in formula (2).

D_i = sqrt((x_i − x_0)² + (y_i − y_0)²)    (2)

Where D_i is the first distance between the position of the face image of the i-th person object in the first image frame and the center position of the first image frame, x_i and y_i are the abscissa and ordinate of the center point of the face image of the i-th person object, and x_0 and y_0 are the abscissa and ordinate of the center point of the first image frame, with 1 ≤ i ≤ n, where n is the number of person objects in the first image frame.

After calculating the first distance between the coordinates of the face image of each person object and the center coordinates of the first image frame, the first difference between the preset distance and the first distance, and the ratio of that first difference to the preset distance, may be calculated to obtain the distance weight of each person object. In the present embodiment, the distance from the center point of the face image to the center point of the first image frame may be regarded as inversely proportional to the distance weight of the person object, as shown in formula (3).

W(D)_i = (D − D_i) / D    (3)

Where W(D)_i is the distance weight of the i-th person object, D_i is the first distance, and D is the preset distance, with 1 ≤ i ≤ n, where n is the number of person objects in the first image frame.
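The distance-weight computation of formulas (2) and (3) can be sketched as follows (function and parameter names are illustrative):

```python
import math

def distance_weight(face_center, frame_center, preset_distance):
    """Formulas (2) and (3): D_i is the Euclidean distance from the face
    center to the frame center; W(D)_i = (D - D_i) / D, where D is the
    preset maximum distance."""
    xi, yi = face_center
    x0, y0 = frame_center
    d_i = math.hypot(xi - x0, yi - y0)                # formula (2)
    return (preset_distance - d_i) / preset_distance  # formula (3)
```

A face centered exactly on the frame center gets weight 1; a face at the preset maximum distance gets weight 0.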

As an example, the area weight may be determined by calculating the ratio of the area of the face image of each person object to the area of the first image frame, obtaining the area weight of each person object.

The first image frame area may refer to the total number of pixels in the first image frame, and the face image area of each human subject may refer to the number of pixels in the face image area.

As a specific example, the ratio of the arithmetic square root of the area of the face image of each person object to the arithmetic square root of the area of the first image frame may also be calculated to obtain the area weight of each person object. The arithmetic square root of the area of the face image may be regarded as directly proportional to the area weight of the person object, as shown in formula (4).

W(S)_i = sqrt(S_i) / sqrt(S)    (4)

Where W(S)_i is the area weight of the i-th person object, S_i is the area of the face image of the i-th person object, and S is the area of the first image frame, with 1 ≤ i ≤ n, where n is the number of person objects in the first image frame.
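Formula (4) can be sketched as follows, with areas measured in pixels as described above (function and parameter names are illustrative):

```python
import math

def area_weight(face_area, frame_area):
    """Formula (4): W(S)_i = sqrt(S_i) / sqrt(S), the ratio of the
    arithmetic square roots of the face-image area and the frame area
    (both counted in pixels)."""
    return math.sqrt(face_area) / math.sqrt(frame_area)
```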

As an example, the shooting angle weight may be determined as follows: based on the eye coordinates in the shooting angle data of each person object, calculate the second distance between the left-eye coordinates and the two-eye center coordinates and the third distance between the right-eye coordinates and the two-eye center coordinates; calculate the maximum of the second and third distances and the second difference between the second and third distances; then calculate the third difference between that maximum and the second difference, and the ratio of the third difference to the maximum, to obtain the shooting angle weight.

The center coordinates of the two eyes of each person object may be represented by a center position point O_i, the left-eye coordinates by a left-eye position point L_i, and the right-eye coordinates by a right-eye position point R_i. To calculate the second distance between the left-eye coordinates and the two-eye center coordinates and the third distance between the right-eye coordinates and the two-eye center coordinates, the coordinate values of O_i, L_i and R_i in the preset coordinate system may first be obtained.

As a specific example, the coordinate value of the center position point may be O_i(x_i1, y_i1), the coordinate value of the left-eye position point may be L_i(x_i2, y_i2), and the coordinate value of the right-eye position point may be R_i(x_i3, y_i3). The second and third distances may then be calculated as shown in formula (5).

LO_i = sqrt((x_i2 − x_i1)² + (y_i2 − y_i1)²),  RO_i = sqrt((x_i3 − x_i1)² + (y_i3 − y_i1)²)    (5)

Where LO_i is the second distance between the left eye and the two-eye center position of the i-th person object, RO_i is the third distance between the right eye and the two-eye center position, x_i1 and y_i1 are the abscissa and ordinate of the center position point, x_i2 and y_i2 those of the left-eye position point, and x_i3 and y_i3 those of the right-eye position point, with 1 ≤ i ≤ n, where n is the number of person objects in the first image frame.

It is to be understood that, when the face image is in the front in the first image frame as shown in fig. 3, the second distance may be considered to be generally equal to the third distance. As shown in fig. 4 and 5, when the face image is shifted to the left in the first image frame, the third distance may be considered to be greater than the second distance. When the face image appears as a side face in the first image frame, that is, the left-eye position point is not visible, and the center position point and the right-eye position point are visible, it can be considered that the left-eye position point coincides with the center position point, and the second distance is 0. When the center position point is not visible in the first image frame of the face image, the distance from the invisible left-eye position point or right-eye position point to the center position point may be considered to be 0, and the distance from the corresponding other visible left-eye position point or right-eye position point to the center position point may be considered to be 1, for example, when only the right-eye position point is visible in the first image frame of the face image, the second distance may be considered to be 0, and the third distance may be considered to be 1.

After calculating the second distance between the left-eye coordinates and the two-eye center coordinates and the third distance between the right-eye coordinates and the two-eye center coordinates in the face image of each person object, the maximum of the second and third distances may be determined. The second difference, that is, the absolute value of the difference between the second and third distances, may then be calculated, followed by the third difference between the maximum and the second difference; the ratio of the third difference to the maximum gives the shooting angle weight.

The shooting angle weight of each person object may be calculated as shown in formula (6).

W(Ang)_i = (max(LO_i, RO_i) − |LO_i − RO_i|) / max(LO_i, RO_i)    (6)

Where W(Ang)_i is the shooting angle weight of the i-th person object, LO_i is the second distance between the left eye and the two-eye center position of the i-th person object, and RO_i is the third distance between the right eye and the two-eye center position, with 1 ≤ i ≤ n, where n is the number of person objects in the first image frame.
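Formulas (5) and (6) can be sketched as follows; coordinates are assumed to already encode the visibility fallbacks described above (an invisible eye coinciding with the center point, etc.), and names are illustrative:

```python
import math

def shooting_angle_weight(left_eye, right_eye, eye_center):
    """Formulas (5) and (6): LO_i and RO_i are the distances from the left
    and right eye to the two-eye center point; the weight is
    (max(LO, RO) - |LO - RO|) / max(LO, RO), i.e. 1 for a frontal face
    and approaching 0 as the face turns sideways. Side-face fallbacks
    (invisible eye treated as distance 0, the other as 1) are assumed to
    be applied to the inputs beforehand."""
    lo = math.hypot(left_eye[0] - eye_center[0], left_eye[1] - eye_center[1])
    ro = math.hypot(right_eye[0] - eye_center[0], right_eye[1] - eye_center[1])
    m = max(lo, ro)
    if m == 0:
        return 0.0  # degenerate input; this case is not specified by the text
    return (m - abs(lo - ro)) / m
```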

As a specific example, it can be seen from formulas (3), (4) and (6) that the distance weight of a person object can easily reach the value 1, whereas the area weight rarely does, since the area of a face image is rarely equal to the area of the first image frame. To balance the contributions of the distance weight, area weight and shooting angle weight to the image processing weight of a person object, α in formula (1) may be set to 0.3, β to 0.4, and γ to 0.3.

In this embodiment, the distance weight, the area weight, and the shooting angle weight of each person object may be calculated, and the image processing weight of each person object may be determined according to the calculated distance weight, area weight, and shooting angle weight, so that the determination of the image processing weight of each person object may refer to a plurality of different dimension weight values, and the obtained image processing weight may be more accurate, and thus, the target image processing parameter calculated based on the image processing weight of each person object in the first image frame may be more accurate, and thus, the satisfaction of the user on the target image frame may be improved.

In some embodiments, before the image processing is performed on the face image of each human object in the first image frame by the target image processing parameter in the above step to obtain the target image frame, the following steps may be further performed:

determining a target face image of a target human object satisfying a second preset condition in the first image frame;

wherein the second preset condition comprises at least one of the following: the area ratio of the face image of the person object in the first image frame is larger than a first preset threshold value, and the distance between the face image of the person object and the edge of the first image frame is larger than a second preset threshold value;

correspondingly, the specific implementation manner of performing image processing on the face image of each human object in the first image frame through the target image processing parameter in the above step to obtain the target image frame may be as follows:

and carrying out image processing on the target face image of the target person object through the target image processing parameters to obtain a target image frame.

When the area ratio of the face image of a person object in the first image frame is smaller than or equal to the first preset threshold, that is, the face image occupies only a small portion of the first image frame, the person object may be regarded as having accidentally entered the first image frame. Likewise, when the distance between the face image of a person object and the edge of the first image frame is smaller than or equal to the second preset threshold, that is, the face image lies at the edge of the first image frame, the person object may be regarded as having accidentally entered the first image frame. In other words, when the area ratio of the face image in the first image frame is greater than the first preset threshold, or its distance from the edge of the first image frame is greater than the second preset threshold, or both conditions are satisfied, the person object may be determined as the target person object and its face image as the target face image.

As a specific example, the greater the distance between the face image of a person object and the edge of the first image frame, the smaller the corresponding first distance D_i; by formula (3), the smaller D_i, the larger W(D)_i. Likewise, the larger the area occupied by the face image in the first image frame, the larger W(S)_i. The target person object may therefore satisfy the condition shown in formula (7).

W(D)_i > T_1 or W(S)_i > T_2    (7)

Where W(D)_i is the distance weight of the i-th person object, W(S)_i is its area weight, T_1 is a distance weight threshold, and T_2 is an area weight threshold.

It is to be understood that, when the distance weight of the human subject is greater than the distance weight threshold, the center point of the face image of the human subject may be considered to be closer to the center point of the first image frame, that is, the face image of the human subject is not located at the edge position of the first image frame, in other words, it may be determined that the human subject is not mistakenly inserted into the first image frame, and may be determined as the target human subject. When the area weight of the human subject is larger than the area weight threshold, it may be determined that the area ratio of the face image of the human subject in the first image frame is large, in other words, it may be determined that the human subject is not mistakenly inserted into the first image frame and may be determined as the target human subject.

When the face image of a person object satisfies formula (7), the target face image to be subjected to image processing can be accurately extracted, so that accidentally entered face images are eliminated.
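The screening described above can be sketched as follows. The disjunctive reading (either threshold sufficing) follows the "or ... or both" wording earlier in this section, and the function name and threshold values are illustrative:

```python
def is_target_face(distance_w, area_w, t1, t2):
    """Formula (7): keep a face whose distance weight exceeds the
    distance-weight threshold T1 or whose area weight exceeds the
    area-weight threshold T2; other faces are treated as accidental
    entries and excluded from image processing."""
    return distance_w > t1 or area_w > t2
```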

After determining the target face image of the target human subject in the first image frame, the target face image may be image processed by the target image processing parameters to obtain a target image frame.

When the area occupied by a face image in the first image frame is small and the face image lies in the edge area of the first image frame, the person object may be regarded as having accidentally entered the first image frame. In that case, when image processing is performed with the target image processing parameters, the accidentally entered person object can be excluded, and only the target face image is processed to obtain the target image frame.

Therefore, the face image which is mistakenly entered in the first image frame can be prevented from being subjected to image processing, only the target face image of the target person object meeting the second preset condition is subjected to image processing, the number of the target images subjected to image processing in the first image frame is reduced, the risk of image processing distortion of the first image frame is reduced, and the satisfaction degree of a user on the target image frame can be improved.

As a specific example, to improve the image processing effect of the image frames, after the target face images of the target person objects in the first image frame are determined, and in the case where the performance parameter of the electronic device does not satisfy the first preset condition, the image processing weight of each target person object in the first image frame may be determined. The target image processing parameter may then be calculated based on the preset image processing parameters and the image processing weight of each target person object, and used to perform image processing on the target face images of the target person objects in the first image frame to obtain the target image frame.

In this way, face images of person objects that accidentally appear in the first image frame can be effectively removed, and only the image processing weights of target face images satisfying the second preset condition need to be considered when the target image processing parameter is calculated. Because the accidentally entered face images, that is, the erroneous data, have been removed, the accuracy of the data used to calculate the target image processing parameter is ensured, the error of the target image processing parameter is effectively reduced, and the satisfaction of the user with the target image frame can be improved.

In some embodiments, the preset image processing parameters may include a first image processing parameter and a second image processing parameter; correspondingly, the specific implementation manner of calculating the target image processing parameter based on the preset image processing parameter and the image processing weight of each person object in the above steps may be as follows:

calculating the sum of image processing weights of a first human object in a first image frame to obtain a first total weight;

calculating the sum of the image processing weights of the second person object in the first image frame to obtain a second total weight;

calculating an adjustment coefficient based on the first total weight and the second total weight;

calculating a first product of the adjustment coefficient and the first image processing sub-parameter to obtain a first target image processing parameter; the first image processing sub-parameter is a parameter for processing images of first type facial features of a first gender and a second gender, the first type facial features are the same facial features of the first gender and the second gender, and image processing parameter values of the first type facial features are different;

calculating a second product of the adjustment coefficient and the second image processing sub-parameter to obtain a second target image processing parameter; the second image processing sub-parameter is a parameter for processing a second type of facial features of the first gender and the second gender, and the second type of facial features are different facial features of the first gender and the second gender;

determining a target image processing parameter based on the first target image processing parameter, the second target image processing parameter and the third image processing sub-parameter; the third image processing sub-parameter is a parameter for performing image processing on a third type of facial features of the first gender and the second gender, the third type of facial features are the same facial features of the first gender and the second gender, and image processing parameter values of the third type of facial features are the same.

To calculate the target image processing parameter, the sum of the image processing weights of the first person objects in the first image frame may first be calculated to obtain a first total weight, and the sum of the image processing weights of the second person objects to obtain a second total weight. In other words, the sum of the image processing weights of all males and the sum of the image processing weights of all females in the first image frame may be calculated separately.

As an example, suppose the first total weight is the sum of the image processing weights of the males and the second total weight is the sum of the image processing weights of the females. The first and second total weights may then be calculated as shown in formula (8).

W(m) = Σ_{j=1}^{p} W_j,  W(w) = Σ_{k=1}^{q} W_k    (8)

Where W(m) is the sum of the image processing weights of the males, W(w) is the sum of the image processing weights of the females, W_j is the image processing weight of the j-th male and W_k that of the k-th female, with 1 ≤ j ≤ p and 1 ≤ k ≤ q, where p is the number of males and q the number of females in the first image frame.

After the first total weight and the second total weight are calculated, the adjustment coefficient may be calculated based on the first total weight and the second total weight. It is understood that the adjustment coefficient may be obtained by calculating the absolute value of the difference between the first total weight and the second total weight, and then calculating the ratio of that absolute value to the first total weight or to the second total weight. The calculation of the adjustment coefficients may be as shown in formula (9):

a = |W(m) - W(w)| / W(m),    b = |W(m) - W(w)| / W(w)    (9)

Wherein W(m) represents the sum of image processing weights for males, W(w) represents the sum of image processing weights for females, a and b may both represent adjustment coefficients, a may be a first adjustment coefficient corresponding to the female gender, and b may be a second adjustment coefficient corresponding to the male gender.
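A minimal sketch of the adjustment-coefficient calculation described above. Which total weight serves as the denominator for a versus b is an assumption inferred from the pairing of a with the female gender and b with the male gender; the text only states that either total weight may be the denominator.

```python
def adjustment_coefficients(w_m, w_w):
    """Compute the two adjustment coefficients from the total weights.

    Assumption: a (first coefficient, female side) divides by W(m) and
    b (second coefficient, male side) divides by W(w).
    """
    diff = abs(w_m - w_w)  # |W(m) - W(w)|
    a = diff / w_m         # first adjustment coefficient
    b = diff / w_w         # second adjustment coefficient
    return a, b

# Example: males dominate the frame (W(m) = 2.0, W(w) = 1.0)
a, b = adjustment_coefficients(2.0, 1.0)
```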

As an example, in order to accurately obtain the target image processing parameter, after calculating the adjustment coefficient, a first target image processing parameter may be obtained according to a first product of the adjustment coefficient and the first image processing sub-parameter, a second product of the adjustment coefficient and the second image processing sub-parameter may be calculated to obtain a second target image processing parameter, and then the target image processing parameter may be determined based on the first target image processing parameter, the second target image processing parameter, and the third image processing sub-parameter.

The first image processing sub-parameter may be a parameter for performing image processing on first type facial features of the first gender and the second gender, the first type facial features may be the same facial features of the first gender and the second gender, and image processing parameter values of the first type facial features are different, for example, the first image processing sub-parameter may include a whitening intensity, a contrast intensity, and the like. The second image processing sub-parameter may be a parameter for performing image processing on a second type of facial features of the first gender and the second gender, and the second type of facial features may be different facial features of the first gender and the second gender, for example, the second image processing sub-parameter may be beard blur smoothing and the like. The third image processing sub-parameter may be a parameter for performing image processing on a third type of facial features of the first gender and the second gender, the third type of facial features may be the same facial features of the first gender and the second gender, and image processing parameter values of the third type of facial features are the same, for example, the third image processing sub-parameter may be brightness and the like.

The target image processing parameter may be the direct sum of the first target image processing parameter, the second target image processing parameter, and the third image processing sub-parameter, or may be a weighted sum in which each of the three is multiplied by a corresponding weight value before being added.

As a specific example, in order to obtain the target image processing parameter more accurately, the first image processing sub-parameter may be a difference parameter of the female image processing parameter compared with the male image processing parameter, the second image processing sub-parameter may be a difference parameter of the male image processing parameter compared with the female image processing parameter, and the third image processing sub-parameter may be the same image processing parameter for both the male and the female.

The product of the first image processing sub-parameter and the first adjustment coefficient a may be calculated, and the product of the second image processing sub-parameter and the second adjustment coefficient b may be calculated; the target image processing parameter may then be the sum of these two products and the third image processing sub-parameter.
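The combination above can be sketched as target = a·P1 + b·P2 + P3, applied per named parameter value. The dict representation and the parameter names (whitening, beard blur, brightness, taken from the examples earlier in the text) are illustrative assumptions.

```python
def target_parameters(a, b, p1, p2, p3):
    """Combine the three sub-parameter sets into the target parameters:
    a * first sub-parameter + b * second sub-parameter + third
    sub-parameter, evaluated for each named parameter.
    """
    keys = set(p1) | set(p2) | set(p3)
    return {k: a * p1.get(k, 0.0) + b * p2.get(k, 0.0) + p3.get(k, 0.0)
            for k in keys}

# Illustrative parameter names; values are fractions of full strength
target = target_parameters(
    0.5, 1.0,
    p1={"whitening": 0.4},    # female-vs-male difference parameters
    p2={"beard_blur": 0.6},   # male-vs-female difference parameters
    p3={"brightness": 0.2},   # parameters shared by both genders
)
```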

Therefore, the adjustment coefficient can be calculated from the ratio between the total image processing weights of the person objects of different genders in the first image frame, so that the image processing parameters are adjusted according to the adjustment coefficient and the difference parameters among the image processing parameters of the different genders, and a more accurate target image processing parameter is obtained. Therefore, when the face images of person objects of different genders are subjected to image processing, image processing distortion of the image frame can be effectively avoided, and the satisfaction of the user with the target image frame can be improved.

In order to facilitate understanding of the image processing method provided by the above embodiment, the following describes the above image processing method with a specific scene embodiment. Fig. 6 is a schematic flowchart of a scene embodiment of an image processing method according to an embodiment of the present application.

The application scenario of the scenario embodiment may be as follows: the electronic device acquires a first image frame and performs image processing on a face image of a human object in the first image frame. The scenario embodiment may specifically include the following steps:

in step 601, the electronic device starts an image processing program.

Step 602, determining whether the performance parameter of the electronic device meets a first preset condition; if yes, proceed to step 603; if not, go to step 606.

Step 603, the electronic device acquires a first image frame and a second image frame.

Step 604, image processing is performed on a first facial image in the first image frame by the first image processing parameter, and image processing is performed on a second facial image in the second image frame by the second image processing parameter.

Step 605, the first image frame after image processing and the second image frame after image processing are combined to obtain a target image frame.

At step 606, an image processing weight for each human object in the first image frame is determined.

Step 607, calculating a first total weight of the first human object and a second total weight of the second human object, calculating an adjustment coefficient according to the first total weight and the second total weight, determining a target image processing parameter according to the adjustment coefficient and a preset image processing parameter, and performing image processing on the facial image of each human object in the first image frame by using the target image processing parameter to obtain a target image frame.

In step 608, the electronic device ends the image processing routine.

In this scenario embodiment, on the one hand, when the performance parameter of the electronic device satisfies the first preset condition, image processing parameters of different genders may be used to process the face images of the person objects of the corresponding genders in different image frames, so that image processing distortion of the image frames is effectively avoided and the satisfaction of the user with the target image frame is improved. On the other hand, when the performance parameter does not satisfy the first preset condition, a more suitable image processing parameter can be calculated from the ratio of the total image processing weights of the person objects of different genders in the first image frame. This effectively avoids the image processing distortion that occurs when a single first or second image processing parameter is applied to the face image of every person object in the first image frame, including those whose gender does not match that parameter, and thus also improves the satisfaction of the user with the target image frame.
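The branch in steps 601 to 608 can be sketched as below. All helpers are hypothetical stand-ins, purely to show the control flow: "processing" a frame is modeled as pairing it with the parameter set applied to it.

```python
def run_image_processing(performance_ok, frames, params):
    """Control flow of steps 601-608 of the scenario embodiment.

    performance_ok: whether the first preset condition is met (step 602).
    frames: captured image frames (stand-ins for real frame data).
    params: dict with 'first', 'second', and 'target' parameter sets.
    """
    if performance_ok:
        first, second = frames[0], frames[1]           # step 603
        processed = [(first, params["first"]),         # step 604: per-gender
                     (second, params["second"])]       #   parameters
        return ("synthesized", processed)              # step 605
    # Steps 606-607: single-frame path with blended target parameters
    return ("single", [(frames[0], params["target"])])
```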

It should be noted that, in the image processing method provided in the embodiment of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the loaded image processing method. In the embodiment of the present application, an image processing apparatus executing the loaded image processing method is taken as an example to describe the image processing method provided in the embodiment of the present application.

Fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. The image processing apparatus 700 may include:

an obtaining module 701, configured to obtain a first image frame and a second image frame acquired by a camera;

a first image processing module 702, configured to perform image processing on a first facial image in a first image frame according to a first image processing parameter, and perform image processing on a second facial image in a second image frame according to a second image processing parameter;

a synthesizing module 703, configured to perform image synthesis on the first image frame and the second image frame after the image processing to obtain a target image frame;

the first face image is a face image of a first person object, the first person object is of a first gender, the second face image is a face image of a second person object, the second person object is of a second gender, and the first gender and the second gender are different.

In the embodiment of the application, image processing is performed on the face image of the first person object of the first gender in the first image frame with the first image processing parameter, image processing is performed on the face image of the second person object of the second gender in the second image frame with the second image processing parameter, and the target image frame is obtained by synthesis based on the first image frame and the second image frame after image processing. Because the face images of person objects of different genders in different image frames are processed with different image processing parameters, distortion of the image frames can be effectively avoided; the target image frame, synthesized from two distortion-free processed image frames, is likewise free of distortion, and the satisfaction of the user with the target image frame can be improved.

In some embodiments, the obtaining module 701 may be specifically configured to:

under the condition that the performance parameters of the electronic equipment meet a first preset condition, a first image frame and a second image frame acquired by a camera are acquired.

When the performance of the electronic device is weak, the electronic device may be unable to acquire multiple image frames during photographing and previewing, or unable to separately process the face images of person objects of different genders in different image frames with different image processing parameters. Therefore, the first image frame and the second image frame acquired by the camera are acquired only when the performance parameter of the electronic device meets the first preset condition. This avoids the situation in which, after the first image frame and the second image frame have been acquired, the face images of person objects of different genders in different image frames cannot be separately processed with different image processing parameters. The success rate of image processing and the image processing efficiency can thereby be improved, the quality of the target image frame is ensured, and the satisfaction of the user with the target image frame can be improved.

In some embodiments, the image processing apparatus 700 may further include:

a first determination module, configured to determine an image processing weight of each human object in the first image frame if the performance parameter of the electronic device does not satisfy a first preset condition, the image processing weight indicating a weight of a face image of the human object as an image processing object;

the calculation module is used for calculating target image processing parameters based on preset image processing parameters and the image processing weight of each character object;

and the second image processing module is used for carrying out image processing on the face image of each person object in the first image frame through the target image processing parameters to obtain a target image frame.

In this way, when the performance parameter of the electronic device does not meet the first preset condition, a more appropriate image processing parameter can be calculated from the image processing weight of each person object in the first image frame and the preset image processing parameters. This effectively avoids the image processing distortion that occurs when a single first or second image processing parameter is applied to the face image of every person object in the first image frame, including those whose gender does not match that parameter, and the satisfaction of the user with the target image frame can be improved.

In some embodiments, the first determining module may include:

a first calculation unit configured to calculate a distance weight of the face image of each human subject based on coordinates of the face image of each human subject in the first image frame;

a second calculation unit configured to calculate an area weight of the face image of each human subject based on the area of the face image of each human subject;

a third calculation unit configured to calculate a photographing angle weight of the face image of each human subject based on photographing angle data of the face image of each human subject.

In this embodiment, the image processing weight of each person object in the first image frame may be determined from at least one of a distance weight, an area weight, and a shooting angle weight. The determination of the image processing weight can thus draw on weight values of several different dimensions, making the obtained image processing weight more accurate; the target image processing parameter calculated from the image processing weights of the person objects in the first image frame is then also more accurate, and the satisfaction of the user with the target image frame can be improved.

In some embodiments, the first calculation unit may include:

a first calculation subunit configured to calculate, for coordinates of the face image of each human subject in the first image frame, a first distance between the coordinates and center coordinates of the first image frame;

the second calculating subunit is used for calculating a first difference value between the preset distance and the first distance;

the third calculation subunit is used for calculating the ratio of the first difference value to the preset distance to obtain the distance weight of each character object;

accordingly, the second calculation unit may include:

the fourth calculating subunit is used for calculating the ratio of the area of the face image of each person object to the area of the first image frame to obtain the area weight of each person object;

accordingly, the third calculation unit may include:

a fifth calculating subunit configured to calculate, based on coordinates of the eyes in the shooting angle data of each human subject, a second distance between the coordinates of the left eye and the coordinates of the center of both eyes and a third distance between the coordinates of the right eye and the coordinates of the center of both eyes in the face image of each human subject;

a sixth calculating subunit operable to calculate a maximum value of the second distance and the third distance, and a second difference value of the second distance and the third distance;

and the seventh calculating subunit is used for calculating a third difference value of the maximum value and the second difference value and a ratio of the third difference value to the maximum value to obtain the weight of the shooting angle.

In this embodiment, the distance weight, the area weight, and the shooting angle weight of each person object may be calculated, and the image processing weight of each person object determined from them. The determination of the image processing weight can thus draw on weight values of several different dimensions, making the obtained image processing weight more accurate; the target image processing parameter calculated from the image processing weights of the person objects in the first image frame is then also more accurate, and the satisfaction of the user with the target image frame can be improved.
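The three weight formulas described by the subunits above can be sketched as follows. The coordinate conventions, the reading of the "second difference" as |d2 - d3|, and the multiplication used to combine the three weights into one image processing weight are assumptions, not stated in the text.

```python
import math

def distance_weight(face_xy, frame_center_xy, preset_distance):
    """(preset distance - first distance) / preset distance."""
    d1 = math.dist(face_xy, frame_center_xy)
    return (preset_distance - d1) / preset_distance

def area_weight(face_area, frame_area):
    """Face area as a fraction of the first image frame's area."""
    return face_area / frame_area

def angle_weight(left_eye_xy, right_eye_xy, eyes_center_xy):
    """(max(d2, d3) - |d2 - d3|) / max(d2, d3).

    A frontal face (d2 == d3) scores 1.0; the score shrinks as the
    head turns and one eye-to-center distance dominates.
    """
    d2 = math.dist(left_eye_xy, eyes_center_xy)
    d3 = math.dist(right_eye_xy, eyes_center_xy)
    m = max(d2, d3)
    return (m - abs(d2 - d3)) / m

def image_processing_weight(dw, aw, gw):
    """One possible combination of the three weights (assumption)."""
    return dw * aw * gw
```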

In some embodiments, the image processing apparatus 700 may further include:

a second determination module for determining a target face image of the target human subject satisfying a second preset condition in the first image frame;

wherein the second preset condition comprises at least one of the following: the area ratio of the face image of the person object in the first image frame is larger than a first preset threshold value, and the distance between the face image of the person object and the edge of the first image frame is larger than a second preset threshold value;

the second image processing module may be specifically configured to:

and carrying out image processing on the target face image of the target person object through the target image processing parameters to obtain a target image frame.

In this way, face images that accidentally enter the first image frame can be excluded from image processing, and only the target face images of target person objects meeting the second preset condition are processed. This reduces the number of face images subjected to image processing in the first image frame, reduces the risk of image processing distortion of the first image frame, and improves the satisfaction of the user with the target image frame.
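The second preset condition can be sketched as below. The bounding-box representation, the threshold values, and the choice to require both checks are assumptions; the text says the condition comprises "at least one of" the two checks, so either one alone would also be a valid configuration.

```python
def meets_second_condition(face_box, frame_size,
                           min_area_ratio, min_edge_distance):
    """Filter for target face images per the second preset condition.

    face_box: (x, y, w, h) bounding box of the face image.
    frame_size: (W, H) of the first image frame.
    Checks both clauses: area ratio above a first preset threshold and
    distance to the frame edge above a second preset threshold.
    """
    x, y, w, h = face_box
    fw, fh = frame_size
    area_ok = (w * h) / (fw * fh) > min_area_ratio
    edge_distance = min(x, y, fw - (x + w), fh - (y + h))
    edge_ok = edge_distance > min_edge_distance
    return area_ok and edge_ok

# A centered face passes; a face hugging the frame edge is filtered out
keep = meets_second_condition((40, 40, 20, 20), (100, 100), 0.03, 10)
drop = meets_second_condition((2, 2, 20, 20), (100, 100), 0.03, 10)
```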

In some embodiments, the preset image processing parameters may include a first image processing parameter and a second image processing parameter; the calculation module may include:

the fourth calculating unit is used for calculating the sum of the image processing weights of the first human object in the first image frame to obtain a first total weight;

the fifth calculating unit is used for calculating the sum of the image processing weights of the second person object in the first image frame to obtain a second total weight;

a sixth calculation unit configured to calculate an adjustment coefficient based on the first total weight and the second total weight;

the seventh calculating unit is used for calculating a first product of the adjusting coefficient and the first image processing sub-parameter to obtain a first target image processing parameter; the first image processing sub-parameter is a parameter for processing images of first type facial features of a first gender and a second gender, the first type facial features are the same facial features of the first gender and the second gender, and image processing parameter values of the first type facial features are different;

the eighth calculating unit is used for calculating a second product of the adjusting coefficient and the second image processing sub-parameter to obtain a second target image processing parameter; the second image processing sub-parameter is a parameter for processing a second type of facial features of the first gender and the second gender, and the second type of facial features are different facial features of the first gender and the second gender;

a ninth calculation unit for determining a target image processing parameter based on the first target image processing parameter, the second target image processing parameter, and the third image processing sub-parameter; the third image processing sub-parameter is a parameter for performing image processing on a third type of facial features of the first gender and the second gender, the third type of facial features are the same facial features of the first gender and the second gender, and image processing parameter values of the third type of facial features are the same.

Therefore, the adjustment coefficient can be calculated from the ratio between the total image processing weights of the person objects of different genders in the first image frame, so that the image processing parameters are adjusted according to the adjustment coefficient and the difference parameters among the image processing parameters of the different genders, and a more accurate target image processing parameter is obtained. Therefore, when the face images of person objects of different genders are subjected to image processing, image processing distortion of the image frame can be effectively avoided, and the satisfaction of the user with the target image frame can be improved.

The image processing apparatus 700 in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus can be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, or a self-service machine; the embodiments of the present application are not particularly limited thereto.

The image processing apparatus 700 in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, and the embodiments of the present application are not specifically limited thereto.

The image processing apparatus 700 provided in the embodiment of the present application can implement each process in the method embodiments of fig. 1 to fig. 6, and is not described herein again to avoid repetition.

Optionally, an embodiment of the present application further provides an electronic device, including a processor 810, a memory 809, and a program or instruction stored in the memory 809 and executable on the processor 810. When executed by the processor 810, the program or instruction implements each process of the above image processing method embodiment and can achieve the same technical effect; to avoid repetition, details are not described here again.

It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.

Fig. 8 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.

The electronic device 800 includes, but is not limited to: a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, a display unit 806, a user input unit 807, an interface unit 808, a memory 809, and a processor 810.

Those skilled in the art will appreciate that the electronic device 800 may further comprise a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 810 via a power management system, so as to implement functions such as managing charging, discharging, and power consumption via the power management system. The electronic device structure shown in Fig. 8 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than those shown, combine some components, or arrange the components differently; details are not repeated here.

Wherein, the processor 810 may be configured to:

acquiring a first image frame and a second image frame acquired by a camera;

performing image processing on a first face image in the first image frame through the first image processing parameter, and performing image processing on a second face image in the second image frame through the second image processing parameter;

carrying out image synthesis on the first image frame and the second image frame after image processing to obtain a target image frame;

the first face image is a face image of a first person object, the first person object is of a first gender, the second face image is a face image of a second person object, the second person object is of a second gender, and the first gender and the second gender are different.

In the embodiment of the application, image processing is performed on the face image of the first person object of the first gender in the first image frame with the first image processing parameter, image processing is performed on the face image of the second person object of the second gender in the second image frame with the second image processing parameter, and the target image frame is obtained by synthesis based on the first image frame and the second image frame after image processing. Because the face images of person objects of different genders in different image frames are processed with different image processing parameters, distortion of the image frames can be effectively avoided; the target image frame, synthesized from two distortion-free processed image frames, is likewise free of distortion, and the satisfaction of the user with the target image frame can be improved.

In some embodiments, the processor 810 may be further configured to:

under the condition that the performance parameters of the electronic equipment meet a first preset condition, a first image frame and a second image frame acquired by a camera are acquired.

When the performance of the electronic device is weak, the electronic device may be unable to acquire multiple image frames during photographing and previewing, or unable to separately process the face images of person objects of different genders in different image frames with different image processing parameters. Therefore, the first image frame and the second image frame acquired by the camera are acquired only when the performance parameter of the electronic device meets the first preset condition. This avoids the situation in which, after the first image frame and the second image frame have been acquired, the face images of person objects of different genders in different image frames cannot be separately processed with different image processing parameters. The success rate of image processing and the image processing efficiency can thereby be improved, the quality of the target image frame is ensured, and the satisfaction of the user with the target image frame can be improved.

In some embodiments, the processor 810 may be further configured to:

determining an image processing weight of each human object in the first image frame when the performance parameter of the electronic device does not satisfy the first preset condition, the image processing weight indicating the weight of the face image of the human object as an image processing object;

calculating a target image processing parameter based on a preset image processing parameter and the image processing weight of each character object;

and performing image processing on the face image of each person object in the first image frame through the target image processing parameters to obtain a target image frame.

In this way, when the performance parameter of the electronic device does not meet the first preset condition, a more appropriate image processing parameter can be calculated from the image processing weight of each person object in the first image frame and the preset image processing parameters. This effectively avoids the image processing distortion that occurs when a single first or second image processing parameter is applied to the face image of every person object in the first image frame, including those whose gender does not match that parameter, and the satisfaction of the user with the target image frame can be improved.

In some embodiments, the processor 810 may be further configured to:

calculating a distance weight of the face image of each human object based on coordinates of the face image of each human object in the first image frame;

calculating an area weight of the face image of each human subject based on the area of the face image of each human subject;

based on the shooting angle data of the face image of each human subject, a shooting angle weight of the face image of each human subject is calculated.

In this embodiment, the image processing weight of each person object in the first image frame may be determined from at least one of a distance weight, an area weight, and a shooting angle weight. The determination of the image processing weight can thus draw on weight values of several different dimensions, making the obtained image processing weight more accurate; the target image processing parameter calculated from the image processing weights of the person objects in the first image frame is then also more accurate, and the satisfaction of the user with the target image frame can be improved.

In some embodiments, the processor 810 may be further configured to:

calculating a first distance between the coordinates and center coordinates of the first image frame for coordinates of the face image of each human subject in the first image frame;

calculating a first difference value between the preset distance and the first distance;

calculating the ratio of the first difference value to a preset distance to obtain the distance weight of each character object;

calculating the ratio of the area of the face image of each character object to the area of the first image frame to obtain the area weight of each character object;

calculating, based on the coordinates of the eyes in the shooting angle data of each human subject, a second distance between the coordinates of the left eye and the coordinates of the center of both eyes and a third distance between the coordinates of the right eye and the coordinates of the center of both eyes in the face image of each human subject;

calculating a maximum value of the second distance and the third distance and a second difference value of the second distance and the third distance;

and calculating a third difference value of the maximum value and the second difference value and a ratio of the third difference value to the maximum value to obtain the weight of the shooting angle.

In this embodiment, the distance weight, the area weight, and the shooting angle weight of each human object may be calculated, and the image processing weight of each human object may be determined from them. The determination of the image processing weight thus draws on weight values of multiple different dimensions, so the obtained image processing weight is more accurate; the target image processing parameter calculated from these weights is in turn more accurate, which improves the user's satisfaction with the target image frame.
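The three weight calculations above can be sketched as follows. All function and variable names are illustrative and not part of the claims; the coordinates of the center of both eyes are treated as a detected landmark supplied with the shooting angle data rather than a computed midpoint (a frontal face yields roughly equal second and third distances, so the shooting angle weight approaches 1).

```python
import math

def compute_weights(face_center, face_area, left_eye, right_eye, eyes_center,
                    frame_center, frame_area, preset_distance):
    """Sketch of the distance, area, and shooting-angle weights described
    in this embodiment. All names are illustrative assumptions."""
    # Distance weight: the first difference value between the preset
    # distance and the first distance, divided by the preset distance.
    d1 = math.dist(face_center, frame_center)
    distance_weight = (preset_distance - d1) / preset_distance

    # Area weight: ratio of the face-image area to the frame area.
    area_weight = face_area / frame_area

    # Shooting angle weight: ratio of the third difference value
    # (maximum minus second difference) to the maximum value.
    d2 = math.dist(left_eye, eyes_center)   # second distance
    d3 = math.dist(right_eye, eyes_center)  # third distance
    m = max(d2, d3)
    angle_weight = (m - abs(d2 - d3)) / m
    return distance_weight, area_weight, angle_weight
```

A face centered in the frame and shot head-on would receive a distance weight and shooting angle weight of 1; a turned face (unequal eye distances) or an off-center face would be weighted down.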

In some embodiments, the processor 810 may be further configured to:

determining, in the first image frame, a target face image of a target human object satisfying a second preset condition;

performing image processing on a target face image of the target human object by using the target image processing parameters to obtain a target image frame,

wherein the second preset condition comprises at least one of the following: the area ratio of the face image of the human object in the first image frame is larger than a first preset threshold value, and the distance between the face image of the human object and the edge of the first image frame is larger than a second preset threshold value.

In this way, faces that accidentally enter the first image frame are not subjected to image processing: only the target face images of the target human objects satisfying the second preset condition are processed. This reduces the number of images processed in the first image frame and the risk of image processing distortion, and thus improves the user's satisfaction with the target image frame.
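The filtering step above can be sketched as follows. The function and threshold names are illustrative; the text allows either sub-condition alone ("at least one of"), but this sketch applies both jointly as the stricter case.

```python
def select_target_faces(faces, frame_w, frame_h, area_thresh, edge_thresh):
    """Keep only faces whose area ratio in the frame exceeds
    area_thresh and whose distance to the nearest frame edge exceeds
    edge_thresh. Each face is an (x, y, w, h) bounding box."""
    targets = []
    for (x, y, w, h) in faces:
        # First sub-condition: area ratio above the first threshold.
        area_ratio = (w * h) / (frame_w * frame_h)
        # Second sub-condition: distance to the frame edge above
        # the second threshold.
        edge_dist = min(x, y, frame_w - (x + w), frame_h - (y + h))
        if area_ratio > area_thresh and edge_dist > edge_thresh:
            targets.append((x, y, w, h))
    return targets
```

A small face hugging the border of the frame, typical of a passer-by caught by accident, fails both tests and is left unprocessed.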

In some embodiments, the preset image processing parameters may include a first image processing parameter and a second image processing parameter; processor 810 is further operable to:

calculating the sum of the image processing weights of the first human objects in the first image frame to obtain a first total weight;

calculating the sum of the image processing weights of the second human objects in the first image frame to obtain a second total weight;

calculating an adjustment coefficient based on the first total weight and the second total weight;

calculating a first product of the adjustment coefficient and a first image processing sub-parameter to obtain a first target image processing parameter, where the first image processing sub-parameter is a parameter for performing image processing on first-type facial features; the first-type facial features are facial features that the first gender and the second gender have in common but whose image processing parameter values differ between the two genders;

calculating a second product of the adjustment coefficient and a second image processing sub-parameter to obtain a second target image processing parameter, where the second image processing sub-parameter is a parameter for performing image processing on second-type facial features; the second-type facial features are facial features in which the first gender and the second gender differ;

determining the target image processing parameter based on the first target image processing parameter, the second target image processing parameter, and a third image processing sub-parameter, where the third image processing sub-parameter is a parameter for performing image processing on third-type facial features; the third-type facial features are facial features that the first gender and the second gender have in common and whose image processing parameter values are the same for both genders.

In this way, the adjustment coefficient can be calculated from the ratio of the total image processing weights of the human objects of the two genders in the first image frame, and the image processing parameters can then be adjusted, using the adjustment coefficient together with the gender-specific sub-parameters, to obtain more accurate target image processing parameters. When the face images of human objects of different genders are processed, image processing distortion of the image frame can thus be effectively avoided, improving the user's satisfaction with the target image frame.
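The derivation above can be sketched as follows. The patent does not fix the formula relating the adjustment coefficient to the two total weights; taking the first total weight's share of the overall total is one plausible assumption, and all names here are illustrative.

```python
def target_parameters(first_weights, second_weights,
                      p_shared_diff, p_distinct, p_shared_same):
    """Sketch of deriving target image processing parameters from the
    per-object weights of the two genders. p_shared_diff, p_distinct,
    and p_shared_same stand for the first, second, and third image
    processing sub-parameters respectively."""
    w1 = sum(first_weights)   # first total weight (first gender)
    w2 = sum(second_weights)  # second total weight (second gender)
    coef = w1 / (w1 + w2)     # assumed adjustment-coefficient formula

    # First-type features: shared by both genders, different values,
    # so the sub-parameter is scaled by the adjustment coefficient.
    first_target = coef * p_shared_diff
    # Second-type features: present in only one gender, also scaled.
    second_target = coef * p_distinct
    # Third-type features: shared with identical values, used as-is.
    return {"first": first_target, "second": second_target,
            "third": p_shared_same}
```

With equal total weights for the two genders, the coefficient is 0.5 and the gender-sensitive sub-parameters are halved, while the gender-neutral third sub-parameter passes through unchanged.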

It should be understood that, in the embodiment of the present application, the input unit 804 may include a Graphics Processing Unit (GPU) 8041 and a microphone 8042; the graphics processing unit 8041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The first display module 806 may include a display panel 8061, and the display panel 8061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 807 includes a touch panel 8071 and other input devices 8072. The touch panel 8071, also referred to as a touch screen, may include two portions: a touch detection device and a touch controller. Other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 809 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 810 may integrate an application processor, which mainly handles the operating system, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may not be integrated into the processor 810.

The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.

The processor is the processor in the electronic device in the above embodiment. Readable storage media, including computer-readable storage media, such as Read-Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, etc.

The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the above-mentioned embodiment of the image processing method, and the same technical effect can be achieved.

It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip, etc.

It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may be performed in a substantially simultaneous manner or in a reverse order, e.g., the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.

Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.

While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
