Image processing method and device, electronic device and storage medium

Document No.: 1954557 | Publication date: 2021-12-10

Reading note: This technique, Image processing method and device, electronic device and storage medium, was designed and created by Wang Lei (王磊) on 2021-08-23. Its main content is as follows: The present disclosure relates to an image processing method and apparatus, an electronic device, and a storage medium, wherein the method includes: performing a segmentation operation on an original image to be processed to obtain a foreground image and a background image in the original image; performing first filter processing on the foreground image to obtain a first filter image, and performing second filter processing on the background image to obtain a second filter image, wherein the first filter image and the second filter image have the same filter style type; and performing a fusion operation on the first filter image and the second filter image to obtain a target image. The present disclosure solves the problem of prominent boundary lines in stylized images.

1. An image processing method, comprising:

performing a segmentation operation on an original image to be processed to obtain a foreground image and a background image in the original image;

performing first filter processing on the foreground image to obtain a first filter image, and performing second filter processing on the background image to obtain a second filter image, wherein the first filter image and the second filter image have the same filter style type;

and performing a fusion operation on the first filter image and the second filter image to obtain a target image.

2. The method of claim 1, wherein said performing a fusion operation on said first filter image and said second filter image to obtain a target image comprises:

adjusting gradient values corresponding to a first boundary in a first gradient field of the first filter image so that the first filter image becomes a third filter image, and adjusting gradient values corresponding to a second boundary in a second gradient field of the second filter image so that the second filter image becomes a fourth filter image, wherein the first boundary in the foreground image is connected with the second boundary in the background image, and a difference value between a gradient value corresponding to the first boundary in a gradient field of the third filter image and a gradient value corresponding to the second boundary in a gradient field of the fourth filter image is smaller than a predetermined threshold value;

and performing fusion operation on the third filter image and the fourth filter image to obtain the target image, wherein the first boundary in the third filter image is connected with the second boundary in the fourth filter image.

3. The method of claim 1, wherein the performing a first filter process on the foreground map to obtain a first filter image comprises:

searching a preset relation table for a converted pixel value corresponding to a pixel value of a pixel point in the foreground image, wherein the preset relation table records a plurality of groups of pixel values before conversion and pixel values after conversion which have corresponding relations, and the pixel values after conversion in the preset relation table are pixel values under the filter style type;

and replacing the pixel value of each pixel point in the foreground image with the corresponding converted pixel value to obtain the first filter image.

4. The method of claim 3,

the searching for the converted pixel value corresponding to the pixel value of the pixel point in the foreground image in the preset relation table includes: determining at least two local regions in the foreground map; determining pre-conversion pixel values for each of the at least two local regions; searching a converted pixel value corresponding to the pixel value before conversion of each local area in the preset relation table;

the replacing the pixel value of each pixel point in the foreground image with the corresponding converted pixel value to obtain the first filter image includes: and replacing the pixel value of each pixel point in each local area of the at least two local areas with the corresponding converted pixel value to obtain the first filter image.

5. The method of claim 1, wherein the performing a second filter process on the background map to obtain a second filter image comprises:

inputting the background image into a trained neural network model to obtain an adjusted image, wherein the neural network model is used for adjusting the histogram distribution of the background image;

when the brightness of the adjusted image is within a preset brightness range, determining the adjusted image as the second filter image;

and when the brightness of the adjusted image is not within the preset brightness range, performing brightness correction operation on the adjusted image to obtain the second filter image.

6. The method of claim 5, wherein performing a brightness correction operation on the adjusted image to obtain the second filter image comprises:

increasing a brightness value of a first pixel in the adjusted image by a first adjustment value, wherein the first pixel is a pixel of which the brightness value is smaller than or equal to a first threshold value in the adjusted image;

and reducing the brightness value of a second pixel in the adjusted image by a second adjustment value, wherein the second pixel is a pixel of which the brightness value is greater than or equal to a second threshold value in the adjusted image.

7. The method of claim 2, wherein the adjusting the gradient value corresponding to the first boundary in the first gradient field of the first filter image so that the first filter image becomes a third filter image and the adjusting the gradient value corresponding to the second boundary in the second gradient field of the second filter image so that the second filter image becomes a fourth filter image comprises:

in a case where a difference between a gradient value corresponding to the first boundary in the first gradient field and a gradient value corresponding to the second boundary in the second gradient field is greater than or equal to the predetermined threshold value, a gradient value corresponding to the first boundary in the first gradient field of the first filter image is adjusted so that the first filter image becomes the third filter image, and a gradient value corresponding to the second boundary in the second gradient field of the second filter image is adjusted so that the second filter image becomes the fourth filter image.

8. An image processing apparatus characterized by comprising:

a segmentation unit configured to perform a segmentation operation on an original image to be processed to obtain a foreground image and a background image in the original image;

a processing unit configured to perform first filter processing on the foreground image to obtain a first filter image, and to perform second filter processing on the background image to obtain a second filter image, wherein the first filter image and the second filter image have the same filter style type;

and a fusion unit configured to perform a fusion operation on the first filter image and the second filter image to obtain a target image.

9. An electronic device, comprising:

a processor;

a memory for storing the processor-executable instructions;

wherein the processor is configured to execute the instructions to implement the image processing method of any one of claims 1 to 7.

10. A computer-readable storage medium whose instructions, when executed by a processor of an electronic device, enable the electronic device to perform the image processing method of any one of claims 1 to 7.

Technical Field

The present disclosure relates to the field of computers, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.

Background

Among the special-effect features of mobile terminals, image stylization is widely favored by users. For example, adding a corresponding filter to an image in an application such as a photo-editing or short-video application stylizes the image.

In the prior art, an image is generally stylized by applying the same processing uniformly to the whole image. However, the brightness values of different pixels in the image differ, for example between the foreground and the background. Uniformly stylizing the whole image can therefore make the boundary line between a person and the background more prominent, and the stylized result often fails to meet users' expectations.

Therefore, the related art offers no effective solution to the problem of prominent boundary lines in stylized images.

Summary

The present disclosure provides an image processing method and apparatus, an electronic device, and a storage medium, to at least solve the problem in the related art that a boundary line in a stylized image is prominent. The technical scheme of the disclosure is as follows:

according to a first aspect of embodiments of the present disclosure, there is provided an image processing method, including: performing a segmentation operation on an original image to be processed to obtain a foreground image and a background image in the original image; performing first filter processing on the foreground image to obtain a first filter image, and performing second filter processing on the background image to obtain a second filter image, wherein the first filter image and the second filter image have the same filter style type; and performing a fusion operation on the first filter image and the second filter image to obtain a target image.

Optionally, the performing a fusion operation on the first filter image and the second filter image to obtain a target image includes: adjusting gradient values corresponding to a first boundary in a first gradient field of the first filter image so that the first filter image becomes a third filter image, and adjusting gradient values corresponding to a second boundary in a second gradient field of the second filter image so that the second filter image becomes a fourth filter image, wherein the first boundary in the foreground image is connected with the second boundary in the background image, and a difference value between a gradient value corresponding to the first boundary in a gradient field of the third filter image and a gradient value corresponding to the second boundary in a gradient field of the fourth filter image is smaller than a predetermined threshold value; and performing fusion operation on the third filter image and the fourth filter image to obtain the target image, wherein the first boundary in the third filter image is connected with the second boundary in the fourth filter image.

Optionally, the performing a first filter process on the foreground image to obtain a first filter image includes: searching a preset relation table for a converted pixel value corresponding to a pixel value of a pixel point in the foreground image, wherein the preset relation table records a plurality of groups of pixel values before conversion and pixel values after conversion which have corresponding relations, and the pixel values after conversion in the preset relation table are pixel values under the filter style type; and replacing the pixel value of each pixel point in the foreground image with the corresponding converted pixel value to obtain the first filter image.

Optionally, the searching for the converted pixel value corresponding to the pixel value of the pixel point in the foreground map in the preset relationship table includes: determining at least two local regions in the foreground map; determining pre-conversion pixel values for each of the at least two local regions; searching a converted pixel value corresponding to the pixel value before conversion of each local area in the preset relation table; the replacing the pixel value of each pixel point in the foreground image with the corresponding converted pixel value to obtain the first filter image includes: and replacing the pixel value of each pixel point in each local area of the at least two local areas with the corresponding converted pixel value to obtain the first filter image.

Optionally, the determining the pre-conversion pixel value of each of the at least two local regions comprises: determining the mean value of the pixel values of all the pixel points in each local area as the pixel value of each local area before conversion; or determining the pixel value of a randomly selected pixel point in each local area as the pixel value of each local area before conversion; or determining the average value of the pixel values of a plurality of randomly selected pixel points in each local area as the pixel value before conversion of each local area.

Optionally, the performing a second filter process on the background image to obtain a second filter image includes: inputting the background image into a trained neural network model to obtain an adjusted image, wherein the neural network model is used for adjusting the histogram distribution of the background image; when the brightness of the adjusted image is within a preset brightness range, determining the adjusted image as the second filter image; and when the brightness of the adjusted image is not within the preset brightness range, performing brightness correction operation on the adjusted image to obtain the second filter image.
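The disclosure adjusts the background's histogram distribution with a trained neural network; as a minimal classical stand-in for that learned adjustment (not the disclosed model), the sketch below applies plain histogram equalization to an 8-bit grayscale image:

```python
import numpy as np

def equalize_histogram(gray_img):
    """Histogram equalization as an illustrative stand-in for the
    disclosure's learned histogram adjustment. Expects an 8-bit
    grayscale image; returns an image whose intensity CDF is
    approximately linear."""
    hist = np.bincount(gray_img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # first non-zero CDF value
    scale = max(gray_img.size - cdf_min, 1)
    lut = np.clip(np.round((cdf - cdf_min) / scale * 255), 0, 255).astype(np.uint8)
    return lut[gray_img]
```

The result would then be brightness-checked as described above before being accepted as the second filter image.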

Optionally, the performing a brightness correction operation on the adjusted image to obtain the second filter image includes: increasing a brightness value of a first pixel in the adjusted image by a first adjustment value, wherein the first pixel is a pixel of which the brightness value is smaller than or equal to a first threshold value in the adjusted image; and reducing the brightness value of a second pixel in the adjusted image by a second adjustment value, wherein the second pixel is a pixel of which the brightness value is greater than or equal to a second threshold value in the adjusted image.
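The brightness correction above can be sketched as follows; the threshold and adjustment values are illustrative assumptions, not taken from the disclosure:

```python
import numpy as np

def correct_brightness(img, low_thresh=40, high_thresh=220, lift=15, drop=15):
    """Raise pixels at or below `low_thresh` by `lift` (the first
    adjustment value) and lower pixels at or above `high_thresh` by
    `drop` (the second adjustment value). Parameter values are
    hypothetical."""
    out = img.astype(np.int16)          # avoid uint8 wrap-around
    out[img <= low_thresh] += lift
    out[img >= high_thresh] -= drop
    return np.clip(out, 0, 255).astype(np.uint8)
```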

Optionally, the adjusting the gradient value corresponding to the first boundary in the first gradient field of the first filter image so that the first filter image becomes a third filter image and the adjusting the gradient value corresponding to the second boundary in the second gradient field of the second filter image so that the second filter image becomes a fourth filter image includes: in a case where a difference between a gradient value corresponding to the first boundary in the first gradient field and a gradient value corresponding to the second boundary in the second gradient field is greater than or equal to the predetermined threshold value, a gradient value corresponding to the first boundary in the first gradient field of the first filter image is adjusted so that the first filter image becomes the third filter image, and a gradient value corresponding to the second boundary in the second gradient field of the second filter image is adjusted so that the second filter image becomes the fourth filter image.

Optionally, adjusting a gradient value corresponding to the first boundary in the first gradient field of the first filter image and adjusting a gradient value corresponding to the second boundary in the second gradient field of the second filter image comprises: determining the mean value of the gradient values corresponding to the first boundary and the second boundary to obtain an average gradient value; and adjusting the gradient value corresponding to the first boundary and the gradient value corresponding to the second boundary to be the average gradient value.

According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus including: a segmentation unit configured to perform a segmentation operation on an original image to be processed to obtain a foreground image and a background image in the original image; a processing unit configured to perform first filter processing on the foreground image to obtain a first filter image, and to perform second filter processing on the background image to obtain a second filter image, wherein the first filter image and the second filter image have the same filter style type; and a fusion unit configured to perform a fusion operation on the first filter image and the second filter image to obtain a target image.


Optionally, the above apparatus is further configured to perform adjusting a gradient value corresponding to a first boundary in a first gradient field of the first filter image so that the first filter image becomes a third filter image, and adjusting a gradient value corresponding to a second boundary in a second gradient field of the second filter image so that the second filter image becomes a fourth filter image, wherein the first boundary in the foreground image is connected to the second boundary in the background image, a difference value between the gradient value corresponding to the first boundary in the gradient field of the third filter image and the gradient value corresponding to the second boundary in the gradient field of the fourth filter image is smaller than a predetermined threshold; and performing fusion operation on the third filter image and the fourth filter image to obtain the target image, wherein the first boundary in the third filter image is connected with the second boundary in the fourth filter image.

Optionally, the apparatus is further configured to perform searching for a converted pixel value corresponding to a pixel value of a pixel point in the foreground image in a preset relationship table, where multiple sets of pre-conversion pixel values and converted pixel values having a corresponding relationship are recorded in the preset relationship table, and the converted pixel value in the preset relationship table is a pixel value in the filter style type; and replacing the pixel value of each pixel point in the foreground image with the corresponding converted pixel value to obtain the first filter image.

Optionally, the apparatus is further configured to perform determining at least two local regions in the foreground map; determining pre-conversion pixel values for each of the at least two local regions; searching a converted pixel value corresponding to the pixel value before conversion of each local area in the preset relation table; and replacing the pixel value of each pixel point in each local area of the at least two local areas with the corresponding converted pixel value to obtain the first filter image.

Optionally, the apparatus is further configured to determine an average value of pixel values of respective pixel points in each local region as a pre-conversion pixel value of each local region; determining the pixel value of a randomly selected pixel point in each local area as the pixel value of each local area before conversion; and determining the average value of the pixel values of a plurality of randomly selected pixel points in each local area as the pixel value before conversion of each local area.

Optionally, the apparatus is further configured to perform inputting the background map into a trained neural network model to obtain an adjusted image, where the neural network model is used to adjust histogram distribution of the background map; when the brightness of the adjusted image is within a preset brightness range, determining the adjusted image as the second filter image; and when the brightness of the adjusted image is not within the preset brightness range, performing brightness correction operation on the adjusted image to obtain the second filter image.

Optionally, the apparatus is further configured to perform increasing a brightness value of a first pixel in the adjusted image by a first adjustment value, where the first pixel is a pixel in the adjusted image whose brightness value is less than or equal to a first threshold; and reducing the brightness value of a second pixel in the adjusted image by a second adjustment value, wherein the second pixel is a pixel of which the brightness value is greater than or equal to a second threshold value in the adjusted image.

Optionally, the above apparatus is further configured to perform, in a case where a difference between a gradient value corresponding to the first boundary in the first gradient field and a gradient value corresponding to the second boundary in the second gradient field is greater than or equal to the predetermined threshold, adjusting a gradient value corresponding to the first boundary in the first gradient field of the first filter image so that the first filter image becomes the third filter image, and adjusting a gradient value corresponding to the second boundary in the second gradient field of the second filter image so that the second filter image becomes the fourth filter image.

Optionally, the apparatus is further configured to perform determining an average of the gradient values corresponding to the first boundary and the second boundary, to obtain an average gradient value; and adjusting the gradient value corresponding to the first boundary and the gradient value corresponding to the second boundary to be the average gradient value.

According to a third aspect of the embodiments of the present disclosure, there is provided an electronic device for image processing, including: a processor; a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the image processing method as described above.

According to still another aspect of the embodiments of the present disclosure, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is configured to execute the above-mentioned image processing method when running.

The technical scheme provided by the embodiments of the present disclosure brings at least the following beneficial effects: segmenting the image into foreground and background and stylizing each separately improves the stylization effect of both; and adjusting the gradient values along the boundary line between the foreground and the background avoids the problem of a prominent boundary line.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.

FIG. 1 is a flow diagram illustrating an image processing method according to an exemplary embodiment;

FIG. 2 is a schematic diagram illustrating image segmentation in accordance with an exemplary embodiment;

FIG. 3 is a diagram illustrating a CCNet model architecture in accordance with an exemplary embodiment;

FIG. 4 is an overall flow diagram shown in accordance with an exemplary embodiment;

FIG. 5 is a diagram illustrating a Mask R-CNN framework in accordance with an exemplary embodiment;

FIG. 6 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment;

FIG. 7 is a block diagram illustrating an electronic device in accordance with an example embodiment.

Detailed Description

In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.

It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.

As an alternative implementation, fig. 1 is a flowchart illustrating an image processing method according to an exemplary embodiment, as shown in fig. 1, the image processing method includes the following steps.

In step S11, a segmentation operation is performed on an original image to be processed to obtain a foreground image and a background image in the original image;

The original image to be processed may be an image captured by a camera of a mobile terminal, or an image captured in an application program, such as a video frame captured in a short-video application. The original image comprises a foreground image and a background image, and a first boundary in the foreground image is connected with a second boundary in the background image. For example, when a person is photographed, the person may form the foreground image and the rest of the image the background image. Fig. 2 is a schematic diagram of image segmentation according to an exemplary embodiment; an image segmentation algorithm may be used to segment the original image into the foreground image and the background image. The first boundary of the foreground image and the second boundary of the background image are connected in the original image. In this embodiment, performing the segmentation operation on the original image yields its foreground and background images, so that the two can be stylized separately; this produces a better stylized effect and better meets user requirements.
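Assuming a binary segmentation mask is already available from a segmentation model (the mask itself is an assumed input here, not produced by this sketch), splitting the original image into foreground and background maps might look like:

```python
import numpy as np

def split_foreground_background(image, mask):
    """Split an H x W x 3 image into foreground and background maps
    using a binary H x W mask (1 = foreground). Pixels outside each
    map are zeroed; the mask would come from a segmentation network."""
    m = mask.astype(bool)[..., None]    # broadcast over channels
    foreground = np.where(m, image, 0)
    background = np.where(m, 0, image)
    return foreground, background
```

The two maps partition the image, so adding them back together reconstructs the original.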

In step S12, performing a first filter process on the foreground image to obtain a first filter image, and performing a second filter process on the background image to obtain a second filter image, where the first filter image and the second filter image have the same filter style type;

The filter processing applied to the foreground image and to the background image may be the same or different. For example, both may be stylized using a look-up table (LUT), or the foreground image may be stylized using an LUT while the background image is stylized using a histogram network. The filter style added to the foreground image and the background image is the same, for example a cartoon-style filter. In this embodiment, filters of the same style are added to the foreground image and the background image; because the filter processing is performed separately on each, the added filter fits the foreground and the background more closely, so that the stylized image looks more natural and meets user requirements.

In step S13, a fusion operation is performed on the first filter image and the second filter image to obtain a target image.

The fusion operation may fuse the stylized foreground image and the stylized background image, that is, the third filter image and the fourth filter image. Specifically, the first boundary in the third filter image and the second boundary in the fourth filter image may be stitched together to obtain the target image, the stylized version of the whole original image. Because the foreground image and the background image of the original image are stylized separately, the filter effect fits each more closely, the stylization effect is better, and the stitched target image looks more natural and meets user requirements.

Optionally, a gradient value corresponding to a first boundary in a first gradient field of the first filter image is adjusted so that the first filter image becomes a third filter image, and a gradient value corresponding to a second boundary in a second gradient field of the second filter image is adjusted so that the second filter image becomes a fourth filter image, wherein the first boundary in the foreground image is connected to the second boundary in the background image, and a difference value between the gradient value corresponding to the first boundary in the gradient field of the third filter image and the gradient value corresponding to the second boundary in the gradient field of the fourth filter image is smaller than a predetermined threshold value; and performing fusion operation on the third filter image and the fourth filter image to obtain the target image, wherein the first boundary in the third filter image is connected with the second boundary in the fourth filter image.

As an alternative embodiment, in order to prevent the boundary between the foreground image and the background image from being highlighted, the gradient values at the boundaries of the stylized first filter image and second filter image may be adjusted. Specifically, the gradient value corresponding to the first boundary in the first filter image and the gradient value corresponding to the second boundary in the second filter image may be calculated. If the difference between the two gradient values is less than the predetermined threshold, no adjustment is necessary. If the difference is greater than or equal to the predetermined threshold, the gradient value corresponding to the first boundary in the first filter image is adjusted to obtain a third filter image, and the gradient value corresponding to the second boundary in the second filter image is adjusted to obtain a fourth filter image. The size of the predetermined threshold may be determined according to actual circumstances. In this embodiment, by adjusting the boundary gradient values of the stylized foreground and background images, the boundary line at the stitching position blends more naturally with both, the problem of a prominent boundary line in the stylized image is avoided, and the stylization effect is better.
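Combining this with the averaging described later in the summary (replacing both boundary gradients with their mean), a sketch of the harmonization might be; the threshold value and the 1-D representation of the boundary gradients are simplifying assumptions:

```python
import numpy as np

def harmonize_boundary_gradients(fg_grad, bg_grad, threshold=10.0):
    """Where the foreground-side and background-side boundary gradients
    differ by at least `threshold`, replace both with their mean, so
    the post-adjustment difference at those positions is zero. Where
    the difference is already below the threshold, leave them as-is."""
    fg_grad = np.asarray(fg_grad, dtype=float)
    bg_grad = np.asarray(bg_grad, dtype=float)
    avg = (fg_grad + bg_grad) / 2.0
    adjust = np.abs(fg_grad - bg_grad) >= threshold
    return np.where(adjust, avg, fg_grad), np.where(adjust, avg, bg_grad)
```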

Optionally, the performing a first filter process on the foreground image to obtain a first filter image includes: searching a preset relation table for a converted pixel value corresponding to a pixel value of a pixel point in the foreground image, wherein the preset relation table records a plurality of groups of pixel values before conversion and pixel values after conversion which have corresponding relations, and the pixel values after conversion in the preset relation table are pixel values under the filter style type; and replacing the pixel value of each pixel point in the foreground image with the corresponding converted pixel value to obtain the first filter image.

As an optional embodiment, the foreground map may be stylized by a table look-up method. The preset relationship table may be a preset LUT (look-up table) in which the correspondence between pixel values before stylization and stylized pixel values under a certain style of filter is recorded. The stylized pixel value corresponding to the pixel value of each pixel point in the foreground image is looked up in the LUT, and the pixel value in the foreground image is replaced with the corresponding stylized pixel value, so as to obtain the first filter image, i.e., the stylized foreground image. In this embodiment, the foreground image is stylized separately, and the stylized pixel values corresponding to the foreground image are looked up in the preset relationship table, which can improve both the stylization effect of the foreground image and the stylization efficiency of the image.
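A minimal sketch of the table look-up, assuming a 256-entry LUT applied per pixel value (real filter LUTs are often 3D color cubes); the example LUTs are illustrative, not taken from the patent:

```python
import numpy as np

def apply_lut(foreground, lut):
    """Replace each pixel value with its stylized counterpart from `lut`.

    `lut` is a 256-entry array mapping pre-conversion values (0-255)
    to converted values under the chosen filter style.
    """
    lut = np.asarray(lut, dtype=np.uint8)
    return lut[foreground]  # NumPy integer-array indexing does the table lookup

# Illustrative LUTs: identity leaves the image unchanged,
# while the reversed table produces a negative-style filter.
identity = np.arange(256, dtype=np.uint8)
negative = identity[::-1]
```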

Optionally, the searching for the converted pixel value corresponding to the pixel value of the pixel point in the foreground map in the preset relationship table includes: determining at least two local regions in the foreground map; determining pre-conversion pixel values for each of the at least two local regions; searching a converted pixel value corresponding to the pixel value before conversion of each local area in the preset relation table; the replacing the pixel value of each pixel point in the foreground image with the corresponding converted pixel value to obtain the first filter image includes: and replacing the pixel value of each pixel point in each local area of the at least two local areas with the corresponding converted pixel value to obtain the first filter image.

As an alternative implementation, the foreground image may be segmented to obtain a plurality of local regions. Taking the foreground image being a human image as an example, the human body can be segmented. In particular, the human body may be segmented using a neural network model, which may be CCNet. The neural network model may be trained with a large amount of training data covering the various parts of the human body. The trained neural network model can divide the human body into different parts such as arms, hair, face, and legs, each part of the human body corresponding to a local region. Fig. 3 is a schematic diagram of a CCNet model structure according to an exemplary embodiment. The foreground image is input to the neural network model, which performs feature extraction and data processing on the foreground image to obtain a segmented image containing at least two local regions. The corresponding stylized pixel values may be looked up in the LUT table according to the image pixel values of each local region, so that each local region can be stylized; the pixel values of all local regions are replaced with the stylized pixel values, yielding the first filter image in which the foreground map is stylized. In this embodiment, by dividing the foreground image into different local regions and stylizing them separately, each part of the stylized image fits the original image more closely, giving a more realistic result and a better effect.

Optionally, the determining the pre-conversion pixel value of each of the at least two local regions comprises: determining the mean value of the pixel values of all the pixel points in each local area as the pixel value of each local area before conversion; or determining the pixel value of a randomly selected pixel point in each local area as the pixel value of each local area before conversion; or determining the average value of the pixel values of a plurality of randomly selected pixel points in each local area as the pixel value before conversion of each local area.

As an optional implementation, the whole pixels in each local region may be stylized, the pixel value mean of each pixel point in each local region is taken, the stylized pixel value corresponding to the pixel value mean is searched in the LUT, the whole pixel point of each local image is replaced by the stylized pixel value searched in the LUT, and the stylized image of the local region is obtained. Taking the local area as the face of the human body as an example, the stylized image of the face area can be obtained by averaging each pixel value of the face area, looking up the stylized pixel value corresponding to the average value in the LUT table, and replacing each pixel value of the face area with the stylized pixel value. In this embodiment, by taking the mean value of the pixel values of the local area and looking up the stylized pixel values corresponding to the mean value in the LUT, the local areas in the image can be stylized uniformly, thereby improving the efficiency of stylizing the image.

As an optional implementation, a pixel point may also be randomly selected in each local area, the stylized pixel value corresponding to the pixel value of that randomly selected pixel point is looked up in the LUT, and all pixel points of the local image are replaced with that stylized pixel value, so as to obtain the stylized image of the local area. Taking the local area being an arm of the human body as an example, a pixel point can be randomly selected in the arm area of the foreground image, the stylized pixel value corresponding to the pixel value of that point is looked up in the LUT, and each pixel value of the arm area is replaced with that stylized pixel value, so that the stylized image of the arm area is obtained. In this embodiment, by randomly selecting a pixel value in the local area and looking up the corresponding stylized pixel value in the LUT, the local areas in the image can be stylized uniformly, improving image stylization efficiency.

As an optional implementation, several pixel points may also be randomly selected in each local area, the pixel values of these selected pixel points are averaged, the stylized pixel value corresponding to the average value is looked up in the LUT, and all pixel points of the local image are replaced with that stylized pixel value, so as to obtain the stylized image of the local area. Taking the local area being a leg of the human body as an example, several pixel points can be randomly selected in the leg area of the foreground image, the stylized pixel value corresponding to their average pixel value is looked up in the LUT, and each pixel value of the leg area is replaced with that stylized pixel value, so that the stylized image of the leg area is obtained. In this embodiment, the local areas in the foreground image can be stylized as a whole, which improves the stylization efficiency of the foreground image.
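The three ways of choosing a region's pre-conversion pixel value described above can be sketched together. The function names and the sample count `k` are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def region_value(pixels, strategy="mean", k=5):
    """Pick the region's pre-conversion pixel value.

    pixels: 1-D array of the region's pixel values.
    strategy: "mean"        - average of all pixel points,
              "random"      - one randomly selected pixel point,
              "random_mean" - average of k randomly selected pixel points.
    """
    if strategy == "mean":
        return int(pixels.mean())
    if strategy == "random":
        return int(rng.choice(pixels))
    if strategy == "random_mean":
        return int(rng.choice(pixels, size=k).mean())
    raise ValueError(strategy)

def stylize_region(region, lut, strategy="mean"):
    """Replace every pixel of the region with one looked-up stylized value."""
    value = region_value(region.ravel(), strategy)
    return np.full_like(region, lut[value])
```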

Optionally, the performing a second filter process on the background image to obtain a second filter image includes: inputting the background image into a trained neural network model to obtain an adjusted image, wherein the neural network model is used for adjusting the histogram distribution of the background image; when the brightness of the adjusted image is within a preset brightness range, determining the adjusted image as the second filter image; and when the brightness of the adjusted image is not within the preset brightness range, performing brightness correction operation on the adjusted image to obtain the second filter image.

As an alternative embodiment, the neural network model may be a network model obtained through machine learning training, i.e., a histogram network that adjusts the histogram distribution of the original background image. The background image of the original image is input to the histogram network, which outputs the adjusted background image. Whether a preset condition is met can then be judged according to the brightness of the adjusted background image output by the histogram network; the preset condition may be that the brightness falls within a preset brightness range. When the brightness of the adjusted image output by the histogram network meets the preset condition, the image output by the histogram network is the second filter image, i.e., the stylized background image. When the brightness of the adjusted image output by the histogram network does not meet the preset condition, the brightness of the adjusted image can be corrected: the brightness of darker pixels can be increased, and the brightness of brighter pixels can be reduced. In this embodiment, the background map of the original image can be stylized through the histogram network, and the stylization effect of the background map can be further improved by adjusting the image output by the histogram network.

Optionally, the performing a brightness correction operation on the adjusted image to obtain the second filter image includes: increasing a brightness value of a first pixel in the adjusted image by a first adjustment value, wherein the first pixel is a pixel of which the brightness value is smaller than or equal to a first threshold value in the adjusted image; and reducing the brightness value of a second pixel in the adjusted image by a second adjustment value, wherein the second pixel is a pixel of which the brightness value is greater than or equal to a second threshold value in the adjusted image.

As an alternative embodiment, the first threshold and the second threshold may be determined according to actual conditions; the first pixel may be a pixel with lower brightness in the adjusted image output by the histogram network, and the second pixel may be a pixel with higher brightness in that image. In this embodiment, the pixels with lower brightness in the adjusted image output by the histogram network may be brightened, and the brightness of the pixels with higher brightness may be reduced. The histogram of the adjusted image can be divided into five levels (bright, highlight, normal, dark, black), where the bright range is 200-255 and the black range is 0-49. A picture of a certain filter style is captured in advance, and histogram curves of its dark and bright portions are fitted; these curves then serve as the judgment criterion for evaluating the histogram directly output by the network. The pixel portion with lower luminance is brightened, and the bright (overexposed) portion is lowered in luminance.

The pixel values calculated for the black range (0-49) in the histogram can be adjusted by the following equation:

The pixel values calculated for the bright range (200-255) in the histogram can be adjusted by the following equation:

where input is the pixel value in the corresponding range in the adjusted image, and output is the adjusted pixel value. In this embodiment, by adjusting the brightness of the stylized image output by the histogram network, over-darkness and over-exposure in the stylized image can be prevented, and the quality of the stylized image is improved.
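Since the correction equations themselves are not reproduced above, the following is only a plausible sketch of a piecewise correction over the black (0-49) and bright (200-255) ranges. The linear-blend form and the `gain` value are assumptions, not the patent's actual equations:

```python
import numpy as np

def correct_brightness(image, dark_max=49, bright_min=200, gain=0.3):
    """Brighten the black range (0-49) and darken the bright range (200-255).

    Assumed form: blend each out-of-range pixel toward the edge of its range
    by a factor `gain`; pixels in the normal range are left unchanged.
    """
    img = image.astype(np.float64)
    out = img.copy()
    dark = img <= dark_max
    bright = img >= bright_min
    # Pull dark pixels up toward the top of the black range ...
    out[dark] = img[dark] + gain * (dark_max - img[dark])
    # ... and bright (overexposed) pixels down toward the bottom of the bright range.
    out[bright] = img[bright] - gain * (img[bright] - bright_min)
    return np.clip(out, 0, 255).astype(np.uint8)
```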

Optionally, the adjusting the gradient value corresponding to the first boundary in the first gradient field of the first filter image so that the first filter image becomes a third filter image and the adjusting the gradient value corresponding to the second boundary in the second gradient field of the second filter image so that the second filter image becomes a fourth filter image includes: in a case where a difference between a gradient value corresponding to the first boundary in the first gradient field and a gradient value corresponding to the second boundary in the second gradient field is greater than or equal to the predetermined threshold value, a gradient value corresponding to the first boundary in the first gradient field of the first filter image is adjusted so that the first filter image becomes the third filter image, and a gradient value corresponding to the second boundary in the second gradient field of the second filter image is adjusted so that the second filter image becomes the fourth filter image.

As an alternative embodiment, the first filter image and the second filter image obtained by stylizing the foreground image and the background image may be fused by Poisson fusion. Since there may be a large difference between the pixels at the boundary of the stylized first filter image and those at the boundary of the second filter image, directly stitching them may produce a prominent boundary. The gradient values of the first filter image and the second filter image can be calculated respectively, and the gradient values of the first boundary and the second boundary are adjusted according to the first boundary gradient value of the first filter image and the second boundary gradient value of the second filter image, so as to avoid a prominent boundary. Specifically, if the difference between the gradient value of the first boundary of the first filter image and the gradient value of the second boundary of the second filter image is large, the two gradient values may be adjusted to reduce their difference, obtaining a third filter image and a fourth filter image; the third filter image and the fourth filter image are then spliced to obtain the stylized image. In this embodiment, by adjusting the boundary gradient values of the first filter image and the second filter image, a prominent boundary can be avoided and the stylization effect is improved.

Optionally, adjusting a gradient value corresponding to the first boundary in the first gradient field of the first filter image and adjusting a gradient value corresponding to the second boundary in the second gradient field of the second filter image comprises: determining the mean value of the gradient values corresponding to the first boundary and the second boundary to obtain an average gradient value; and adjusting the gradient value corresponding to the first boundary and the gradient value corresponding to the second boundary to be the average gradient value.

As an alternative embodiment, the average of the gradient value of the first boundary in the first filter image and the gradient value of the second boundary in the second filter image can be taken to obtain an average gradient value. The gradient values of the first boundary of the first filter image and of the second boundary of the second filter image are both adjusted to this average gradient value. The pixel values of the first boundary and the second boundary can then be obtained from the average gradient value, yielding the third filter image (the stylized foreground image) and the fourth filter image (the stylized background image). The third filter image and the fourth filter image are fused to obtain the stylized image of the original image. In this embodiment, taking the mean of the gradient values at the boundaries of the stylized foreground image and background image avoids prominent boundaries in the stylized image and improves its effect.
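The gradient-averaging step can be sketched as follows; recovering pixel values from the adjusted gradient fields via the Poisson equation is a separate step not shown here, and the mask-based representation of the boundaries is an assumption:

```python
import numpy as np

def average_boundary_gradients(fg_grad, bg_grad, fg_boundary, bg_boundary):
    """Set both boundaries' gradient values to their common mean.

    fg_grad / bg_grad: gradient fields of the first and second filter images;
    fg_boundary / bg_boundary: boolean masks marking the seam pixels.
    Returns the adjusted gradient fields for the third and fourth filter images.
    """
    mean_grad = (fg_grad[fg_boundary].mean() + bg_grad[bg_boundary].mean()) / 2.0
    fg_out, bg_out = fg_grad.copy(), bg_grad.copy()
    fg_out[fg_boundary] = mean_grad  # first boundary -> average gradient value
    bg_out[bg_boundary] = mean_grad  # second boundary -> average gradient value
    return fg_out, bg_out
```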

The present application is explained below through a specific embodiment. Fig. 4 is a schematic overall flowchart according to an exemplary embodiment, wherein the method includes the following steps:

Step S41: separate the foreground and background of the original image into a foreground image and a background image. Specifically, a neural network model can be used to perform foreground-background separation on the original image, for example based on Mask R-CNN. FIG. 5 is a Mask R-CNN framework diagram according to an exemplary embodiment, where FPN is a Feature Pyramid Network, RPN is a Region Proposal Network, and RoI is a Region of Interest. The original image is input to the Mask R-CNN network model, and the Mask R-CNN network model segments the original image to obtain the foreground image and the background image.

Step S42: stylize the foreground. The foreground map may first be segmented into different regions, and the different regions are stylized separately. Region segmentation can be based on CCNet. For example, the foreground image containing the human body can be input to CCNet, which distinguishes the information of each part of the human body (arm, hair, face, upper body, and legs). The corresponding stylized pixel values are then found in the LUT look-up table according to the pixel values of the different areas, and the pixel values of the foreground image are replaced with the stylized pixel values to obtain the stylized foreground image.

Step S43: stylize the background. The segmented background image passes through a pre-trained histogram network, which adjusts the histogram distribution of the original background image to obtain an adjusted background image. The histogram network is obtained by repeated training with pictures of different scenes as training data. The picture corrected by the histogram network then undergoes a histogram check and correction. The histogram of the adjusted background image can be divided into five levels (bright, highlight, normal, dark, black), where the bright range is 200-255 and the black range is 0-49. A picture of the target filter style is captured in advance, and histogram curves of its dark and bright portions are fitted; these then serve as the judgment criterion for evaluating the histogram directly output by the network. The adjustment mainly brightens the black portions and lowers the brightness of the bright (overexposed) portions.

Step S44: fuse the stylized foreground and background. The gradient values of the stylized foreground map and of the stylized background map are calculated. The gradient values at the boundaries of the stylized foreground image and background image are compared, and if the difference between the boundary gradient values is large, a gradient mean value is calculated. The boundary gradient values of the stylized foreground image and background image are replaced with the gradient mean value, and the pixel values corresponding to the gradient mean value are obtained through the Poisson equation, thereby obtaining the pixel values of the boundaries of the stylized foreground image and background image. The stylized foreground image and background image are spliced to obtain the stylized image of the original image.
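The four steps above can be sketched as a pipeline skeleton. The injected callables are placeholders for the Mask R-CNN segmentation, the LUT-based foreground filter, the histogram-network background filter, and the Poisson-style fusion described above; their signatures are assumptions for illustration:

```python
import numpy as np

def stylize(original, segment, stylize_fg, stylize_bg, fuse):
    """Pipeline skeleton for steps S41-S44: segment, stylize each part, fuse.

    segment:    original image -> (foreground, background, mask)
    stylize_fg: foreground -> first filter image
    stylize_bg: background -> second filter image
    fuse:       (first, second, mask) -> target image
    """
    foreground, background, mask = segment(original)   # S41
    first_filter = stylize_fg(foreground)              # S42
    second_filter = stylize_bg(background)             # S43
    return fuse(first_filter, second_filter, mask)     # S44
```

Keeping the stages as injected callables makes it easy to swap in a different segmentation model or filter implementation without touching the overall flow.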

Fig. 6 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment. Referring to fig. 6, the apparatus includes an execution unit 61, a processing unit 62, and a fusion unit 63.

The execution unit 61 is configured to execute a segmentation operation on an original image to be processed, so as to obtain a foreground image and a background image in the original image;

the processing unit 62 is configured to perform a first filter process on the foreground image to obtain a first filter image, and perform a second filter process on the background image to obtain a second filter image, wherein the first filter image and the second filter image have the same filter style type;

the fusion unit 63 is configured to perform a fusion operation on the first filter image and the second filter image to obtain a target image.

Optionally, the above apparatus is further configured to perform adjusting a gradient value corresponding to a first boundary in a first gradient field of the first filter image so that the first filter image becomes a third filter image, and adjusting a gradient value corresponding to a second boundary in a second gradient field of the second filter image so that the second filter image becomes a fourth filter image, wherein the first boundary in the foreground image is connected to the second boundary in the background image, a difference value between the gradient value corresponding to the first boundary in the gradient field of the third filter image and the gradient value corresponding to the second boundary in the gradient field of the fourth filter image is smaller than a predetermined threshold; and performing fusion operation on the third filter image and the fourth filter image to obtain the target image, wherein the first boundary in the third filter image is connected with the second boundary in the fourth filter image.

Optionally, the apparatus is further configured to perform searching for a converted pixel value corresponding to a pixel value of a pixel point in the foreground image in a preset relationship table, where multiple sets of pre-conversion pixel values and converted pixel values having a corresponding relationship are recorded in the preset relationship table, and the converted pixel value in the preset relationship table is a pixel value in the filter style type; and replacing the pixel value of each pixel point in the foreground image with the corresponding converted pixel value to obtain the first filter image.

Optionally, the apparatus is further configured to perform determining at least two local regions in the foreground map; determining pre-conversion pixel values for each of the at least two local regions; searching a converted pixel value corresponding to the pixel value before conversion of each local area in the preset relation table; and replacing the pixel value of each pixel point in each local area of the at least two local areas with the corresponding converted pixel value to obtain the first filter image.

Optionally, the apparatus is further configured to determine an average value of pixel values of respective pixel points in each local region as a pre-conversion pixel value of each local region; determining the pixel value of a randomly selected pixel point in each local area as the pixel value of each local area before conversion; and determining the average value of the pixel values of a plurality of randomly selected pixel points in each local area as the pixel value before conversion of each local area.

Optionally, the apparatus is further configured to perform inputting the background map into a trained neural network model to obtain an adjusted image, where the neural network model is used to adjust histogram distribution of the background map; when the brightness of the adjusted image is within a preset brightness range, determining the adjusted image as the second filter image; and when the brightness of the adjusted image is not within the preset brightness range, performing brightness correction operation on the adjusted image to obtain the second filter image.

Optionally, the apparatus is further configured to perform increasing a brightness value of a first pixel in the adjusted image by a first adjustment value, where the first pixel is a pixel in the adjusted image whose brightness value is less than or equal to a first threshold; and reducing the brightness value of a second pixel in the adjusted image by a second adjustment value, wherein the second pixel is a pixel of which the brightness value is greater than or equal to a second threshold value in the adjusted image.

Optionally, the above apparatus is further configured to perform, in a case where a difference between a gradient value corresponding to the first boundary in the first gradient field and a gradient value corresponding to the second boundary in the second gradient field is greater than or equal to the predetermined threshold, adjusting a gradient value corresponding to the first boundary in the first gradient field of the first filter image so that the first filter image becomes the third filter image, and adjusting a gradient value corresponding to the second boundary in the second gradient field of the second filter image so that the second filter image becomes the fourth filter image.

Optionally, the apparatus is further configured to perform determining an average of the gradient values corresponding to the first boundary and the second boundary, to obtain an average gradient value; and adjusting the gradient value corresponding to the first boundary and the gradient value corresponding to the second boundary to be the average gradient value.

FIG. 7 is a block diagram illustrating an electronic device for image processing according to an exemplary embodiment. As shown in fig. 7, the electronic device includes a processor 720 and a memory 710 for storing instructions executable by the processor 720. The processor is configured to execute the instructions to implement the image processing method described above. The electronic device in this embodiment may further include a transmission device 730, a display 740, and a connection bus 750. The transmission device 730 is used for receiving or transmitting data via a network; examples of the network may include wired networks and wireless networks. In one example, the transmission device 730 includes a network interface controller (NIC) that can be connected to a router and other network devices via a network cable so as to communicate with the internet or a local area network. In another example, the transmission device 730 is a radio frequency (RF) module used for communicating with the internet wirelessly. The display 740 is used for displaying the original image and the target image, and the connection bus 750 is used for connecting the module components in the electronic device.

In an exemplary embodiment, a storage medium comprising instructions, such as the memory 710 comprising instructions, executable by the processor 720 of the electronic device to perform the method described above is also provided. Alternatively, the storage medium may be a non-transitory computer readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.

In an exemplary embodiment, there is also provided a computer program product comprising computer programs/instructions which, when executed by a processor, implement the image processing method described above.

Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
