Image blurring method, storage medium and terminal device

Document No.: 1906066 · Publication date: 2021-11-30

Note: This invention, "Image blurring method, storage medium and terminal device" (一种图像虚化方法、存储介质以及终端设备), was created by 任世强, 李鹏 and 刘阳兴 on 2020-05-25. Main content: The invention discloses an image blurring method, a storage medium and a terminal device. The method determines a region to be blurred corresponding to an image to be processed based on a preset focus; determines a blurring radius corresponding to each first pixel point in the region to be blurred; determines, according to the blurring radius of the first pixel point, an image to be blurred corresponding to that pixel point among a plurality of preset images to be blurred; blurs each image to be blurred to obtain a plurality of blurred images; and fuses the blurred images with the image to be processed to obtain the blurred image corresponding to the image to be processed. By filtering each pixel point individually and processing different pixel points on images to be blurred of different image scales, the invention blurs the image to be processed at multiple reduced scales, improving the efficiency of the blurring processing.

1. A method of blurring an image, the method comprising:

determining a to-be-blurred area corresponding to the to-be-processed image based on a preset focus;

determining a blurring radius corresponding to each first pixel point in the region to be blurred; determining an image to be blurred corresponding to the first pixel point in a plurality of preset images to be blurred according to the blurring radius of the first pixel point, wherein the plurality of images to be blurred are images of the image to be processed with different image scales;

respectively carrying out blurring processing on each image to be blurred to obtain a plurality of blurred images;

and fusing the plurality of blurred images obtained by the blurring processing with the image to be processed to obtain a blurred image corresponding to the image to be processed.

2. The image blurring method according to claim 1, wherein the image to be processed is a main image captured by a main imager in an imaging module, wherein the imaging module comprises at least the main imager and an auxiliary imager; the main imager is used for shooting a main image, and the auxiliary imager is used for shooting an auxiliary image which is used for assisting in calculating the depth information of the main image.

3. The image blurring method according to claim 1, before determining the region to be blurred corresponding to the image to be processed based on the preset focus, the method comprising:

determining a candidate region in the image to be processed based on a preset focus;

and correcting the preset focus based on the candidate region, and taking the corrected preset focus as the preset focus.

4. The image blurring method according to claim 3, wherein the correcting the preset focus based on the candidate region specifically includes:

dividing the candidate area into a plurality of sub candidate areas, and respectively obtaining the depth mean value corresponding to each sub candidate area;

and determining a target area corresponding to the preset focus according to all the obtained depth mean values, and taking the area center corresponding to the target area as the preset focus after the correction processing.

5. The image blurring method according to claim 1, wherein the determining the region to be blurred corresponding to the image to be processed based on the preset focus specifically comprises:

for each second pixel point in the image to be processed, calculating a difference value between first depth information corresponding to the second pixel point and second depth information corresponding to the preset focus to obtain a first difference value set;

setting all negative number difference values in the first difference value set to be zero so as to update the first difference value set;

correcting the second depth information of the preset focus according to the updated first difference set to obtain corrected second depth information;

taking the corrected second depth information as second depth information of the preset focus;

and determining a region to be blurred corresponding to the image to be processed according to the second depth information.

6. The image blurring method according to claim 1, wherein, for each first pixel point in the region to be blurred, the determining a blurring radius corresponding to the first pixel point specifically includes:

for each first pixel point in the region to be blurred, calculating a difference value between third depth information of the first pixel point and second depth information of the preset focus;

and determining the blurring radius corresponding to the difference value according to the preset correspondence between blurring radius and difference value, so as to obtain the blurring radius corresponding to the first pixel point.

7. The image blurring method according to claim 6, wherein the correspondence between the blurring radius and the difference is:

when the difference value is zero, the blurring radius corresponding to the difference value is zero;

when the difference value is larger than or equal to a first blurring radius threshold value corresponding to the image to be processed, the blurring radius corresponding to the difference value is the first blurring radius threshold value;

and when the difference is larger than zero and smaller than the first blurring radius threshold, the blurring radius corresponding to the difference is determined according to the difference.

8. The image blurring method according to claim 1, wherein the blurring radius corresponding to each of the images to be blurred is different, and for any two images to be blurred, if the image scale of a first image to be blurred of the two images to be blurred is larger than that of a second image to be blurred, the blurring radius corresponding to the first image to be blurred is smaller than the blurring radius corresponding to the second image to be blurred.

9. The image blurring method according to claim 1, wherein the blurring each image to be blurred to obtain a plurality of blurred images specifically comprises:

determining a mask image corresponding to each image to be blurred, wherein in the mask image, pixel values of pixel points in an area to be blurred are first preset pixel values, pixel values of pixel points in a foreground area are second preset pixel values, and the foreground area is an image area except the area to be blurred in the image to be processed; and performing blurring processing on the image to be blurred based on the image to be blurred and the mask image corresponding to the image to be blurred to obtain a blurred image corresponding to the image to be blurred.

10. The image blurring method according to claim 9, wherein the blurring the image to be blurred based on the image to be blurred and the mask image corresponding to the image to be blurred, and obtaining the blurred image corresponding to the image to be blurred specifically comprises:

for each first pixel point in the image to be blurred, acquiring a first image area corresponding to the blurring kernel of the first pixel point in the mask image and a second image area in the image to be blurred; updating the pixel value of each pixel point in the second image area according to the pixel value of each pixel point in the first image area to obtain an updated image to be blurred; and performing blurring processing on the updated image to be blurred to obtain a blurred image corresponding to the image to be blurred.

11. The image blurring method according to claim 1, wherein the images to be blurred comprise a first image to be blurred and a second image to be blurred, wherein the first image to be blurred is an image of one-quarter of the image scale of the image to be processed, and the second image to be blurred is an image of one-half of the image scale of the image to be processed.

12. The image blurring method according to claim 1 or 11, wherein the fusing the plurality of blurred images obtained by blurring and the image to be processed to obtain the blurred image corresponding to the image to be processed specifically includes:

taking a first blurred image in the sequence of the plurality of blurred images as a target image and taking a second blurred image as a reference image, wherein the image scale of the first blurred image is smaller than that of the second blurred image;

adjusting the image scale of the target image to the image scale of the reference image, and fusing the adjusted target image and the reference image to obtain a fused image;

and taking the fused image as a target image, taking a third blurred image in the sequence of blurred images as a reference image, and continuing to execute the step of adjusting the image scale of the target image to that of the reference image until the reference image is the image to be processed, wherein the image scale of the third blurred image is larger than that of the second blurred image.

13. The image blurring method according to claim 1, wherein the using the fused image as a target image specifically includes:

and smoothing the target transition region of the fused image, and taking the fused image after the smoothing as a target image, wherein the target transition region is a transition edge between image regions corresponding to the blurring radii.

14. A computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to perform the steps of the image blurring method according to any one of claims 1 to 13.

15. A terminal device, comprising: a processor, a memory, and a communication bus; the memory has stored thereon a computer readable program executable by the processor;

the communication bus enables connection and communication between the processor and the memory;

the processor, when executing the computer readable program, implements the steps in the image blurring method according to any one of claims 1-13.

Technical Field

The present invention relates to the field of image processing technologies, and in particular, to an image blurring method, a storage medium, and a terminal device.

Background

Dual cameras have been increasingly applied to mobile terminal devices. In the prior art, one of the two cameras takes a picture while the other assists in calculating the depth information of the picture for subsequent image blurring processing. Blurring an image requires distinguishing the foreground from the background and processing the background, which takes a long time; this prolongs the image capture time and is inconvenient for the user.

Disclosure of Invention

The present invention is directed to providing an image blurring method, a storage medium, and a terminal device that address the deficiencies of the prior art.

In order to solve the above technical problems, the invention adopts the following technical solution:

a method of image blurring, the method comprising:

determining a to-be-blurred area corresponding to the to-be-processed image based on a preset focus;

determining a blurring radius corresponding to each first pixel point in the region to be blurred; determining an image to be blurred corresponding to the first pixel point in a plurality of preset images to be blurred according to the blurring radius of the first pixel point, wherein the plurality of images to be blurred are images of the image to be processed with different image scales;

respectively carrying out blurring processing on each image to be blurred to obtain a plurality of blurred images;

and fusing the plurality of blurred images obtained by the blurring processing with the image to be processed to obtain a blurred image corresponding to the image to be processed.
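The four steps above can be sketched as follows in Python, using nested lists as grayscale images and a box filter as the blur. This is an illustrative sketch only: the two fixed scales, the scale-selection threshold `r_split`, and the box filter itself are assumptions, not the patented implementation.

```python
def box_blur(img, r):
    # (2r+1)^2 mean filter with edge clamping; r = 0 returns a copy.
    if r == 0:
        return [row[:] for row in img]
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            out[y][x] = sum(vals) / len(vals)
    return out

def downscale(img, f):
    # Nearest-neighbour downscale by an integer factor f.
    return [row[::f] for row in img[::f]]

def upscale(img, f):
    # Nearest-neighbour upscale by an integer factor f.
    out = []
    for row in img:
        wide = [v for v in row for _ in range(f)]
        out.extend(wide[:] for _ in range(f))
    return out

def blur_multiscale(img, radius_map, r_split=4):
    # Pixels with small blur radii take their value from a blurred
    # half-scale copy; pixels with large radii from a blurred
    # quarter-scale copy; radius 0 keeps the sharp original pixel.
    half = upscale(box_blur(downscale(img, 2), 1), 2)
    quarter = upscale(box_blur(downscale(img, 4), 1), 4)
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            r = radius_map[y][x]
            src = img if r == 0 else (half if r < r_split else quarter)
            out[y][x] = src[y][x]
    return out
```

The point of processing large radii on a smaller copy is that a blur of radius R at full scale is roughly a blur of radius R/4 at quarter scale, so the per-pixel kernel stays small.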

In the image blurring method, the image to be processed is a main image captured by a main imager in an imaging module, wherein the imaging module comprises at least the main imager and an auxiliary imager; the main imager is used for shooting the main image, and the auxiliary imager is used for shooting an auxiliary image that assists in calculating the depth information of the main image.

In the image blurring method, before determining the region to be blurred corresponding to the image to be processed based on the preset focus, the method comprises:

determining a candidate region in the image to be processed based on a preset focus;

and correcting the preset focus based on the candidate region, and taking the corrected preset focus as the preset focus.

In the image blurring method, the correcting the preset focus based on the candidate region specifically includes:

dividing the candidate area into a plurality of sub candidate areas, and respectively obtaining the depth mean value corresponding to each sub candidate area;

and determining a target area corresponding to the preset focus according to all the obtained depth mean values, and taking the area center corresponding to the target area as the preset focus after the correction processing.

The image blurring method, wherein the determining of the to-be-blurred region corresponding to the to-be-processed image based on the preset focus specifically includes:

for each second pixel point in the image to be processed, calculating a difference value between first depth information corresponding to the second pixel point and second depth information corresponding to the preset focus to obtain a first difference value set;

setting all negative number difference values in the first difference value set to be zero so as to update the first difference value set;

correcting the second depth information of the preset focus according to the updated first difference set to obtain corrected second depth information;

taking the corrected second depth information as second depth information of the preset focus;

and determining a region to be blurred corresponding to the image to be processed according to the second depth information.
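The depth-difference steps above can be sketched as follows. The source does not fix the rule for correcting the focus depth from the updated difference set, so the shift by half the smallest positive difference used here is purely illustrative.

```python
def region_to_blur(depth, focus_xy):
    # depth: 2-D list of per-pixel depths; focus_xy: (x, y) preset focus.
    fx, fy = focus_xy
    d_focus = depth[fy][fx]
    h, w = len(depth), len(depth[0])
    # First difference set: depth(p) - depth(focus), negatives set to zero.
    diffs = [[max(depth[y][x] - d_focus, 0.0) for x in range(w)]
             for y in range(h)]
    # Assumed correction rule: shift the focus depth by half the smallest
    # positive difference (the source leaves the exact rule unspecified).
    pos = [v for row in diffs for v in row if v > 0]
    corrected = d_focus + (0.5 * min(pos) if pos else 0.0)
    # Pixels deeper than the corrected focus depth form the region to blur.
    return [[depth[y][x] > corrected for x in range(w)] for y in range(h)]
```

Clamping the negative differences first means pixels in front of the focus (foreground) can never enter the region to be blurred.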

The image blurring method, wherein the determining, for each first pixel point in the region to be blurred, a blurring radius corresponding to the first pixel point specifically includes:

for each first pixel point in the region to be blurred, calculating a difference value between third depth information of the first pixel point and second depth information of the preset focus;

and determining the blurring radius corresponding to the difference value according to the preset correspondence between blurring radius and difference value, so as to obtain the blurring radius corresponding to the first pixel point.

The image blurring method is characterized in that the correspondence between the blurring radius and the difference is:

when the difference value is zero, the blurring radius corresponding to the difference value is zero;

when the difference value is larger than or equal to a first blurring radius threshold value corresponding to the image to be processed, the blurring radius corresponding to the difference value is the first blurring radius threshold value;

and when the difference is larger than zero and smaller than the first blurring radius threshold, the blurring radius corresponding to the difference is determined according to the difference.
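The piecewise correspondence above, sketched in Python. The middle branch is linear here, which is an assumption: the text only says the radius is "determined according to the difference".

```python
def blur_radius(diff, r_max):
    # Piecewise mapping from depth difference to blur radius.
    if diff <= 0:
        return 0          # at or in front of the focus: no blur
    if diff >= r_max:
        return r_max      # capped at the first blurring radius threshold
    return diff           # middle branch: linear growth (an assumption)
```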

In the image blurring method, the blurring radii corresponding to the images to be blurred are different, and for any two images to be blurred, if the image scale of the first image to be blurred in the two images to be blurred is larger than that of the second image to be blurred, the blurring radius corresponding to the first image to be blurred is smaller than the blurring radius corresponding to the second image to be blurred.

In the image blurring method, the blurring each image to be blurred to obtain a plurality of blurred images specifically includes:

determining a mask image corresponding to each image to be blurred, wherein in the mask image, pixel values of pixel points in the region to be blurred are first preset pixel values, pixel values of pixel points in a foreground region are second preset pixel values, and the foreground region is the image region other than the region to be blurred in the image to be processed; and performing blurring processing on the image to be blurred based on the image to be blurred and the mask image corresponding to the image to be blurred, so as to obtain a blurred image corresponding to the image to be blurred.

In the image blurring method, the blurring the image to be blurred based on the image to be blurred and its corresponding mask image to obtain the blurred image corresponding to the image to be blurred specifically includes:

for each first pixel point in the image to be blurred, acquiring a first image area corresponding to the blurring kernel of the first pixel point in the mask image and a second image area in the image to be blurred; updating the pixel value of each pixel point in the second image area according to the pixel value of each pixel point in the first image area to obtain an updated image to be blurred; and performing blurring processing on the updated image to be blurred to obtain a blurred image corresponding to the image to be blurred.

The image blurring method comprises the steps that the images to be blurred comprise a first image to be blurred and a second image to be blurred, wherein the first image to be blurred is an image with a quarter of image scale of the image to be processed; the second image to be blurred is an image of one-half of the image scale of the image to be processed.

The image blurring method, wherein the fusing the plurality of blurred images obtained by blurring with the to-be-processed image to obtain the blurred image corresponding to the to-be-processed image specifically includes:

taking a first blurred image in the sequence of the plurality of blurred images as a target image and taking a second blurred image as a reference image, wherein the image scale of the first blurred image is smaller than that of the second blurred image;

adjusting the image scale of the target image to the image scale of the reference image, and fusing the adjusted target image and the reference image to obtain a fused image;

and taking the fused image as a target image, taking a third blurred image in the sequence of blurred images as a reference image, and continuing to execute the step of adjusting the image scale of the target image to that of the reference image until the reference image is the image to be processed, wherein the image scale of the third blurred image is larger than that of the second blurred image.
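A sketch of the coarse-to-fine fusion loop above, assuming power-of-two scale steps, nearest-neighbour upscaling, and 50/50 averaging as the fusion rule (the source does not fix the upscaling method or the fusion weights).

```python
def upscale2(img):
    # Nearest-neighbour 2x upscale of a 2-D list image.
    out = []
    for row in img:
        wide = [v for v in row for _ in range(2)]
        out.append(wide)
        out.append(wide[:])
    return out

def fuse_pyramid(blurred_seq, base):
    # blurred_seq: blurred images ordered from smallest to largest scale;
    # base: the full-resolution image to be processed. The running target
    # is upscaled to the next reference's scale and averaged with it.
    target = blurred_seq[0]
    for ref in blurred_seq[1:] + [base]:
        while len(target) < len(ref):
            target = upscale2(target)
        target = [[(a + b) / 2 for a, b in zip(t_row, r_row)]
                  for t_row, r_row in zip(target, ref)]
    return target
```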

The image blurring method, wherein the using the fused image as a target image specifically includes:

and smoothing the target transition region of the fused image, and taking the fused image after the smoothing as a target image, wherein the target transition region is a transition edge between image regions corresponding to the blurring radii.

A computer readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps in the image blurring method as claimed in any one of the preceding claims.

A terminal device, comprising: a processor, a memory, and a communication bus; the memory has stored thereon a computer readable program executable by the processor;

the communication bus enables connection and communication between the processor and the memory;

the processor, when executing the computer readable program, implements the steps in the image blurring method as described in any one of the above.

Advantageous effects: compared with the prior art, the invention provides an image blurring method, a storage medium and a terminal device. The method determines a region to be blurred corresponding to an image to be processed based on a preset focus; determines a blurring radius corresponding to each first pixel point in the region to be blurred; determines, according to the blurring radius of the first pixel point, an image to be blurred corresponding to that pixel point among a plurality of preset images to be blurred; blurs each image to be blurred to obtain a plurality of blurred images; and fuses the blurred images with the image to be processed to obtain the blurred image corresponding to the image to be processed. By filtering each pixel point individually and processing different pixel points on images to be blurred of different image scales, the invention blurs the image to be processed at multiple reduced scales, improving the efficiency of the blurring processing.

Drawings

Fig. 1 is a flowchart of an image blurring method provided by the present invention.

Fig. 2 is an exemplary diagram of selecting a candidate region on a depth map in the image blurring method provided by the present invention.

Fig. 3 is another exemplary diagram of selecting a candidate region on a depth map in the image blurring method provided by the present invention.

Fig. 4 is an exemplary diagram of dividing a candidate region into a plurality of sub-candidate regions in the image blurring method provided by the present invention.

Fig. 5 is a schematic diagram of an image to be processed in the image blurring method provided by the present invention.

Fig. 6 is a schematic structural diagram of a terminal device provided in the present invention.

Detailed Description

The present invention provides an image blurring method, a storage medium, and a terminal device. In order to make the objects, technical solutions, and effects of the present invention clearer, the present invention will be further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.

As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.

It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.

In the image blurring method provided by this embodiment, the execution subject may be an image blurring device or an electronic device integrated with the image blurring device, where the image blurring device may be implemented in hardware or software. It is to be understood that the execution subject of the present embodiment may be a smart terminal equipped with an imaging module (e.g., a camera), such as a smartphone, a tablet computer, or a personal digital assistant. Of course, in practical applications, the method may also be applied to a server. For example, the server receives an image to be processed sent by a terminal device; determines a region to be blurred corresponding to the image to be processed based on a preset focus; determines a blurring radius corresponding to each first pixel point in the region to be blurred; determines an image to be blurred corresponding to the first pixel point among a plurality of preset images to be blurred according to its blurring radius; blurs each image to be blurred to obtain a plurality of blurred images and, from them, the final blurred image; and finally sends the obtained blurred image to the terminal device so that the terminal device can display it.

As shown in fig. 1, the present embodiment provides an image blurring method, which may include the following steps:

S10, determining the region to be blurred corresponding to the image to be processed based on the preset focus.

Specifically, the image to be processed may be an image captured by an imaging module, where the imaging module includes at least two imagers, and the two imagers are a main imager and an auxiliary imager, respectively. The main imager and the auxiliary imager are arranged on the same plane, and the main imager and the auxiliary imager can be transversely adjacently arranged together or vertically adjacently arranged. The primary imager and the secondary imager may be dual cameras of an electronic device (e.g., a smartphone), i.e., both the primary imager and the secondary imager are cameras. For example, the main imager and the auxiliary imager may be dual rear cameras or dual front cameras, wherein the main imager and the auxiliary imager may be one color imager and the other black and white imager (e.g., the main imager is a color imager and the auxiliary imager is a black and white imager), and the main imager and the auxiliary imager may also be imagers with different focal lengths, and of course, the main imager and the auxiliary imager may also be the same imager. Of course, the imaging module may further include 3 imagers (e.g., a smartphone having three cameras, etc.), and may also include 4 imagers, etc.

Further, the image to be processed may be an image to be processed acquired by an imaging module configured in the electronic device itself, or an image to be processed acquired by an imaging module of another electronic device through a network, bluetooth, infrared, or the like. In a specific implementation manner of this embodiment, the image to be processed is obtained by shooting through an imaging module configured in the electronic device itself, and the image to be processed is obtained by shooting through a main imager of the imaging module. It is understood that the electronic device is configured with an imaging module configured with at least a main imager for capturing a main image and an auxiliary imager for capturing an auxiliary image, wherein the main image is used as a to-be-processed image and the auxiliary image is used for assisting in calculating depth information of the to-be-processed image.

Further, the preset focus is a focus position of the image to be processed, and the preset focus may be automatically generated according to the acquired image to be processed, may also be generated according to a selection operation of a user, and may also be sent by an external device. For example, when an image to be processed is displayed in the imaging device, a click operation performed by a user on the image to be processed may be received, a click point of the click operation may be acquired as a preset focus, and position information of the click point (for example, a pixel position corresponding to a corresponding pixel point of the click point on the display interface, such as (125,150) or the like) may be used as position information of the preset focus.

In an implementation manner of this embodiment, the preset focus is automatically generated according to the acquired image to be processed, where the preset focus may be determined according to the image center of the preview image, or according to a face image in the preview image. The process of determining the preset focus according to the image center may be: when an image to be processed is acquired, acquiring the image center point of the image to be processed and taking it as the preset focus corresponding to the image to be processed. The process of determining the preset focus according to a face image may be: when an image to be processed is acquired, detecting whether the image to be processed contains a face image; if no face image is contained, taking the image center point as the preset focus; if one face image is contained, taking a pixel point in the face image as the preset focus (for example, the pixel point corresponding to the nose tip, or the center point of the face image); if a plurality of face images are contained, selecting the face image occupying the largest image area among them as the target face image, and taking a pixel point in the target face image as the preset focus (for example, the pixel point corresponding to the left eyeball). Of course, in practical applications, after the focus is automatically generated according to the image to be processed, the user may also manually set the preset focus, where a manually set preset focus has higher priority than an automatically generated one. It can be understood that, when the preset focus is set manually, the imaging device no longer automatically generates a preset focus from the image to be processed; when the preset focus is automatically generated, the imaging apparatus may update the automatically generated preset focus according to a manually set one.
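The focus-selection priority described above (manual focus over automatic, largest face over image centre) can be sketched as follows; the `(x, y, w, h)` face boxes stand in for an unspecified, hypothetical face detector.

```python
def choose_focus(img_w, img_h, faces, manual=None):
    # A manually set focus has priority over the automatic focus.
    if manual is not None:
        return manual
    # With faces present, take the centre of the largest face box
    # (faces: list of hypothetical (x, y, w, h) detections).
    if faces:
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        return (x + w // 2, y + h // 2)
    # Otherwise fall back to the image centre.
    return (img_w // 2, img_h // 2)
```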

In an implementation manner of this embodiment, before determining, based on the preset focus, a region to be blurred corresponding to the image to be processed, the method includes:

A10, determining a candidate region in the image to be processed based on the preset focus;

A20, correcting the preset focus based on the candidate region, and taking the corrected preset focus as the preset focus.

Specifically, the candidate region is an image region of the image to be processed, and the candidate region may include the preset focus. It can be understood that the candidate region is an image region including a preset focus, the image region is an image region in the image to be processed, for example, after the preset focus is obtained, a circular region is drawn with the preset focus as a center of a circle and a preset radius (for example, 20 pixels) as a radius, and an intersection region of the image to be processed in the circular region is an image region corresponding to the preset focus; for another example, the image to be processed is divided into an image area a, an image area B and an image area C in advance, and when it is detected that the user clicks on the image area B, the image area B can be used as the image area corresponding to the preset focus. In addition, the candidate region may be a square region, a rectangular region, a circular region, a triangular region, or the like, which is centered on the preset focus.

In a possible implementation manner of this embodiment, the candidate region is a square region centered on the preset focus, where the side length of the square region may be determined according to the width and height of the image to be processed: for example, the side length of the square is the ratio of the minimum of the width and height of the image to be processed to a preset threshold, that is, the side length L_s = min(w, h)/d, where w is the width of the depth map corresponding to the image to be processed, h is the height of the depth map corresponding to the image to be processed, and d is a preset threshold, for example, d = 24.
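As a minimal sketch of this side-length formula, assuming a `(left, top, right, bottom)` box convention (an illustrative choice, not stated in the patent):

```python
def candidate_square(focus, w, h, d=24):
    """Square candidate region centered on the focus, side L_s = min(w, h)/d.

    Returns (left, top, right, bottom); d = 24 is the example threshold."""
    ls = min(w, h) / d
    x, y = focus
    half = ls / 2
    return (x - half, y - half, x + half, y + half)
```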

Further, in an implementation manner of this embodiment, the performing correction processing on the preset focus based on the candidate region specifically includes:

b10, dividing the candidate area into a plurality of sub candidate areas, and respectively obtaining the depth mean value corresponding to each sub candidate area;

and B20, determining a target area corresponding to the preset focus according to all the obtained depth mean values, and taking the area center corresponding to the target area as the preset focus after the correction processing.

Specifically, in the step B10, any two sub-candidate regions may not overlap or may partially overlap. The dividing process of the candidate region includes: firstly, selecting a first sub-region centered on the preset focus within the candidate region and taking it as one sub-candidate region; then, dividing the area of the candidate region other than the first sub-region into at least two sub-regions, and taking each sub-region as one sub-candidate region, so as to obtain the plurality of sub-candidate regions corresponding to the candidate region.

For example, the following steps are carried out: let the preset focus be (x, y), the size of the image to be processed be (w, h), and the candidate region be a square region centered on the preset focus with side length L_s, so that the four vertices of the candidate region are (x − L_s/2, y − L_s/2), (x + L_s/2, y − L_s/2), (x − L_s/2, y + L_s/2) and (x + L_s/2, y + L_s/2). Firstly, a first sub-candidate region S_5 with side length L_s/2 is selected centered on the preset focus; then the candidate region is equally divided into 4 second sub-candidate regions (S_1, S_2, S_3, S_4), any two of which are symmetrical: two second sub-candidate regions arranged side by side left and right are symmetrical in the vertical direction, and two arranged side by side up and down are symmetrical in the horizontal direction. This yields the 5 sub-candidate regions shown in fig. 4, all of side length L_s/2: the four vertex coordinates of the S_1 sub-candidate region are (x − L_s/2, y − L_s/2), (x, y − L_s/2), (x − L_s/2, y) and (x, y); those of S_2 are (x, y − L_s/2), (x + L_s/2, y − L_s/2), (x, y) and (x + L_s/2, y); those of S_3 are (x − L_s/2, y), (x, y), (x − L_s/2, y + L_s/2) and (x, y + L_s/2); those of S_4 are (x, y), (x + L_s/2, y), (x, y + L_s/2) and (x + L_s/2, y + L_s/2); and those of S_5 are (x − L_s/4, y − L_s/4), (x + L_s/4, y − L_s/4), (x − L_s/4, y + L_s/4) and (x + L_s/4, y + L_s/4).
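The five sub-candidate regions of this example can be sketched as follows, using a `(left, top, right, bottom)` convention; function and variable names are illustrative:

```python
def sub_candidate_regions(focus, ls):
    """Split the square candidate region (side ls, centered on focus) into
    four quadrants S1..S4 plus a centered square S5, each of side ls/2.
    Each region is a (left, top, right, bottom) tuple."""
    x, y = focus
    h = ls / 2   # side of each sub-candidate region
    q = ls / 4   # half-side of the centered region S5
    s1 = (x - h, y - h, x, y)          # top-left quadrant
    s2 = (x, y - h, x + h, y)          # top-right quadrant
    s3 = (x - h, y, x, y + h)          # bottom-left quadrant
    s4 = (x, y, x + h, y + h)          # bottom-right quadrant
    s5 = (x - q, y - q, x + q, y + q)  # centered sub-region
    return [s1, s2, s3, s4, s5]
```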

Further, the depth mean value is the average of the depth information corresponding to each pixel point in the sub-candidate region, where the depth information represents the distance between the actual scene and the imaging module in the actual shooting scene. In this embodiment, the depth information may be a value obtained by normalizing this distance to the range 0 to 255. In addition, the depth information corresponding to each pixel point in the sub-candidate region may be determined according to the depth information of the image to be processed. Therefore, when the image to be processed is acquired, the depth information of the image to be processed also needs to be acquired; it can be determined according to an auxiliary image acquired by an auxiliary imager in the imaging module. It can be understood that when the main imager of the imaging module acquires the image to be processed, the auxiliary imager synchronously acquires the auxiliary image, and the depth information corresponding to the image to be processed is calculated based on the acquired auxiliary image and the image to be processed.

Further, the depth information of the image to be processed refers to a matrix formed by depth information corresponding to each pixel point in the image to be processed, wherein the depth information corresponding to the pixel point is the distance from the pixel point to the plane where the main imager and the auxiliary imager are located, and the position of each depth information in the matrix is the same as the position of the pixel point corresponding to the depth information in the image to be processed. For example, the position of the depth information in the matrix is (10,20), and then the position of the pixel point corresponding to the depth information in the image to be processed is (10, 20).

Further, the process of calculating the depth information corresponding to the image to be processed based on the acquired auxiliary image and the image to be processed may be: for each pixel point in the image to be processed, determining the depth information of the pixel point based on the image to be processed and the auxiliary image; after the depth information corresponding to all the pixel points is obtained, arranging the depth information into a matrix according to the position of each pixel point in the image to be processed, so as to obtain the depth information of the image to be processed. The depth information of a pixel point can be computed by triangulation ranging. This is because the image to be processed and the auxiliary image are acquired by the main imager and the auxiliary imager respectively, and the certain distance between the main imager and the auxiliary imager causes parallax. Therefore, the depth information of the same object in the image to be processed and the auxiliary image, namely the distance between the object and the plane where the main imager and the auxiliary imager are located, can be obtained by triangulation; for example, if the distance between pixel point A and the plane where the main imager and the auxiliary imager are located is 50, the depth information of that pixel point is 50.
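As a hedged illustration of triangulation ranging, the standard stereo relation depth = f · B / d (focal length times baseline over disparity) can be written as below; the patent does not spell out this formula, so it is given only as the usual textbook form:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Standard stereo triangulation: depth = f * B / d.

    disparity_px: horizontal pixel shift of the same point between the
    main and auxiliary images; focal_px: focal length in pixels;
    baseline_mm: distance between the two imagers."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px
```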

In addition, in practical applications, in order to reduce the amount of calculation when computing the depth information, after the image to be processed and the auxiliary image are acquired, they may be respectively reduced according to a predetermined ratio, and the reduced images may then be used as the image to be processed and the auxiliary image. For example, the image to be processed and the auxiliary image are each reduced by a preset multiple (e.g., 2 times), or each reduced to a preset image size (e.g., 224 × 224), and so on. Meanwhile, after the depth information is obtained, the depth information corresponding to each pixel point in the image to be processed may be used as the pixel value of that pixel point (for example, the depth information corresponding to pixel point A is used as the pixel value of pixel point A, and the depth information corresponding to pixel point B is used as the pixel value of pixel point B), so as to obtain the depth map corresponding to the image to be processed. In addition, after the depth map corresponding to the image to be processed is determined, the depth map may be preprocessed to improve its uniformity and edge smoothness. The preprocessing may be a filtering process, where the filtering process may include weighted least squares filtering, adaptive median filtering, and the like.

Further, in an implementation manner of this embodiment, in order to speed up the computation of the depth means of the sub-candidate regions, after the depth information of the image to be processed is acquired and a depth map is obtained from it, the determination of the candidate region and the sub-candidate regions may be performed in the depth map (for example, as shown in fig. 2 and 3). In this way, the candidate region selected in the image to be processed is mapped onto the depth map, and the depth information corresponding to the candidate region and the sub-candidate regions can be determined directly from their positions in the depth map, so that a separate step of looking up the depth information of the candidate region based on the depth map can be omitted, and the computation of the depth information of the candidate region and of each sub-candidate region is accelerated.

Further, in the step B20, after the depth mean values corresponding to the sub-candidate regions are obtained, the depth mean values are compared to select the maximum among them, and the sub-candidate region corresponding to the selected maximum depth mean value is taken as the target region corresponding to the preset focus. It can be understood that the target region is the sub-candidate region with the largest depth mean among all the sub-candidate regions corresponding to the preset focus. In addition, after the target region is determined, the region center point of the target region is obtained and used as the preset focus, so as to correct the preset focus; this avoids miscalculating the depth of field at the preset focus when the preset focus is close to the foreground edge of the image to be processed and the foreground region has holes, and improves the accuracy of the region to be blurred corresponding to the image to be processed. Of course, in practical applications, the candidate region corresponding to the preset focus is obtained when the sub-candidate regions are determined, so when determining the target region, the candidate region itself may also be used as one of its sub-candidate regions; in that case, the sub-candidate regions corresponding to the preset focus include the candidate region and each sub-candidate region obtained by dividing it.
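Steps B10-B20 can be sketched as follows, assuming a NumPy depth map and integer `(left, top, right, bottom)` region boxes (an illustrative interface):

```python
import numpy as np

def correct_focus(depth_map, regions):
    """Pick the sub-candidate region with the largest depth mean and
    return its center as the corrected preset focus.

    regions: integer (left, top, right, bottom) boxes in depth-map
    coordinates."""
    best, best_mean = None, -1.0
    for (l, t, r, b) in regions:
        mean = float(depth_map[t:b, l:r].mean())
        if mean > best_mean:
            best, best_mean = (l, t, r, b), mean
    l, t, r, b = best
    # Region center point becomes the corrected preset focus.
    return ((l + r) // 2, (t + b) // 2)
```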

Further, after the preset focus is determined, the depth information corresponding to the preset focus is obtained, the depth information corresponding to the preset focus is used as a depth threshold value for determining a region to be blurred, and the image to be processed is divided into a foreground region and a region to be blurred according to the depth threshold value. The process of dividing the image to be processed into the foreground region and the region to be blurred based on the depth threshold may be:

acquiring a target pixel point of which the depth information is smaller than the depth threshold value in the image to be processed;

and determining an image area formed by the target pixel point, and taking the image area as a to-be-blurred area corresponding to the to-be-processed image.

Specifically, obtaining the pixel points whose depth information is smaller than the depth threshold in the image to be processed means that for each pixel point in the image to be processed, the depth information of the pixel point is determined according to the depth information corresponding to the image to be processed and compared with the depth threshold; if the depth information of the pixel point is smaller than the depth threshold, the pixel point is acquired, for example, its position information is recorded. In addition, after all the pixel points whose depth information is smaller than the preset depth threshold are obtained, the region formed by all the obtained pixel points is taken as the region to be blurred, and the region formed by the unselected pixel points in the image to be processed is taken as the foreground region. For example, as shown in fig. 5, the region where the girl is located in the image is the foreground region, and all regions in the image other than the region where the girl is located are the region to be blurred.
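The threshold-based split above amounts to a simple mask comparison; a minimal NumPy sketch (function name illustrative):

```python
import numpy as np

def split_regions(depth_map, depth_threshold):
    """Boolean masks for the two regions: pixels whose depth information is
    below the threshold form the region to be blurred; the remaining
    pixels form the foreground region."""
    to_blur = depth_map < depth_threshold
    foreground = ~to_blur
    return to_blur, foreground
```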

Further, in an implementation manner of this embodiment, after the depth information of the preset focus is obtained, the depth information of the preset focus may be corrected, and the region to be blurred is determined by using the corrected depth information. Correspondingly, the determining the to-be-blurred region corresponding to the to-be-processed image based on the preset focus specifically includes:

s11, calculating the difference value between first depth information corresponding to each second pixel point and second depth information corresponding to the preset focus point for each second pixel point in the image to be processed to obtain a first difference value set;

s12, setting all negative difference values in the first difference value set to be zero so as to update the first difference value set;

s13, correcting the second depth information of the preset focus according to the updated first difference set to obtain corrected second depth information;

s14, using the corrected second depth information as the second depth information of the preset focus, and using the corrected second depth information as the second depth information of the preset focus;

and S15, determining the area to be blurred corresponding to the image to be processed according to the second depth information.

Specifically, a pixel point in the image to be processed is recorded as a second pixel point, and a pixel point in the region to be blurred is recorded as a first pixel point. It can be understood that each first pixel point is one of all the second pixel points, and the pixel point set formed by the second pixel points includes the pixel point sets formed by all the first pixel points. The difference value between the first depth information corresponding to the second pixel point and the second depth information corresponding to the preset focus point refers to a depth information difference value obtained by subtracting the second depth information from the first depth information, wherein the difference value may be a positive number, a negative number, or zero.

Further, the first difference set includes the differences corresponding to the second pixel points, and the number of differences in the first difference set equals the number of second pixel points. After the first difference set is obtained, all the differences smaller than zero, namely all the negative differences in the first difference set, are selected and replaced with zero to update the first difference set. It can be understood that after this replacement the first difference set contains only zeros and positive numbers. In addition, after the updated first difference set is obtained, a correction value corresponding to the second depth information is determined based on the first difference set, the second depth information is corrected by the correction value to obtain the corrected second depth information, and the corrected second depth information is finally used as the depth threshold for determining the region to be blurred, so that the depth threshold can accurately represent the depth information of the foreground region and the accuracy of selecting the region to be blurred is improved.

In one implementation of the present embodiment, after the updated first difference set is determined, a threshold search method may be used to determine the correction value corresponding to the second depth information. For example, firstly, the pixel value of each second pixel point in the image to be processed is set to a target pixel value to obtain a difference map A, where the target pixel value is the difference corresponding to that second pixel point in the updated first difference set; secondly, the gray level histogram of the difference map A is computed, where the gray level histogram conforms to a bimodal distribution, meaning that the frequency distribution curve corresponding to the gray level histogram has two peaks; finally, the gray value corresponding to the valley between the two peaks of the histogram is found through threshold search and used as the correction value. Of course, in practical applications there are many threshold search methods, which are not enumerated here; any threshold search method that can obtain the correction value is applicable to the present application, and no limitation is imposed here.
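Steps S11-S13 plus the threshold search can be sketched as follows. The patent does not name a specific search method, so an Otsu-style between-class-variance search stands in here as one concrete choice; this substitution is an assumption:

```python
import numpy as np

def correction_value(depth_map, focus_depth):
    """Clipped difference map (S11-S13) followed by a histogram threshold
    search for the valley between the two peaks."""
    # S11-S13: differences against the focus depth; negatives set to zero.
    diff = np.clip(depth_map.astype(np.int64) - focus_depth, 0, None)
    # Gray-level histogram of the difference map (assumed bimodal).
    hist = np.bincount(diff.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    cum_w = np.cumsum(hist)
    cum_m = np.cumsum(hist * np.arange(hist.size))
    best_t, best_var = 0, -1.0
    # Otsu-style search: maximizing between-class variance locates a
    # threshold in the valley between the two histogram peaks.
    for t in range(hist.size - 1):
        w0, w1 = cum_w[t], total - cum_w[t]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_m[t] / w0
        m1 = (cum_m[-1] - cum_m[t]) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```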

Further, after the second depth information of the preset focus is determined, the second depth information is used as the depth threshold corresponding to the region to be blurred, and the image to be processed is segmented into the region to be blurred and the foreground region based on the second depth information. The depth information of each pixel point in the foreground region is greater than or equal to the second depth information, and the depth information of each pixel point in the region to be blurred is less than the second depth information. Therefore, the specific process of determining the region to be blurred corresponding to the image to be processed according to the second depth information may be: for each pixel point in the image to be processed, determining the depth information of the pixel point according to the depth information corresponding to the image to be processed, and comparing it with the second depth information; if the depth information of the pixel point is smaller than the depth threshold, acquiring the pixel point (for example, recording its position information); after all the pixel points whose depth information is smaller than the preset depth threshold are acquired, taking the region formed by all the acquired pixel points as the region to be blurred, and taking the region formed by the unselected pixel points in the image to be processed as the foreground region.

And S20, determining the corresponding blurring radius of each first pixel point in the region to be blurred.

Specifically, the blurring radius is the radius of the blurring kernel used in the blurring processing of the first pixel point, and blurring kernels with different blurring radii have different blurring degrees. For example, the larger the blurring radius, the larger the blurring degree of the blurring kernel; conversely, the smaller the blurring radius, the smaller the blurring degree of the blurring kernel. The blurring kernel can be a defocus blur kernel or a Gaussian blur kernel; when the blurring kernel is a Gaussian blur kernel, Gaussian filtering is performed on the first pixel point by a Gaussian blur algorithm. In addition, the blurring radius corresponding to each first pixel point is determined according to the third depth information of the first pixel point and the second depth information of the preset focus. In this way, a respective blurring radius can be obtained for each first pixel point, and each first pixel point is blurred with its own blurring radius, thereby realizing point-by-point filtering of the region to be blurred and improving its blurring effect.

In an implementation manner of this embodiment, for each first pixel point in the region to be virtualized, the determining the virtualization radius corresponding to the first pixel point specifically includes:

for each first pixel point in the region to be blurred, calculating a difference value between third depth information of the first pixel point and second depth information of the preset focus;

and determining the virtualization radius corresponding to the difference value according to the corresponding relation between the preset virtualization radius and the difference value so as to obtain the virtualization radius corresponding to the first pixel point.

Specifically, the third depth information may be determined according to the depth information corresponding to the image to be processed: after the depth information corresponding to the image to be processed is obtained, for each first pixel point, the depth information corresponding to the first pixel point is looked up in the depth information of the image to be processed, and the depth information thus found is used as the third depth information of the first pixel point. The difference is the absolute value of the difference between the third depth information of the first pixel point and the second depth information of the preset focus: for example, if the third depth information is A and the second depth information is B, the difference is |A − B|; that is, if A > B, the difference between the third depth information and the second depth information is A − B; if A < B, the difference is B − A; and if A = B, the difference is 0.

Further, the preset corresponding relationship between the blurring radius and the difference value is preset, and is used for calculating the blurring radius corresponding to each first pixel point according to the difference value corresponding to each first pixel point. It can be understood that, for each first pixel point, after the difference value corresponding to the first pixel point is obtained through calculation, the virtualization radius corresponding to the difference value can be determined according to the corresponding relationship, that is, the virtualization radius corresponding to the first pixel point is determined.

In an implementation manner of this embodiment, the preset corresponding relationship between the blurring radius and the difference value is:

when the difference value is zero, the blurring radius corresponding to the difference value is zero;

when the difference value is larger than or equal to a first blurring radius threshold value corresponding to the image to be processed, the blurring radius corresponding to the difference value is the first blurring radius threshold value;

when the difference is greater than zero and less than the first blurring radius threshold, the blurring radius corresponding to the difference is determined according to the difference.

Specifically, the first blurring radius threshold is the upper limit of the blurring radius corresponding to each first pixel point, and is determined according to a preset blurring strength, a preset default blurring strength, or a received blurring instruction. It is understood that the blurring strength may be a blurring strength input by the user, a default blurring strength configured by the imaging apparatus itself, a blurring strength transmitted by an external device, a blurring strength set by the user, or the like. Further, the blurring processing is divided into 100 levels in advance, identified by the natural numbers 1 to 100, and the blurring strength is then one of these 100 levels, that is, one of the natural numbers 1 to 100. The higher the blurring level corresponding to the blurring strength, the greater the degree of blurring of the region to be blurred; conversely, the lower the blurring level corresponding to the blurring strength, the smaller the degree of blurring of the region to be blurred.

Further, when the difference is greater than zero and less than the first blurring radius threshold, the blurring radius and the difference may be in a proportional relationship (e.g., a linear relationship), and the blurring radius is calculated according to that proportional relationship and the difference. For example, the blurring radius is obtained by multiplying the difference by a preset scaling factor and then rounding. In practical applications, the proportional relationship between the blurring radius and the difference may be determined according to practical requirements, and different imaging devices may be configured with different proportional relationships; for example, the blurring radius may be equal to the difference, or equal to one half of the difference. It should be noted that when the difference is greater than zero and smaller than the first blurring radius threshold, the blurring radius determined from the difference is also greater than zero and smaller than the first blurring radius threshold, so that blurring can be performed on the first pixel point while the blurring degree corresponding to the first pixel point does not exceed the preset blurring strength.
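The difference-to-radius correspondence can be sketched as follows; the scaling factor of 0.5 is one of the example proportional relationships, chosen here only for illustration:

```python
def blurring_radius(diff, r_max, scale=0.5):
    """Map a depth difference to a blurring radius.

    diff == 0           -> radius 0 (pixel left unblurred)
    diff >= r_max       -> radius capped at the first threshold r_max
    0 < diff < r_max    -> proportional to diff (scale is an assumed
                           example factor, device-dependent in practice)"""
    if diff <= 0:
        return 0
    if diff >= r_max:
        return r_max
    return min(r_max, max(1, round(diff * scale)))
```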

And S30, determining the image to be blurred corresponding to the first pixel point in a plurality of preset images to be blurred according to the blurring radius of the first pixel point.

Specifically, the image to be blurred is an image generated according to the image to be processed and used for performing blurring processing. The images to be blurred in the images to be blurred have different image scales, for example, the images to be blurred include an image a to be blurred and an image B to be blurred, the image scale of the image a to be blurred is 244 × 244, and the image scale of the image B to be blurred is 122 × 122. In an implementation manner of this embodiment, the images to be blurred include a first image to be blurred and a second image to be blurred, where the first image to be blurred is an image of a quarter of an image scale of the image to be processed; the second image to be blurred is an image of one-half of the image scale of the image to be processed.

Furthermore, each image to be blurred in the plurality of images to be blurred is obtained by down-sampling the image to be processed, and the down-sampling degrees corresponding to the images to be blurred differ, so that the image scales corresponding to the images to be blurred differ, and the image scale of each image to be blurred is smaller than that of the image to be processed. For example, the images to be blurred include an image A to be blurred and an image B to be blurred, where image A is obtained by down-sampling the image to be processed with a step length of 1, so that its image scale is one half of that of the image to be processed, and image B is obtained by down-sampling the image to be processed with a step length of 2, so that its image scale is one fourth of that of the image to be processed. Since each image to be blurred is obtained by down-sampling in this way, each image to be blurred retains image details of the image to be processed.
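The two-scale example can be sketched as simple strided down-sampling. Interpreting "step length 1" as keeping every second pixel and "step length 2" as every fourth is an assumption about the patent's wording:

```python
import numpy as np

def build_pyramid(image):
    """Down-sample the image to one-half and one-quarter scale by striding:
    keep every 2nd pixel (assumed reading of step length 1) and every 4th
    pixel (assumed reading of step length 2)."""
    half = image[::2, ::2]      # one-half scale: image A to be blurred
    quarter = image[::4, ::4]   # one-quarter scale: image B to be blurred
    return half, quarter
```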

Further, the blurring radii corresponding to the respective images to be blurred are different, and for any two of the images to be blurred, if the image scale of the first is larger than that of the second, the blurring radius corresponding to the first is smaller than that corresponding to the second. It can be understood that, since each image to be blurred corresponds to a different blurring radius, after the blurring radius corresponding to each first pixel point is determined, the image to be blurred corresponding to that first pixel point can be determined from the blurring radius. For example, each image to be blurred corresponds to a blurring radius interval, the blurring radius intervals corresponding to the images to be blurred do not overlap with each other, and together they cover [0, first blurring radius threshold], so that the image to be blurred corresponding to any value in [0, first blurring radius threshold] can be determined.

For example, the following steps are carried out: assume the images to be blurred include an image A to be blurred and an image B to be blurred, the blurring radius interval corresponding to image A is [0, second blurring radius threshold), and the blurring radius interval corresponding to image B is [second blurring radius threshold, first blurring radius threshold], where the second blurring radius threshold is greater than 0 and smaller than the first blurring radius threshold. Then, when the blurring radius corresponding to a first pixel point is smaller than the second blurring radius threshold, the image to be blurred corresponding to that first pixel point is image A; when the blurring radius corresponding to the first pixel point is greater than or equal to the second blurring radius threshold, the image to be blurred corresponding to that first pixel point is image B.
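This interval lookup is a one-line comparison; a minimal sketch (function and label names illustrative):

```python
def select_scale(radius, r2, r1):
    """Choose the image to be blurred from the radius: interval [0, r2)
    maps to image A (larger scale), [r2, r1] to image B (smaller scale)."""
    assert 0 < r2 < r1, "thresholds must satisfy 0 < r2 < r1"
    return "A" if radius < r2 else "B"
```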

S40, performing blurring processing on each image to be blurred respectively to obtain a plurality of blurred images;

specifically, the blurring processing may be performed on the to-be-blurred region by using a gaussian blurring algorithm, or may also be performed by using a defocus blurring algorithm. When a plurality of images to be blurred are obtained, the corresponding relationship between the image scale of the image to be blurred and the image scale of the image to be blurred is obtained, and the blurring radius corresponding to the first pixel point is adjusted according to the corresponding relationship, for example, the image scale of the image to be blurred is half of the image scale of the image to be blurred, and then the adjusted blurring radius is half of the blurring radius before adjustment.

In an implementation manner of this embodiment, for each first pixel point, the corresponding blurring kernel may be determined according to its blurring radius, where the blurring kernel includes a defocus blur kernel and a Gaussian blur kernel. The process of determining the blurring kernel according to the blurring radius corresponding to the first pixel point may be: when the blurring radius is greater than or equal to a preset threshold, the blurring kernel corresponding to the pixel point is a defocus blur kernel; when the blurring radius is smaller than the preset threshold, the blurring kernel corresponding to the pixel point is a Gaussian blur kernel, where the preset threshold is a preset value used as the basis for determining the blurring kernel corresponding to each first pixel point, for example, 3. In this embodiment, the Gaussian blur kernel is used for first pixel points whose blurring radius is smaller than the preset threshold, so that a smooth transition at small blurring degrees can be realized by adjusting the radius and variance of the Gaussian blur kernel, avoiding the problem in defocus blur that the transition in blurring degree is more abrupt when the radius of the defocus blur kernel is small.
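The kernel choice can be sketched as follows. The Gaussian sigma of radius/2 and the disc shape of the defocus kernel are common conventions assumed for illustration, not values stated in the patent:

```python
import numpy as np

def gaussian_kernel(radius, sigma=None):
    """2-D Gaussian blur kernel for small radii; sigma defaults to
    radius/2 (an assumed convention)."""
    sigma = sigma if sigma is not None else max(radius / 2.0, 1e-6)
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def defocus_kernel(radius):
    """Disc-shaped defocus blur kernel for larger radii."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = (xx ** 2 + yy ** 2 <= radius ** 2).astype(np.float64)
    return k / k.sum()

def pick_kernel(radius, threshold=3):
    """Gaussian below the preset threshold, defocus at or above it."""
    return gaussian_kernel(radius) if radius < threshold else defocus_kernel(radius)
```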

Further, in an implementation manner of this embodiment, performing blurring processing on each image to be blurred respectively to obtain a plurality of blurred images specifically includes:

S41, determining a mask image corresponding to each image to be blurred, wherein in the mask image, the pixel values of the pixel points in the region to be blurred are a first preset pixel value, the pixel values of the pixel points in the foreground region are a second preset pixel value, and the foreground region is the image region in the image to be processed other than the region to be blurred; and performing blurring processing on the image to be blurred based on the image to be blurred and its corresponding mask image to obtain the blurred image corresponding to the image to be blurred.

Specifically, the mask image is a mask of the region to be blurred: in the mask image, the pixel value of each pixel point in the region to be blurred is a first preset pixel value, the pixel value of each pixel point in the foreground region is a second preset pixel value, and the foreground region is the image region in the image to be processed other than the region to be blurred. In this embodiment, the first preset pixel value may be 255, 1, and the like, and the second preset pixel value may be 0, and the like. The blurring processing on each image to be blurred may be: for each image to be blurred, performing a preset operation on the image to be blurred and the mask image, taking the image obtained through the preset operation as the updated image to be blurred, and finally blurring the updated image to be blurred to obtain the blurred image corresponding to each image to be blurred. Since the mask image is a mask of the background region of the image to be blurred and the foreground region takes the second preset pixel value (for example, 0), the influence of the foreground region on the blurring result is avoided when blurring pixel points in the transition region between the background region and the foreground region, which improves the blurring effect.
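Building such a mask can be sketched as follows (a minimal sketch operating on nested lists; the helper name and the boolean-flag input format are assumptions for illustration):

```python
def build_mask(to_blur_flags, first_value=1, second_value=0):
    """Build a mask image for the region to be blurred.

    to_blur_flags is a 2-D list of booleans, True where the pixel belongs
    to the region to be blurred (background), False in the foreground.
    Pixels in the region to be blurred receive first_value (e.g. 1 or 255),
    foreground pixels receive second_value (e.g. 0).
    """
    return [[first_value if flag else second_value for flag in row]
            for row in to_blur_flags]
```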

In an implementation manner of this embodiment, blurring the image to be blurred based on the image to be blurred and its corresponding mask image to obtain the blurred image corresponding to the image to be blurred specifically includes:

for each first pixel point in the image to be blurred, acquiring a first image region corresponding to the blurring processing kernel of the first pixel point in the mask image and a corresponding second image region in the image to be blurred; updating the pixel value of each pixel point in the second image region according to the pixel value of each pixel point in the first image region to obtain an updated image to be blurred; and performing blurring processing on the updated image to be blurred to obtain the blurred image corresponding to the image to be blurred.

Specifically, for each pixel point in the image to be blurred, a first image region corresponding to the blurring processing kernel of the pixel point is determined in the mask image according to the blurring radius corresponding to the pixel point, and a second image region corresponding to the blurring processing kernel of the pixel point is determined in the image to be blurred. The region size of the first image region is the same as that of the second image region, and for each pixel point A in the first image region there is a pixel point B in the second image region whose pixel position is the same as that of pixel point A. For example, if the pixel position of pixel point A in the first image region is (100, 100), then the pixel position of pixel point B in the second image region is (100, 100).

Further, after the first image region and the second image region are obtained, each pixel point in the second image region is updated based on each pixel point in the first image region, where updating means preprocessing the pixel values of corresponding pixel points in the two regions and taking the preprocessed value as the new pixel value in the second image region. That is, for each pixel point B in the second image region, the pixel point A corresponding to pixel point B is selected in the first image region, the pixel value of pixel point B and the pixel value of pixel point A are preprocessed, and the preprocessed value is used as the updated pixel value of pixel point B, where the preprocessing may be a product operation or an AND operation. Of course, in practical applications, the value of each pixel point of the region to be blurred in the mask image may differ according to the preprocessing mode: when the preprocessing is a product operation, the value of each pixel point of the region to be blurred in the mask image may be 1; when the preprocessing is an AND operation, it may be 255.
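With the product operation and a 0/1 mask, the region update described above can be sketched as (helper name hypothetical; regions are represented as equal-sized nested lists):

```python
def mask_update(region, mask_region):
    """Multiply a second-image region element-wise by the matching
    first-image (mask) region.

    Foreground pixels have mask value 0, so their contribution is zeroed
    and they no longer influence the subsequent blur filtering; background
    pixels have mask value 1 and are kept unchanged.
    """
    return [[p * m for p, m in zip(img_row, mask_row)]
            for img_row, mask_row in zip(region, mask_region)]
```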

By way of example: assume the preprocessing is a product operation, the pixel value of each pixel point in the region to be blurred in the mask image is 1, and the pixel value of each pixel point in the foreground region is 0. For a pixel point in the region to be blurred of the image to be blurred, the second image region determined in the image to be blurred comprises a region A and a region B, where region A is contained in the region to be blurred and region B is contained in the foreground region; the first image region determined in the mask image according to that pixel point comprises a region C and a region D, where region C is the mask region corresponding to the region to be blurred and region D is the mask region corresponding to the foreground region. When the first image region is overlaid on the second image region, region A coincides with region C and region B coincides with region D. Then, for any pixel point a in region A, there is a pixel point c in region C with the same position information as pixel point a; for any pixel point b in region B, there is a pixel point d in region D with the same position information as pixel point b. When each pixel point in the second image region is updated using each pixel point in the first image region, the product of pixel point a in region A with its corresponding pixel point c in region C leaves the pixel value of pixel point a unchanged, while the product of pixel point b in region B with its corresponding pixel point d in region D changes the pixel value of pixel point b to 0; that is, in the updated second image region, the pixel value of every pixel point in region B becomes 0.

Therefore, after the pixel values of the pixel points in the second image region are updated according to the pixel values of the pixel points in the first image region, any part of the second image region that falls inside the foreground region has its pixel values set to zero, so that foreground pixel points cannot influence the filtering of nearby background pixel points, and the problem of halos appearing at the edge between the foreground region and the region to be blurred is avoided.

Further, the blurring processing may apply defocus blurring to all pixel points, i.e. the blurring processing kernel corresponding to each pixel point is a defocus blur kernel; it may apply Gaussian blurring to all pixel points, i.e. the blurring processing kernel corresponding to each pixel point is a Gaussian blur kernel; or it may apply defocus blurring to some pixel points and Gaussian blurring to the others. In a specific implementation manner of this embodiment, the blurring processing applies defocus blurring to some pixel points and Gaussian blurring to the others, with the choice determined by the radius of the blurring processing kernel, so that each first pixel point is blurred with its own blurring radius; this implements point-by-point filtering of the region to be blurred and improves the blurring effect. For example, when the blurring radius is greater than or equal to a preset threshold, the blurring processing kernel corresponding to the pixel point is a defocus blur kernel; when the blurring radius is smaller than the preset threshold, the blurring processing kernel corresponding to the pixel point is a Gaussian blur kernel, where the preset threshold is set in advance, for example, to any integer between 3 and 5.

And S50, fusing the plurality of blurred images and the image to be processed to obtain a blurred image corresponding to the image to be processed.

Specifically, the plurality of blurred images correspond one-to-one to the plurality of images to be blurred, and the image scale of each blurred image is the same as that of its corresponding image to be blurred. The blurred image corresponding to the image to be processed is a fusion image obtained by fusing the plurality of blurred images and the image to be processed, wherein the foreground region of the fusion image is the same as the foreground region of the image to be processed, and the background region of the fusion image is obtained from the plurality of blurred images. It can be understood that, when fusing the plurality of blurred images and the image to be processed, the background regions of the plurality of blurred images may be fused first, the result is then fused with the foreground region of the image to be processed to obtain the fusion image, and the fusion image is used as the blurred image corresponding to the image to be processed, thereby achieving blurring of the image to be processed.

In an implementation manner of this embodiment, fusing the plurality of blurred images and the image to be processed to obtain the blurred image corresponding to the image to be processed specifically includes:

taking the first blurred image in the blurred image sequence as a target image and the second blurred image as a reference image, wherein the image scale of the first blurred image is smaller than that of the second blurred image;

adjusting the image scale of the target image to the image scale of the reference image, and fusing the adjusted target image and the reference image to obtain a fused image;

and taking the fused image as the target image, taking the third blurred image in the blurred image sequence as the reference image, and continuing to execute the step of adjusting the image scale of the target image to the image scale of the reference image, until the reference image is the image to be processed, wherein the image scale of the third blurred image is larger than the image scale of the second blurred image.

Specifically, the image scales of the respective blurred images differ from one another, and the blurred images may be fused progressively by image scale to obtain the blurred image corresponding to the image to be processed. In a specific implementation manner, the blurred images are arranged in order of image scale from small to large to obtain a blurred image sequence, so that of any two adjacent blurred images in the sequence, the earlier one has the smaller image scale. For example, for a blurred image A and a blurred image B where A precedes B in the blurred image sequence, the image scale of A is smaller than the image scale of B.
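The progressive small-to-large fusion loop described above can be sketched as follows (a minimal sketch: the function name is hypothetical, and the actual resize and region-fusion steps are abstracted as caller-supplied callables):

```python
def fuse_pyramid(blurred_images, upscale, fuse):
    """Progressively fuse blurred images ordered from smallest to largest
    image scale.

    upscale(target, reference) resizes the running target to the
    reference's image scale; fuse(a, b) merges the two images' regions to
    be blurred. Both are placeholders for the operations in the text.
    """
    target = blurred_images[0]
    for reference in blurred_images[1:]:
        # Bring the coarser result up to the next scale, then merge.
        target = fuse(upscale(target, reference), reference)
    return target
```

With toy integer "images", doubling as the upscale and addition as the fusion, the loop visits every remaining image in sequence order.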

Further, fusing the adjusted target image and the reference image means fusing the image region corresponding to the region to be blurred in the adjusted target image with the image region corresponding to the region to be blurred in the reference image; the foreground region can be taken directly from the reference image, which improves the fusion speed. After the target image and the reference image are fused, the fused image may be smoothed, and the smoothed fused image used as the new target image. The smoothing process may be: performing edge detection on the fused image, detecting the transition regions between different blurring radii in the fused image, and applying fine point-by-point blurring, such as Gaussian blurring, to the transition regions so that the blurring radii transition smoothly. The edge detection may adopt an edge detection operator such as the Laplacian operator.

When the foreground image is fused with the blurred background region, in order to prevent the edge between them from appearing abrupt or flickering, a transition zone may be generated at the segmentation boundary between the foreground image and the blurred background region, reducing flicker when previewing the blurred image. Correspondingly, in an implementation manner of this embodiment, the process of fusing the foreground region and the blurred region to be blurred may be: eroding and Gaussian-blurring the foreground region to generate a transition zone at the edge of the foreground region, in which the value of each pixel point lies between 0 and 255; when the foreground region and the blurred region to be blurred are fused, the pixel value of each pixel point in the transition zone is updated according to its own value, the value of the corresponding pixel point in the foreground region, and the value of the corresponding pixel point in the blurred region to be blurred, where the updated pixel value can be expressed as:

p = (w * p1 + (255 - w) * p2) / 255

wherein w is the value of the pixel point in the transition zone, p1 is the pixel value of the corresponding pixel point in the foreground region, and p2 is the pixel value of the corresponding pixel point in the blurred region to be blurred.
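The weighted transition-zone update can be sketched as follows (a standard alpha blend consistent with the stated 0-to-255 range of the transition-zone values; the exact weighting in the original formula may differ, and the function name is hypothetical):

```python
def blend_transition(w, p_fg, p_bg):
    """Blend one transition-zone pixel between foreground and blurred
    background.

    w is the transition-zone weight in [0, 255] produced by eroding and
    Gaussian-blurring the foreground mask; p_fg is the foreground pixel
    value; p_bg is the blurred-background pixel value. w = 255 yields pure
    foreground, w = 0 yields pure blurred background.
    """
    return (w * p_fg + (255 - w) * p_bg) / 255
```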

In summary, an image blurring method, a storage medium, and a terminal device are provided. The method determines a region to be blurred corresponding to an image to be processed based on a preset focus; determines a blurring radius corresponding to each first pixel point in the region to be blurred; determines, according to the blurring radius of the first pixel point, the image to be blurred corresponding to that first pixel point among a plurality of preset images to be blurred; performs blurring processing on each image to be blurred to obtain a plurality of blurred images; and fuses the blurred images with the image to be processed to obtain the blurred image corresponding to the image to be processed. By filtering each pixel point point-by-point and processing different pixel points on images to be blurred of different image scales, the invention performs blurring on the image to be processed at multiple reduced scales, improving the efficiency of the blurring processing.

Based on the above image blurring method, the present embodiment provides a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps in the image blurring method according to the above embodiment.

Based on the image blurring method, the present invention further provides a terminal device, as shown in fig. 6, which includes at least one processor (processor) 20; a display screen 21; and a memory (memory)22, and may further include a communication Interface (Communications Interface)23 and a bus 24. The processor 20, the display 21, the memory 22 and the communication interface 23 can communicate with each other through the bus 24. The display screen 21 is configured to display a user guidance interface preset in the initial setting mode. The communication interface 23 may transmit information. The processor 20 may call logic instructions in the memory 22 to perform the methods in the embodiments described above.

Furthermore, the logic instructions in the memory 22 may be implemented in software functional units and stored in a computer readable storage medium when sold or used as a stand-alone product.

The memory 22, which is a computer-readable storage medium, may be configured to store a software program, a computer-executable program, such as program instructions or modules corresponding to the methods in the embodiments of the present disclosure. The processor 20 executes the functional application and data processing, i.e. implements the method in the above-described embodiments, by executing the software program, instructions or modules stored in the memory 22.

The memory 22 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal device, and the like. Further, the memory 22 may include a high speed random access memory and may also include a non-volatile memory. For example, a variety of media that can store program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, may also be transient storage media.

In addition, the specific processes loaded and executed by the storage medium and by the instruction processors in the terminal device are described in detail in the method above and are not repeated here.

Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
