Image generation method and device, electronic equipment and computer readable medium

Document No.: 1954812 | Publication date: 2021-12-10

Reading note: This technique, "Image generation method and device, electronic equipment and computer readable medium", was created by Wang Chenyu on 2021-01-22. Its main content is as follows: The embodiments of the present disclosure disclose an image generation method, apparatus, electronic device, and computer-readable medium. One embodiment of the method comprises: classifying pixels of an image to be processed and determining at least one pixel type; determining at least one type region from the at least one pixel type; for each type region of the at least one type region, adjusting the color of the type region to a target color to obtain a target color region corresponding to the type region; and combining the at least one target color region corresponding to the at least one type region into a target image. This embodiment improves the smoothness of the target image's colors and also helps reduce noise.

1. An image generation method, comprising:

classifying pixels of an image to be processed, and determining at least one pixel type;

determining at least one type region from the at least one pixel type;

for each type region of the at least one type region, adjusting the color of the type region to a target color to obtain a target color region corresponding to the type region;

and combining the at least one target color region corresponding to the at least one type region into a target image.

2. The method of claim 1, wherein the classifying pixels of the image to be processed to determine at least one pixel type comprises:

for each pixel of the image to be processed, importing the pixel into a preset fully convolutional network to obtain the probability that the pixel belongs to each specified pixel type in a specified pixel type set, and setting the specified pixel type with the highest probability value as the pixel type of the pixel, wherein the specified pixel type set comprises: a solid color type, a progressive type, a highlight type, a shadow type, a texture type, and a noise type.

3. The method of claim 2, wherein said determining at least one type region from said at least one pixel type comprises:

setting an image region composed of the solid-color-type and progressive-type pixels as a change region;

setting an image region composed of the highlight-type and shadow-type pixels as a stable region;

setting an image region composed of the highlight-type, shadow-type and texture-type pixels as an adjustment region;

and setting an image region composed of the noise-type pixels as a deletion region.

4. The method according to claim 3, wherein the adjusting, for a type region of the at least one type region, the color of the type region to a target color, resulting in a target color region corresponding to the type region, comprises:

in response to the type region being a change region and the target color being a single color, adjusting the color of the type region to the single color to obtain a first target color region.

5. The method according to claim 3, wherein the adjusting, for a type region of the at least one type region, the color of the type region to a target color, resulting in a target color region corresponding to the type region, comprises:

in response to the type region being a change region and the target color being multicolor, performing color clustering on the type region to obtain at least one color patch;

determining a proximity relationship between the at least one color patch;

and setting the color of a target image in a preset proximity matching image library that has the same proximity relationship as the target color of the type region to obtain a second target color region, wherein the target image includes the multiple colors.

6. The method of claim 5, wherein said determining a proximity relationship between said at least one color patch comprises:

in response to the change region comprising a to-be-processed region composed of the solid-color-type pixels, generating a boundary line of the to-be-processed region;

determining connected regions between the color patches through the boundary line;

and determining the proximity relationship between the at least one color patch based on the connected regions.

7. The method according to claim 3, wherein the adjusting, for a type region of the at least one type region, the color of the type region to a target color, resulting in a target color region corresponding to the type region, comprises:

in response to the type region being an adjustment region, extracting texture features of the adjustment region;

and setting the color of a target texture image in a preset texture image library that has the same texture features as the target color of the type region to obtain a third target color region.

8. The method according to claim 3, wherein the adjusting, for a type region of the at least one type region, the color of the type region to a target color, resulting in a target color region corresponding to the type region, comprises:

in response to the type region being a deletion region, fitting the type region to obtain a fitted image region;

and superimposing the fitted image region on the type region to obtain a fourth target color region.

9. The method of claim 8, wherein said fitting the type region to obtain a fitted image region comprises:

acquiring the noise probability of each pixel in the type region to obtain a noise probability matrix corresponding to the type region;

accumulating the noise probabilities in the noise probability matrix to obtain a noise probability field;

calculating the image gradient of the pixels of the type region to obtain a gradient field of the type region;

training a fitting network based on the noise probability field and the gradient field;

and inputting the pixels of the type region into the fitting network to obtain a fitted image region corresponding to the type region.

10. The method of claim 1, wherein prior to said classifying pixels of the image to be processed and determining at least one pixel type, the method further comprises:

preprocessing the image to be processed to remove noise from the image to be processed, wherein the preprocessing comprises at least one of the following: color adjustment, brightness adjustment, and contrast adjustment.

11. An image generation apparatus comprising:

a pixel type classification unit configured to classify pixels of an image to be processed and determine at least one pixel type;

a type region determination unit configured to determine at least one type region from the at least one pixel type;

a target color adjustment unit configured to, for each type region of the at least one type region, adjust the color of the type region to a target color to obtain a target color region corresponding to the type region;

and a target image generation unit configured to combine the at least one target color region corresponding to the at least one type region into a target image.

12. An electronic device, comprising:

one or more processors;

a storage device having one or more programs stored thereon,

wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-10.

13. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1 to 10.

Technical Field

Embodiments of the present disclosure relate to the field of image processing technologies, and in particular, to an image generation method, an image generation device, an electronic device, and a computer-readable medium.

Background

Images generally include a variety of colors and content and can convey rich information. To further improve the correlation between an image and a scene and meet scene requirements, the colors of an existing image can be adjusted. Existing methods for adjusting image color share a defect: the color is not adjusted region by region according to the actual characteristics of the image, so adjusting the image as a whole easily introduces noise and abrupt, unsmooth colors.

Disclosure of Invention

This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.

Some embodiments of the present disclosure propose an image generation method, apparatus, electronic device and computer readable medium to solve the technical problems mentioned in the background section above.

In a first aspect, some embodiments of the present disclosure provide an image generation method, the method comprising: classifying pixels of an image to be processed, and determining at least one pixel type; determining at least one type area through the at least one pixel type; for the type area in the at least one type area, adjusting the color of the type area to a target color to obtain a target color area corresponding to the type area; and combining at least one target color region corresponding to the at least one type of region into a target image.

In a second aspect, some embodiments of the present disclosure provide an image generation apparatus, the apparatus comprising: the pixel type classification unit is configured to classify pixels of the image to be processed and determine at least one pixel type; a type region determining unit configured to determine at least one type region by the at least one pixel type; a target color adjusting unit configured to adjust, for a type region in the at least one type region, a color of the type region to a target color, resulting in a target color region corresponding to the type region; and the target image generating unit is configured to combine at least one target color region corresponding to the at least one type of region into a target image.

In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first aspect.

In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect.

The above embodiments of the present disclosure have the following beneficial effects: the image generation method of some embodiments of the present disclosure improves the smoothness of the resulting target image. Specifically, the reason a target image is not smooth enough is that colors are not adjusted according to the pixel types and type areas of the image to be processed. Based on this, the image generation method of some embodiments of the present disclosure first classifies the pixels of the image to be processed into a plurality of pixel types; classification at the pixel level can greatly improve the effectiveness of color adjustment. Then, a plurality of type areas are determined based on the plurality of pixel types, and each type area is adjusted to a corresponding target color area. This realizes independent color adjustment of the different type areas and improves the pertinence of the color adjustment. Moreover, because the color of each type area is adjusted independently, the final target image achieves both overall color adjustment and targeted adjustment of each type area. Thus, the smoothness of the target image's colors is greatly improved, and noise is reduced.

Drawings

The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent from the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements are not necessarily drawn to scale.

Fig. 1 is a schematic diagram of an application scenario of an image generation method of some embodiments of the present disclosure;

FIG. 2 is a flow diagram of some embodiments of an image generation method according to the present disclosure;

FIG. 3 is a flow diagram of further embodiments of an image generation method according to the present disclosure;

FIG. 4 is a flow diagram of still further embodiments of image generation methods according to the present disclosure;

FIG. 5 is a schematic structural diagram of some embodiments of an image generation apparatus according to the present disclosure;

FIG. 6 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure;

FIG. 7 is a diagram of the color patch structure within a change area;

FIG. 8 is a structure diagram of the proximity relationships corresponding to FIG. 7;

FIG. 9 is a structure diagram of the proximity relationships of a proximity matching image within the proximity matching image library.

Detailed Description

Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.

It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.

It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.

It is noted that the modifiers "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will understand them to mean "one or more" unless the context clearly indicates otherwise.

The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.

The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.

Fig. 1 is a schematic diagram of one application scenario of an image generation method according to some embodiments of the present disclosure.

After receiving the image to be processed 102, the electronic device 101 may classify each pixel included in the image to be processed 102 to obtain at least one pixel type 103. For example, the pixel type 103 may be a solid color type, a progressive type, a highlight type, a shadow type, a texture type, a noise type, and the like. The solid color type may mean that the color of a pixel is the same as the color of the surrounding pixels; the progressive type may mean that the color of a pixel approximates the color of the surrounding pixels; the highlight type may mean that the brightness of a pixel exceeds a set first brightness threshold; the shadow type may mean that the brightness of a pixel is below a set second brightness threshold; the texture type may mean that a pixel carries texture features; and the noise type may mean that a pixel has a high probability of being noise. Then, the electronic device 101 further analyzes each pixel type 103 to determine at least one type area 104 corresponding to the at least one pixel type 103. For example, the type area 104 may be a change area, a stable area, an adjustment area, or a deletion area. The change area may be an image area whose color is to be adjusted; the stable area may be regarded as an image area where no color adjustment is required; the adjustment area may be an image area whose colors are adjusted according to the characteristics of the image to be processed; and the deletion area may be an image area whose image content needs to be deleted. The electronic device 101 adjusts each type area 104 into a corresponding target color area and then combines the target color areas to form a target image. In this way, the pixel types and type areas of the image to be processed 102 are taken into account, and color adjustment is performed independently for the different type areas. As a result, the noise contained in the target image after the color adjustment of the image to be processed 102 is reduced, and the smoothness of the colors in the target image is improved.

It should be understood that the number of electronic devices 101 in fig. 1 is merely illustrative. There may be any number of electronic devices 101, as desired for implementation.

With continued reference to fig. 2, fig. 2 illustrates a flow 200 of some embodiments of an image generation method according to the present disclosure. The image generation method comprises the following steps:

step 201, classifying pixels of the image to be processed, and determining at least one pixel type.

In some embodiments, the execution subject of the image generation method (e.g., the electronic device 101 shown in fig. 1) may receive the image to be processed through a wired or wireless connection. It should be noted that the wireless connection may include, but is not limited to, a 3G/4G/5G connection, a WiFi connection, a Bluetooth connection, a WiMAX connection, a Zigbee connection, a UWB (Ultra WideBand) connection, and other wireless connection means now known or developed in the future.

In order to analyze the image to be processed in detail, the execution subject may analyze the pixels of the image to be processed and divide them into a plurality of different pixel types. For example, the execution subject may divide the pixels into adjustable pixels and non-adjustable pixels according to conditions such as the brightness and contrast of the pixels. Classifying the pixels makes it possible to adjust the color of the image to be processed accurately and effectively at the pixel level.

At step 202, at least one type area is determined by the at least one pixel type.

In some embodiments, different color processing manners may be adopted for image areas formed by pixels corresponding to different pixel types. I.e. the image areas made up of pixels of different pixel types may be different type areas. For example, an image area constituted by adjustable pixels is an image area in which colors can be adjusted; the image area including the non-adjustable pixels is an image area in which color adjustment is not possible. Therefore, the region division of the image to be processed is realized, and the pertinence and the effectiveness of color adjustment are improved.

Step 203, for the type area in the at least one type area, adjusting the color of the type area to a target color to obtain a target color area corresponding to the type area.

The execution subject may adjust the color of each type area to its corresponding target color, obtaining the corresponding target color areas. This greatly reduces abrupt color transitions after the adjustment.

And 204, combining at least one target color region corresponding to the at least one type of region into a target image.

The target color area is obtained by performing color adjustment on the type area. Therefore, the target image formed by combining the target color areas realizes the local color adjustment of the image to be processed, and the overall effect of the color adjustment is improved.

The image generation method of some embodiments of the present disclosure improves the smoothness of the resulting target image. Specifically, the reason a target image is not smooth enough is that colors are not adjusted according to the pixel types and type areas of the image to be processed. Based on this, the image generation method of some embodiments of the present disclosure first classifies the pixels of the image to be processed into a plurality of pixel types; classification at the pixel level can greatly improve the effectiveness of color adjustment. Then, a plurality of type areas are determined based on the plurality of pixel types, and each type area is adjusted to a corresponding target color area. This realizes independent color adjustment of the different type areas and improves the pertinence of the color adjustment. Moreover, because the color of each type area is adjusted independently, the final target image achieves both overall color adjustment and targeted adjustment of each type area. Thus, the smoothness of the target image's colors is greatly improved, and noise is reduced.

With continued reference to fig. 3, fig. 3 illustrates a flow 300 of some embodiments of an image generation method according to the present disclosure. The image generation method comprises the following steps:

step 301, for the pixel of the image to be processed, importing the pixel into a preset full convolution network to obtain the probability that the pixel belongs to each designated pixel type in the designated pixel type set, and setting the designated pixel type with the maximum probability value as the pixel type of the pixel.

The execution subject of the image generation method (e.g., the electronic device 101 shown in fig. 1) may import the pixels of the image to be processed into a preset fully convolutional network. The fully convolutional network may be trained on sample pixels of the various specified pixel types. After the network receives a pixel, it can calculate the probability that the pixel belongs to each specified pixel type in the specified pixel type set, where the set may include: a solid color type, a progressive type, a highlight type, a shadow type, a texture type, and a noise type. The execution subject may set the specified pixel type with the highest probability value as the pixel type of the pixel. In this way, classification of the pixels is achieved.
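A minimal sketch of this per-pixel classification in Python with NumPy, assuming the preset fully convolutional network is available as a callable `fcn` (a hypothetical name) that returns one probability per specified pixel type for every pixel:

```python
import numpy as np

# The six specified pixel types from claim 2, indexed in this (assumed) order.
PIXEL_TYPES = ["solid", "progressive", "highlight", "shadow", "texture", "noise"]

def classify_pixels(image: np.ndarray, fcn) -> np.ndarray:
    """Return an (H, W) map of pixel-type indices for an (H, W, 3) image."""
    probs = fcn(image)                # assumed output shape: (H, W, len(PIXEL_TYPES))
    return np.argmax(probs, axis=-1)  # keep the specified type with the highest probability
```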

And step 302, determining at least one type area through the at least one pixel type.

In some optional implementations of some embodiments, determining at least one type of region by the at least one pixel type may include: setting an image area composed of the pixels of the solid color type and the progressive type as a change area; setting an image area composed of the highlight type and shadow type pixels as a stable area; setting an image area composed of the pixels of the highlight type, the shadow type and the texture type as an adjustment area; an image area constituted by the above-described noise type pixels is set as a deletion area.

Pixels of the solid color type and the progressive type are easily affected by the target color, so the execution subject may set an image area composed of such pixels as the change area. Pixels of the highlight type and the shadow type are insensitive to color adjustment, so the execution subject may set an image area composed of such pixels as a stable area. An image area mixing highlight-type, shadow-type and texture-type pixels best embodies the texture characteristics of the image to be processed, so the execution subject may set such an area as an adjustment area. Noise-type pixels are prone to leaving noise points after color adjustment, so the execution subject may set an image area composed of noise-type pixels as a deletion area.
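Continuing the sketch above, the four type areas can be derived as boolean masks over the pixel-type map; note that, exactly as claimed, highlight and shadow pixels contribute to both the stable area and the adjustment area:

```python
import numpy as np

SOLID, PROGRESSIVE, HIGHLIGHT, SHADOW, TEXTURE, NOISE = range(6)

def split_type_areas(type_map: np.ndarray) -> dict:
    """Map an (H, W) pixel-type map to the change/stable/adjustment/deletion areas."""
    return {
        "change": np.isin(type_map, [SOLID, PROGRESSIVE]),
        "stable": np.isin(type_map, [HIGHLIGHT, SHADOW]),
        "adjustment": np.isin(type_map, [HIGHLIGHT, SHADOW, TEXTURE]),
        "deletion": type_map == NOISE,
    }
```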

Step 303, for the type area in the at least one type area, adjusting the color of the type area to a target color to obtain a target color area corresponding to the type area.

In some optional implementations of some embodiments, for a type region in the at least one type region, adjusting the color of the type region to a target color to obtain a target color region corresponding to the type region includes: and in response to the fact that the type area is a change area and the target color is a single color, adjusting the color of the type area to the single color to obtain a first target color area.

When the type area is a change area and the target color is a single color, the execution subject may directly adjust the color of the type area to the single color to obtain a first target color area.
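This single-color case reduces to a direct fill; a one-function sketch (the function name and NumPy representation are illustrative):

```python
import numpy as np

def fill_single_color(image: np.ndarray, change_mask: np.ndarray, target_rgb) -> np.ndarray:
    """Adjust the change area to one target color, yielding the first target color area."""
    out = image.copy()
    out[change_mask] = target_rgb  # e.g. (200, 30, 30)
    return out
```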

In some optional implementations of some embodiments, for a type region in the at least one type region, adjusting the color of the type region to a target color to obtain a target color region corresponding to the type region includes:

the method comprises the following steps of firstly, responding to the fact that the type area is a change area and the target color is multicolor, carrying out color clustering on the type area, and obtaining at least one color block.

When the type area is a change area and the target color is multicolor, the execution subject may perform color clustering on the type area to obtain at least one color patch in order to ensure smoothness of the adjusted color. A color patch can be considered to be a small image area consisting of identical or similar pixels.
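The clustering could be performed, for example, with k-means over the RGB values of the change-area pixels; a scikit-learn sketch (the patent does not name a specific clustering algorithm, so KMeans and the patch count are assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_color_patches(image: np.ndarray, change_mask: np.ndarray, n_patches: int = 5) -> np.ndarray:
    """Label each change-area pixel with a color-patch index; -1 marks other pixels."""
    pixels = image[change_mask].astype(float)                    # (N, 3) RGB values
    labels = KMeans(n_clusters=n_patches, n_init=10).fit_predict(pixels)
    patch_map = np.full(image.shape[:2], -1, dtype=int)
    patch_map[change_mask] = labels
    return patch_map
```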

In the second step, the proximity relationship between the at least one color patch is determined.

The change area is composed of pixels of a solid color type and a progressive type, and therefore, the resulting plurality of color patches have similarity. The execution subject may determine the proximity relationship between a plurality of color patches in units of color patches. The proximity relationship may be considered as a relationship such as overlapping or containing between color patches of different colors.

In the third step, the color of the target image in the preset proximity matching image library that has the same proximity relationship is set as the target color of the type area, obtaining a second target color area.

The preset proximity matching image library contains proximity matching images whose colors have been adjusted under various proximity-relationship conditions. The execution subject may set the color of the target image in the proximity matching image library that has the same proximity relationship as the target color of the type area, obtaining a second target color area. The target image may comprise the multiple colors, in which case the second target color area may include a plurality of different color regions. For example, if the target color is red, pink may also appear in the second target color area. This improves the pertinence and effectiveness of the color adjustment of the type area, improves the smoothness of the target image, and reduces abrupt color transitions.

FIG. 9 is a structure diagram of the proximity relationships of a proximity matching image within the proximity matching image library. The figure includes color patches a, b, c, d, e, f, g, h, and so on. Each color patch is represented by a circle, and the number at the lower right corner of a circle indicates how many nodes hang below that color patch. For example, the number at the lower right corner of color patch b is 2, indicating that the two nodes f and g lie under color patch b. The color patches corresponding to FIG. 8 below are a, b, c, d, g, and h. Accordingly, the colors of the target images corresponding to a, b, c, d, g, and h may be set as the target colors of the type areas in FIG. 8.
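One illustrative way to perform the library lookup, assuming each library entry stores its patch tree as a mapping from a patch to the patches it contains or overlaps (as in the node counts of FIG. 9); the signature scheme below is an assumption, not the patent's matching procedure:

```python
def proximity_signature(children: dict) -> frozenset:
    """Summarize a patch tree by per-patch child counts."""
    return frozenset((patch, len(kids)) for patch, kids in children.items())

def find_matching_target(signature: frozenset, library: dict):
    """Return the first library target image whose stored proximity relation matches."""
    for target_image, stored_tree in library.items():
        if proximity_signature(stored_tree) == signature:
            return target_image
    return None

# Usage (illustrative structure): sig = proximity_signature({"a": ["b", "c"], "b": [], "c": []})
```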

In some optional implementations of some embodiments, the determining the proximity relationship between the at least one color patch may include:

In the first step, in response to the change area comprising a to-be-processed area composed of solid-color-type pixels, a boundary line of the to-be-processed area is generated.

When the change area includes a to-be-processed area composed of pixels of a solid color type, the execution subject may perform boundary enhancement on the to-be-processed area, generating a boundary line of the to-be-processed area. For example, the execution subject may identify the pixel type to determine the boundary of the region to be processed, and then color the pixels at the boundary to obtain the boundary line.

In the second step, the connected regions between the color patches are determined through the boundary lines.

The boundary lines delimit the extent of the color patches. Relationships such as overlap and containment between color patches can be determined through the boundary lines, and the connected regions between the color patches can then be determined.

For example, there may be five color patches in the change area; the structure diagram of the corresponding color patches is shown in fig. 7. In fig. 7, "1" denotes the to-be-processed area; "2" denotes the first color patch; "3" denotes the second color patch; "4" denotes the third color patch; "5" denotes the fourth color patch; and "6" denotes the fifth color patch.

In the third step, the proximity relationship between the at least one color patch is determined based on the connected regions.

A connected region relates to at least two color patches. Color patches that overlap or contain one another can be identified through the connected regions, and the color patches related to a connected region are set as color patches with a proximity relationship. The proximity relationship may be an overlap or containment relationship between color patches. As can be seen from fig. 7, color patches 2, 3, and 6 are relatively independent; color patch 4 overlaps color patches 2 and 3; and color patch 5 is contained in color patch 3. The structure diagram of the corresponding proximity relationships is shown in fig. 8.
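A sketch of this adjacency derivation using SciPy connected-component labeling, treating pixels not covered by a boundary line as connected and declaring two color patches adjacent when one connected region touches both (the representation and function names are assumptions):

```python
import numpy as np
from scipy import ndimage

def patch_proximity(patch_map: np.ndarray, boundary_mask: np.ndarray) -> set:
    """Return pairs of patch labels joined by a connected region between boundary lines."""
    regions, n_regions = ndimage.label(~boundary_mask)  # connected areas delimited by boundaries
    adjacent = set()
    for region_id in range(1, n_regions + 1):
        patches = np.unique(patch_map[(regions == region_id) & (patch_map >= 0)])
        adjacent.update((int(a), int(b)) for a in patches for b in patches if a < b)
    return adjacent
```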

In some optional implementations of some embodiments, for a type region in the at least one type region, adjusting the color of the type region to a target color to obtain a target color region corresponding to the type region includes:

the first step, responding to the type area as the adjusting area, extracting the texture feature of the adjusting area.

The adjustment area is composed of pixels of the highlight type, the shadow type, and the texture type. When the type area is an adjustment area, the execution subject may extract the texture features of the adjustment area. Texture features can be characterized by the RGB values of the pixels.

In the second step, the color of a target texture image in a preset texture image library that has the same texture features is set as the target color of the type area, obtaining a third target color area.

The execution subject may match the texture features with the texture images in the texture image library to determine a successfully matched target texture image. Then, the execution subject may set the color of the target texture image as the target color of the type area, obtaining a third target color area. Here, the target texture image is an image related to the target color of the image to be processed. This improves the pertinence and effectiveness of the color adjustment of the type area, improves the smoothness of the target image, and reduces abrupt color transitions.

Specifically, the execution subject may cluster the extracted texture features to obtain a plurality of different texture clusters. Then, the execution subject may search the texture image library through the RGB values corresponding to the texture features. Each texture feature point is characterized by a Gaussian kernel function over its color values; the gradient of this Gaussian function is then calculated, and the kernel values are accumulated over the neighboring feature points. Here, C(r, g, b) denotes the Gaussian kernel function of the color values r, g and b, and r_0, g_0 and b_0 denote the R, G and B components of the color values of the neighboring texture feature points being searched.

To search the texture image library for texture features similar to the texture feature points, the execution subject may iteratively update the RGB values corresponding to the texture feature points until the target texture image is found, where t denotes the current iteration of the search and t + 1 the next iteration.

In this way, the texture images in the texture image library can be searched one by one until the target texture image is found.
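The iterative RGB update described above behaves like a mean-shift search in color space; the sketch below follows that interpretation (the exact update rule is not reproduced in the source, so the kernel-weighted mean, the sigma value, and the convergence test are assumptions):

```python
import numpy as np

def gaussian_kernel(p: np.ndarray, q: np.ndarray, sigma: float = 10.0) -> float:
    """Kernel over the RGB distance between a feature point p and a neighbor q."""
    return float(np.exp(-np.sum((p - q) ** 2) / (2.0 * sigma ** 2)))

def search_rgb(point: np.ndarray, neighbors: np.ndarray, steps: int = 20) -> np.ndarray:
    """Iteratively move an RGB feature point toward its kernel-weighted neighbor mean."""
    p = point.astype(float)
    for _ in range(steps):                       # t -> t + 1 iterations of the search
        w = np.array([gaussian_kernel(p, q) for q in neighbors])
        p_next = (w[:, None] * neighbors).sum(axis=0) / (w.sum() + 1e-12)
        if np.linalg.norm(p_next - p) < 1e-3:    # stop once the update converges
            return p_next
        p = p_next
    return p
```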

In some optional implementations of some embodiments, for a type region in the at least one type region, adjusting the color of the type region to a target color to obtain a target color region corresponding to the type region includes:

In the first step, in response to the type area being a deletion area, the type area is fitted to obtain a fitted image area.

When the type area is a deletion area, the color cannot be adjusted directly on it, since the deletion area is mainly composed of noise-type pixels. The execution subject may first fit the type area to obtain a fitted image area, which may be regarded as an image area with reduced noise.

In the second step, the fitted image area is superimposed on the type area to obtain a fourth target color area.

The execution subject may superimpose the fitted image area on the type area so that noise is reduced or eliminated in the fourth target color area.

In some optional implementations of some embodiments, the fitting the type region to obtain a fitted image region may include:

the method comprises the following steps of firstly, obtaining the noise probability of each pixel in the type area, and obtaining a noise probability matrix corresponding to the type area.

The execution subject may acquire a noise probability of each pixel within the type region, and then construct a noise probability matrix according to coordinates of the pixels within the type region or a positional relationship between the pixels.

In the second step, the noise probabilities in the noise probability matrix are accumulated to obtain a noise probability field.

The execution subject can accumulate the noise probabilities in the noise probability matrix by means of a Gaussian kernel function or the like to obtain a noise probability field. The Gaussian kernel function computes the Euclidean distance between each noise pixel and the center of the type region; in this application it can be used to represent the difference between two noise pixels.

The noise probability field is obtained by accumulating the kernel-weighted noise probabilities and mapping the result through an activation function:

F(x, y) = σ( NOISE_(x, y) · G(x, y) )

where F(x, y) is the probability field at the pixel with abscissa x and ordinate y; NOISE_(x, y) is the probability, determined by the preceding FCN network, that the pixel is noise; G(x, y) is the Gaussian kernel function; and σ(·) denotes the activation-function mapping applied over all the accumulated noise values.

The Gaussian kernel function may be written in its standard form as

G(x, y) = (1 / (2πσ²)) · exp( −(x² + y²) / (2σ²) )

where σ is the variance of the Gaussian function.
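A compact sketch of this accumulation, using a Gaussian filter for the kernel accumulation and a sigmoid for the activation mapping σ (both concrete choices are assumptions consistent with the definitions above):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_probability_field(noise_prob: np.ndarray, sigma: float = 3.0) -> np.ndarray:
    """Accumulate per-pixel noise probabilities NOISE(x, y) into the field F(x, y)."""
    accumulated = gaussian_filter(noise_prob, sigma=sigma)  # Gaussian-kernel accumulation
    return 1.0 / (1.0 + np.exp(-accumulated))               # sigmoid activation mapping
```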

In the third step, the image gradient of the pixels of the type region is calculated to obtain the gradient field of the type region.

The execution subject may further calculate the image gradient of the pixels of the type region to obtain the gradient field of the type region. The image gradient is the rate of change of a pixel in the X and Y directions (relative to its adjacent pixels); it is a two-dimensional vector with two components, the change along the X axis and the change along the Y axis.

The image gradient is calculated per pixel, where G_xy is the gradient value of the pixel with abscissa x and ordinate y, and thresh is the threshold used for the gradient thresholding.

Calculating the image gradient of each pixel yields the gradient field of the type region.
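A NumPy sketch of the thresholded gradient field (the source gives the formula only through the symbols G_xy and thresh, so the magnitude form below is an assumption):

```python
import numpy as np

def gradient_field(gray: np.ndarray, thresh: float = 10.0) -> np.ndarray:
    """Per-pixel gradient magnitude over X and Y, zeroed below the threshold."""
    gy, gx = np.gradient(gray.astype(float))  # rates of change along the Y and X axes
    g = np.hypot(gx, gy)                      # two-component gradient -> magnitude G_xy
    g[g < thresh] = 0.0                       # gradient thresholding with thresh
    return g
```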

In the fourth step, a fitting network is trained based on the noise probability field and the gradient field.

After the noise probability field and the gradient field are obtained, the execution subject can train the fitting network with the noise pixels corresponding to the noise probability field and the gradient field. The obtained noise field corresponds to a non-noise field: the value of each pixel in the non-noise field is the difference between 1 and the value of that pixel in the corresponding noise field. In particular, the execution subject may first train the attention function through the non-noise probability field and the gradient field. The attention function is calculated as follows:

A(x, y) = w_g · g(x, y) + w_n · (1.0 − F(x, y))

where A(x, y) is the network-training attention function for the pixel with abscissa x and ordinate y; w_g is the attention weight for edges; g(x, y) is the edge attention function for edge pixels at (x, y); and w_n is the weight of the non-noise attention term, whose value is the difference between 1 and the noise probability (the subscript n denotes the non-noise case of the pixel).

The edge attention function g(x, y) is defined in terms of the following quantities: W_N is the weight for edge connectivity; c_i is the category to which a gradient point belongs after clustering, with i indexing the i-th category; |C_i| is the total number of gradient points in the i-th category, whose count is passed through an activation mapping; W_G is the weight of the connectivity attention mechanism; σ(G_xy) is the activation function applied to the gradient value; and C_i is the i-th category after clustering is finished.

The input of the fitting network may be noise pixels, and its output may be pixels whose noise probability field and gradient field meet the set requirements, where the required noise probability field and gradient field are related to the target color of the type region. In this way, the fitting network can be made to output pixels that satisfy the noise probability field and the gradient field for the target color, so that the output pixels meet the requirements of the target color.

The loss function of the fitting network is computed from the following quantities: A_xy is the attention weight from the formula above; R_xy, G_xy and B_xy are the R, G and B values of the pixel with abscissa x and ordinate y; and R, G and B are the R, G and B values at the corresponding output pixel (x, y) of the fitting network.
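The loss formula itself appears above only through its symbol definitions; one form consistent with them is an attention-weighted squared color error, given here as an assumed reconstruction rather than the patent's verbatim formula:

```latex
\mathrm{loss} = \sum_{x,y} A_{xy}\left[ (R_{xy}-R)^2 + (G_{xy}-G)^2 + (B_{xy}-B)^2 \right]
```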

In the fifth step, the pixels of the type area are input into the fitting network to obtain a fitted image area corresponding to the type area.

After the fitting network is obtained, the execution subject can input the pixels of the type area into the fitting network to obtain a fitted image area corresponding to the type area.

Each pixel of the fitted image area is calculated as

P_out = (1 − noise(x, y)) · P_original + noise(x, y) · P_predict

where P_out is the final output denoised picture pixel; noise(x, y) is the probability that the pixel with abscissa x and ordinate y is noise; P_original is the pixel value of the original image; and P_predict is the pixel value predicted by the network.

This improves the pertinence and effectiveness of the color adjustment of the type area, improves the smoothness of the target image, and reduces abrupt color transitions.
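The output blend of the formula above is straightforward; a sketch assuming (H, W, 3) images and an (H, W) noise probability map:

```python
import numpy as np

def blend_denoised(original: np.ndarray, predicted: np.ndarray, noise: np.ndarray) -> np.ndarray:
    """P_out = (1 - noise(x, y)) * P_original + noise(x, y) * P_predict, per pixel."""
    alpha = noise[..., None]  # broadcast the (H, W) noise probability over RGB channels
    return (1.0 - alpha) * original + alpha * predicted
```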

And step 304, combining at least one target color region corresponding to the at least one type of region into a target image.

The content of step 304 is the same as that of step 204, and is not described in detail here.

With further reference to fig. 4, a flow 400 of further embodiments of an image generation method is shown. The flow 400 of the image generation method includes the following steps:

step 401, preprocessing the image to be processed, and removing noise in the image to be processed.

In some embodiments, the execution subject of the image generation method (e.g., the electronic device 101 shown in fig. 1) may preprocess the image to be processed so as to eliminate the more noticeable noise in it. The preprocessing may comprise at least one of: sharpness adjustment, brightness adjustment, and contrast adjustment.
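A minimal sketch of such a preprocessing pass covering the brightness and contrast items (the linear adjustment formula and its default values are illustrative, not taken from the patent):

```python
import numpy as np

def preprocess(image: np.ndarray, brightness: float = 0.0, contrast: float = 1.0) -> np.ndarray:
    """Brightness/contrast preprocessing to suppress obvious noise before classification."""
    out = image.astype(float)
    out = (out - 128.0) * contrast + 128.0 + brightness  # scale about mid-gray, then shift
    return np.clip(out, 0, 255).astype(np.uint8)
```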

Step 402, classifying pixels of the image to be processed, and determining at least one pixel type.

At step 403, at least one type area is determined by the at least one pixel type.

Step 404, for the type area in the at least one type area, adjusting the color of the type area to a target color to obtain a target color area corresponding to the type area.

And 405, combining at least one target color region corresponding to the at least one type of region into a target image.

The contents of steps 402 to 405 are the same as those of steps 201 to 204, and are not described in detail here.

As can be seen from fig. 4, compared with the description of some embodiments corresponding to fig. 2, some embodiments corresponding to fig. 4 remove noise before processing the image. Therefore, the data size of the subsequent processing of the image to be processed is smaller, and the smoothness of the obtained target image is better.

With further reference to fig. 5, as an implementation of the methods illustrated in the above figures, the present disclosure provides some embodiments of an image generation apparatus, which correspond to those illustrated in fig. 2, and which may be particularly applicable in various electronic devices.

As shown in fig. 5, the image generation apparatus 500 of some embodiments includes: a pixel type classification unit 501, a type region determination unit 502, a target color adjustment unit 503, and a target image generation unit 504. The pixel type classification unit 501 is configured to classify pixels of an image to be processed, and determine at least one pixel type; a type region determining unit 502 configured to determine at least one type region by the at least one pixel type; a target color adjusting unit 503 configured to, for a type region in the at least one type region, adjust the color of the type region to a target color, resulting in a target color region corresponding to the type region; a target image generating unit 504 configured to combine at least one target color region corresponding to the at least one type of region into a target image.

In an optional implementation manner of some embodiments, the pixel type classification unit 501 may include: a pixel type classification subunit (not shown in the figure), configured to, for a pixel of the to-be-processed image, import the pixel into a preset full convolution network, obtain a probability that the pixel belongs to each specified pixel type in a specified pixel type set, and set a specified pixel type with a highest probability value as a pixel type of the pixel, where the specified pixel type set includes: a solid color type, a progressive type, a highlight type, a shadow type, a texture type, and a noise type.

In an optional implementation manner of some embodiments, the type area determining unit 502 may include: a change area setting sub-unit (not shown), a stable area setting sub-unit (not shown), an adjustment area setting sub-unit (not shown), and a deletion area setting sub-unit (not shown). Wherein the change region setting subunit is configured to set an image region made up of the pixels of the solid color type and the progressive type as a change region; a stable region setting subunit configured to set an image region constituted by the pixels of the highlight type and the shadow type as a stable region; an adjustment region setting subunit configured to set an image region made up of the pixels of the highlight type, the shadow type, and the texture type as an adjustment region; and a deletion area setting subunit configured to set an image area constituted by the above-described noise-type pixels as a deletion area.

In an optional implementation manner of some embodiments, the target color adjusting unit 503 may include: and a first target color region adjusting subunit (not shown in the figure) configured to adjust the color of the type region to the monochrome to obtain a first target color region in response to the type region being the change region and the target color being the monochrome.

In an optional implementation manner of some embodiments, the target color adjusting unit 503 may include: a color patch acquiring subunit (not shown in the figure), a proximity relation determining subunit (not shown in the figure), and a second target color region adjusting subunit (not shown in the figure). The color block acquiring subunit is configured to perform color clustering on the type region to obtain at least one color block in response to that the type region is a change region and the target color is multicolor; a proximity relation determining subunit configured to determine a proximity relation between the at least one color patch; a second target color region adjusting subunit, configured to set a color of a target image in the preset proximity matching image library, which is the same as the proximity relation, as a target color of the type region, so as to obtain a second target color region; the target image includes the plurality of colors.

In an alternative implementation of some embodiments, the proximity relation determining subunit may include: a boundary line generating module (not shown in the figure), a connected component determining module (not shown in the figure), and a proximity relation determining module (not shown in the figure). The boundary line generating module is configured to generate a boundary line of the to-be-processed area in response to that the change area comprises a to-be-processed area formed by pixels of a pure color type; a connected region determining module configured to determine a connected region between the color patches through the boundary line; a proximity relation determination module configured to determine a proximity relation between the at least one color patch based on the connected region.

In an optional implementation manner of some embodiments, the target color adjustment unit may include: a texture feature extraction subunit (not shown in the figure) and a third target color region adjustment subunit (not shown in the figure). The texture feature extraction subunit is configured to, in response to the type region being an adjustment region, extract a texture feature of the adjustment region; and the third target color area adjusting subunit is configured to set the color of the target texture image in the preset texture image library, which is the same as the texture feature, as the target color of the type area, so as to obtain a third target color area.

In an optional implementation manner of some embodiments, the target color adjusting unit 503 may include: a fitted image region acquisition subunit (not shown in the figure) and a fourth target color region adjustment subunit (not shown in the figure). The fitted image region acquisition subunit is configured to, in response to the type area being a deletion area, fit the type area to obtain a fitted image area; and the fourth target color region adjustment subunit is configured to superimpose the fitted image area on the type area to obtain a fourth target color area.

In an optional implementation manner of some embodiments, the fitting image region obtaining subunit may include: a noise probability matrix obtaining module (not shown in the figure), a noise probability field obtaining module (not shown in the figure), a gradient field obtaining module (not shown in the figure), a fitting network training module (not shown in the figure) and a fitting image area obtaining module (not shown in the figure). The noise probability matrix acquisition module is configured to acquire the noise probability of each pixel in the type region to obtain a noise probability matrix corresponding to the type region; a noise probability field obtaining module configured to accumulate noise probabilities in the noise probability matrix to obtain a noise probability field; the gradient field acquisition module is configured to calculate the image gradient of the pixels of the type region to obtain a gradient field of the type region; a fitting network training module configured to train a fitting network based on the noise probability field and the gradient field; and the fitting image area acquisition module is configured to input the pixels of the type area into the fitting network to obtain a fitting image area corresponding to the type area.

In an optional implementation manner of some embodiments, the image generation apparatus 500 may further include: a noise removing unit (not shown in the figure) configured to perform a pre-processing on the image to be processed to remove noise in the image to be processed, wherein the pre-processing includes at least one of: sharpness adjustment, brightness adjustment, and contrast adjustment.

It will be understood that the elements described in the apparatus 500 correspond to various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 500 and the units included therein, and are not described herein again.

As shown in fig. 6, electronic device 600 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.

Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 6 may represent one device or may represent multiple devices as desired.

In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 609, or installed from the storage device 608, or installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of some embodiments of the present disclosure.

It should be noted that the computer readable medium described above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.

In some embodiments, the clients, servers may communicate using any currently known or future developed network Protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), the Internet (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.

The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: classifying pixels of an image to be processed, and determining at least one pixel type; determining at least one type area through the at least one pixel type; for the type area in the at least one type area, adjusting the color of the type area to a target color to obtain a target color area corresponding to the type area; and combining at least one target color region corresponding to the at least one type of region into a target image.

Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C + +, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. The described units may also be provided in a processor, and may be described as: a processor includes a pixel type classification unit, a type region determination unit, a target color adjustment unit, and a target image generation unit. Where the names of the units do not in some cases constitute a limitation of the units themselves, for example, the target image generation unit may also be described as a "unit for combining the acquired target color regions into a target image".

The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.

The foregoing description presents only preferred embodiments of the present disclosure and illustrates the technical principles employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, a technical solution formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.
