Generating a depth map for an input image using an example approximate depth map associated with an example similar image

Reader's note: This technology was designed and created by Debargha Mukherjee, Chen Wu, and Meng Wang on 2012-11-02. Its main content includes the following. A depth map is generated for the input image using an example approximate depth map associated with an example similar image. An image converter receives a two-dimensional image to be converted into a first three-dimensional image. The image converter calculates a feature-to-depth mapping function based on an approximate depth map associated with a second three-dimensional image. The image converter applies the feature-to-depth mapping function to a plurality of pixels of the two-dimensional image to determine a depth value for each of the plurality of pixels and generates the first three-dimensional image based on the depth values of the plurality of pixels of the two-dimensional image.

1. A method, comprising:

receiving a two-dimensional image to be converted into a first three-dimensional image;

identifying a second three-dimensional image from a plurality of three-dimensional images based on the second three-dimensional image being visually similar to the two-dimensional image to be converted, and wherein the second three-dimensional image was previously created from a conversion of another two-dimensional image;

calculating, by a processing device, a feature-to-depth mapping function based on an approximate depth map associated with the second three-dimensional image, wherein a feature comprises a color value of a pixel of the second three-dimensional image, and wherein calculating the feature-to-depth mapping function comprises:

identifying a plurality of binning blocks within a color space to reduce the number of colors considered by the mapping function;

associating a plurality of pixels of the second three-dimensional image with the plurality of binning blocks;

determining a depth value for each of the plurality of binning blocks; and

determining a weight value for each of the plurality of binning blocks, wherein the weight value for each binning block is a function of the occupancy ratio of that binning block relative to the average occupancy of the binning blocks, and wherein the weight values are used to smooth the mapping function by reassigning, as the depth value of at least one binning block, a weighted average of the depth values of neighboring binning blocks of the at least one binning block;

applying the feature-to-depth mapping function to a plurality of pixels of the two-dimensional image to determine a depth value for each of the plurality of pixels by associating a depth value for each of the plurality of pixels based on a feature of the two-dimensional image; and

generating the first three-dimensional image based on depth values of the plurality of pixels of the two-dimensional image.

2. The method of claim 1, further comprising:

logically dividing the second three-dimensional image into a plurality of partitions; and

calculating a plurality of feature-to-depth mapping functions associated with the second three-dimensional image, wherein each of the plurality of feature-to-depth mapping functions is associated with a different one of the plurality of partitions of the second three-dimensional image.

3. The method of claim 1, wherein applying the feature-to-depth mapping function comprises:

identifying a feature of each of the plurality of pixels of the two-dimensional image; and

calculating a depth value for each of the plurality of pixels of the two-dimensional image, wherein calculating the depth value for each of the plurality of pixels comprises calculating an n-linear interpolation of the depth values for the plurality of pixels of the two-dimensional image.

4. The method of claim 1, wherein the color space is Y-Cb-Cr.

5. A system, comprising:

a processing device;

a memory coupled to the processing device; and

an image converter, executed by the processing device from the memory, to perform operations comprising:

receiving a two-dimensional image to be converted into a first three-dimensional image;

identifying a second three-dimensional image from a plurality of three-dimensional images based on the second three-dimensional image being visually similar to the two-dimensional image to be converted, and wherein the second three-dimensional image was previously created from a conversion of another two-dimensional image;

identifying a feature-to-depth mapping function based on an approximate depth map associated with the second three-dimensional image, wherein a feature comprises a color value of a pixel of the second three-dimensional image, and wherein identifying the feature-to-depth mapping function comprises:

identifying a plurality of binning blocks within a color space to reduce the number of colors considered by the mapping function;

associating a plurality of pixels of the second three-dimensional image with the plurality of binning blocks;

determining a depth value for each of the plurality of binning blocks; and

determining a weight value for each of the plurality of binning blocks, wherein the weight value for each binning block is a function of the occupancy ratio of that binning block relative to the average occupancy of the binning blocks, and wherein the weight values are used to smooth the mapping function by reassigning, as the depth value of at least one binning block, a weighted average of the depth values of neighboring binning blocks of the at least one binning block;

applying the feature-to-depth mapping function to a plurality of pixels of the two-dimensional image to determine a depth value for each of the plurality of pixels by associating a depth value for each of the plurality of pixels based on a feature of the two-dimensional image; and

generating the first three-dimensional image based on depth values of the plurality of pixels of the two-dimensional image.

6. The system of claim 5, wherein the image converter is further to perform operations comprising:

logically dividing the second three-dimensional image into a plurality of partitions; and

calculating a plurality of feature-to-depth mapping functions associated with the second three-dimensional image, wherein each of the plurality of feature-to-depth mapping functions is associated with a different one of the plurality of partitions of the second three-dimensional image.

7. The system of claim 5, wherein applying the feature-to-depth mapping function comprises:

identifying a feature of each of the plurality of pixels of the two-dimensional image; and

calculating a depth value for each of the plurality of pixels of the two-dimensional image, wherein calculating a depth value for each of the plurality of pixels comprises calculating an n-linear interpolation of the depth values for the plurality of pixels of the two-dimensional image.

8. The system of claim 5, wherein the color space is Y-Cb-Cr.

9. A non-transitory machine-readable storage medium storing instructions that, when executed, cause a processing device to perform operations comprising:

receiving a two-dimensional image to be converted into a first three-dimensional image;

identifying a second three-dimensional image from a plurality of three-dimensional images based on the second three-dimensional image being visually similar to the two-dimensional image to be converted, and wherein the second three-dimensional image was previously created from a conversion of another two-dimensional image;

calculating, by the processing device, a feature-to-depth mapping function based on an approximate depth map associated with the second three-dimensional image, wherein a feature comprises a color value of a pixel of the second three-dimensional image, and wherein calculating the feature-to-depth mapping function comprises:

identifying a plurality of binning blocks within a color space to reduce the number of colors considered by the mapping function;

associating a plurality of pixels of the second three-dimensional image with the plurality of binning blocks;

determining a depth value for each of the plurality of binning blocks; and

determining a weight value for each of the plurality of binning blocks, wherein the weight value for each binning block is a function of the occupancy ratio of that binning block relative to the average occupancy of the binning blocks, and wherein the weight values are used to smooth the mapping function by reassigning, as the depth value of at least one binning block, a weighted average of the depth values of neighboring binning blocks of the at least one binning block;

applying the feature-to-depth mapping function to a plurality of pixels of the two-dimensional image to determine a depth value for each of the plurality of pixels by associating a depth value for each of the plurality of pixels based on a feature of the two-dimensional image; and

generating the first three-dimensional image based on depth values of the plurality of pixels of the two-dimensional image.

10. The non-transitory machine-readable storage medium of claim 9, the operations further comprising:

logically dividing the second three-dimensional image into a plurality of partitions; and

calculating a plurality of feature-to-depth mapping functions associated with the second three-dimensional image, wherein each of the plurality of feature-to-depth mapping functions is associated with a different one of the plurality of partitions of the second three-dimensional image.

11. The non-transitory machine-readable storage medium of claim 9, wherein applying the feature-to-depth mapping function comprises:

identifying a feature of each of the plurality of pixels of the two-dimensional image; and

calculating a depth value for each of the plurality of pixels of the two-dimensional image, wherein calculating the depth value for each of the plurality of pixels comprises calculating an n-linear interpolation of the depth values for the plurality of pixels of the two-dimensional image.

Technical Field

The present disclosure relates to the field of image processing, in particular to the conversion of monoscopic visual content to stereoscopic 3D.

Background

Advances in display technology have made display devices capable of conveying a stereoscopic perception of three-dimensional (3D) depth to a viewer increasingly popular. These 3D displays may be found in high-definition (HD) televisions, gaming devices, and other computing devices. The growing number of 3D displays has driven the need for additional 3D visual content (e.g., images, video). Conventionally, creating 3D content is a difficult and time-consuming process. For example, a content creator would capture objects using two cameras, combine the video or images from each camera, and use special software to make the 3D effect look accurate. This typically involves a lengthy, highly technical, and labor-intensive process. In addition, conventional techniques for converting two-dimensional (2D) images and videos to 3D may not scale, in terms of time and resources, when a large number of images or videos are to be converted. Furthermore, the conventional techniques are limited to converting specific types of images and videos and cannot be used for general 2D-to-3D conversion tasks.

Disclosure of Invention

The following presents a simplified summary of the disclosure in order to provide a basic understanding of some aspects of the disclosure. This summary is not an extensive overview of the disclosure. It is not intended to identify key or critical elements of the disclosure or to delineate any scope of the particular embodiments of the disclosure or any scope of the claims. Its sole purpose is to present some concepts of the disclosure in a simplified form as a prelude to the more detailed description that is presented later.

In one embodiment, an image converter receives a two-dimensional image to be converted into a first three-dimensional image. The image converter calculates a feature-to-depth mapping function based on an approximate depth map associated with a second three-dimensional image. The image converter applies the feature-to-depth mapping function to a plurality of pixels of the two-dimensional image to determine a depth value for each of the plurality of pixels and generates the first three-dimensional image based on the depth values of the plurality of pixels of the two-dimensional image.

In one embodiment, a method includes receiving a two-dimensional image to be converted into a first three-dimensional image; computing a feature-to-depth mapping function based on an approximate depth map associated with a second three-dimensional image; applying the feature-to-depth mapping function to a plurality of pixels of the two-dimensional image to determine a depth value for each of the plurality of pixels; and generating the first three-dimensional image based on the depth values of the plurality of pixels of the two-dimensional image.

Drawings

The present disclosure is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.

Fig. 1 is a block diagram illustrating an exemplary network architecture in which aspects of the present disclosure may be implemented.

Fig. 2A is a block diagram illustrating an image converter for generating a depth map using an approximate depth map associated with an example similar image, according to one embodiment.

Fig. 2B is a diagram illustrating an image divided into partitions for a feature-to-depth map, according to one embodiment.

FIG. 3 is a block diagram illustrating the flow of an image conversion process according to one embodiment.

FIG. 4 is a diagram illustrating a feature-to-depth mapping function, according to one embodiment.

FIG. 5 is a diagram illustrating a depth map computed in accordance with one embodiment.

FIG. 6 is a flow diagram illustrating a method for image conversion according to one embodiment.

FIG. 7 is a flow diagram illustrating a method for computing a feature-to-depth mapping function, according to one embodiment.

FIG. 8A is a flow diagram illustrating a method for applying a feature-to-depth mapping function to an input image, according to one embodiment.

Fig. 8B is a diagram illustrating one-dimensional linear interpolation between color depth binning blocks (bins) according to one embodiment.

FIG. 9 is a block diagram illustrating one embodiment of a computer system in accordance with an aspect of the present disclosure.

Detailed Description

Embodiments are described for generating a depth map for a two-dimensional (2D) input image. The depth map may be used to convert a 2D input image into a three-dimensional (3D) output image. The 2D input image may be provided by a user or selected from a database of available images. The image converter described herein may access another database storing 3D images. These 3D images may be initially captured in 3D or may have been previously converted from 2D to 3D. The image converter may identify a 3D example image from the database that is visually similar to the 2D input image. The visually similar images may share a number of features with the 2D input image, such as similar colors, similar subjects, possibly taken at similar locations, and so on. Although details are provided herein, particularly with respect to images as examples, for clarity of explanation, it should be appreciated that such details may be equally applicable to other types of media, such as videos, documents, text (e.g., tweets), animated content, and so forth, as applicable.

Since the visually similar example image is a 3D image, depth information can be known for the image. If the depth information is not known or readily available, then depth interpolation techniques may be used for the calculations. The depth information may be stored in the form of a depth map. The depth map may include a depth value for each pixel in the 3D example image that is used to render a 3D effect for the image. Since the depth map is associated with the 3D example image, it will not directly relate to the 2D input image. Thus, the depth map may be referred to as an approximate depth map with respect to the 2D input image. However, since the 3D example image is visually similar to the 2D input image, this approximate depth map may serve as a good starting point for generating a final depth map for the 2D input image.

Using the 3D example image and the approximate depth map, in one embodiment, the image converter can generate a feature-to-depth mapping function that correlates a feature value, such as a color, of each pixel or group of pixels in the 3D example image with a depth value. The image converter may apply the function to known feature values of the 2D input image to generate a depth value for each pixel of the 2D input image. These depth values may form a final depth map for the 2D input image. Using this final depth map, the image converter is able to render a 3D output image based on the 2D input image. Accordingly, a 2D input image can be efficiently converted into a 3D output image.

Embodiments of the conversion techniques described herein provide fully automatic conversion of 2D visual content to 3D. This can allow individuals to avoid the costly and time consuming process of capturing 3D images or manually converting existing 2D images to 3D. Using a feature-to-depth mapping function corresponding to an image that is visually similar to the input image allows for more accurate prediction of the depth values of the image. This, in turn, can lead to a more accurate and realistic rendering of the 3D output image. In addition, using a large database of images for comparison increases the probability that one or more visually similar images can be found, facilitating the conversion of many types of visual content. In one embodiment, there are millions of images available for comparison.

Fig. 1 is a block diagram illustrating an exemplary network architecture in which aspects of the present disclosure may be implemented. According to one embodiment, the network architecture 100 may include one or more servers 102 in communication with one or more user devices 130, 132, 134 over one or more networks 140. The network 140 may be a Local Area Network (LAN), a wireless network, a telephone network, a mobile communications network, a Wide Area Network (WAN) such as the internet, or similar communications system. The user devices 130, 132, 134 may be any type of computing device, including server computers, gateway computers, desktop computers, laptop computers, mobile communication devices, cellular phones, smart phones, handheld computers, tablets, or similar computing devices. The user devices 130, 132, 134 may be variously configured with different features to enable viewing of visual content such as images, videos, and the like.

The server 102 may include network accessible server-based functionality, various data stores, and/or other data processing devices. Server 102 may be implemented by a single machine or a cluster of machines. The server 102 may include, for example, the computer system 900 of fig. 9. In one embodiment, server 102 includes an image converter 110 and a storage device 120. In another embodiment, the storage device 120 may be external to the server 102 and may be connected to the server 102 through a network or other connection. In other implementations, server 102 may include different and/or additional components that are not shown here so as not to obscure the present disclosure. Storage device 120 may include one or more mass storage devices, which may include, for example, flash memory, magnetic or optical disks, tape drives, read-only memory (ROM), random access memory (RAM), erasable programmable memory (e.g., EPROM or EEPROM), or any other type of storage medium.

In one embodiment, the storage device 120 includes an image data store that includes a plurality of 3D images and a plurality of 2D images or videos. For at least some 2D or 3D images, associated depth maps may also be stored in the storage device 120. In one embodiment, the depth map includes a depth value for each pixel (or each group of pixels) in the image. In another embodiment, for each 3D image, a feature-to-depth mapping database including predetermined feature-to-depth mapping functions may be provided and stored in the storage device 120.

The image converter 110 can receive a user request to convert a 2D input image into a 3D output image. The 2D input image to be converted may be an image previously provided by a user or a computer application and stored in the storage device 120, or an image provided by the user along with the request. The image converter 110 can identify a 3D example image (or images) that is visually similar to the 2D image being converted, as well as an approximate depth map associated with the example image. The example similar images and associated approximate depth maps can be stored in the storage device 120. Visually similar 3D example images may be identified by the characteristics they share with the 2D input image, such as similar color, subject, location, environment, and so on. The approximate depth map may include a depth value for each pixel (or group of pixels) in the 3D example image used to render the 3D effect of the image. In another embodiment, the approximate depth map may not be associated with the 3D example image, but may instead be selected by the user or by the image converter 110 from the available depth maps for use in converting the 2D input image to 3D. As will be described below, the image converter 110 can use the approximate depth map to automatically convert a 2D input image into a 3D output image.

Fig. 2A is a block diagram illustrating an image converter for generating a depth map using an approximate depth map associated with an example similar image, according to an embodiment of the present disclosure. In one embodiment, the image converter 110 may include a feature processing module 212, a feature-to-depth mapping module 214, a depth map generation module 216, and an image rendering module 218. This deployment of modules may be a logical division, and in other embodiments, these modules or other components may be combined together or divided into additional components, depending on the particular implementation. In one embodiment, the storage device 120 may include an image data store 222 and a feature-to-depth mapping database 224 that the image converter 110 uses to perform more accurate 2D-to-3D conversions.

The image converter 110 can receive a 2D input image (e.g., from a user or a computer application) that is to be converted to a 3D output image and can find an example similar image and an associated approximate depth map in the data store 222. In one embodiment, a threshold is defined for considering an image from the data store 222 similar to the input image being converted, such as the number of features or characteristics that must be shared between the input image and the image from the data store 222. Once a similar image is identified, a feature-to-depth mapping function is determined for the 3D similar image using the approximate depth map associated with the similar image. In another embodiment, the feature-to-depth mapping function may be determined in advance for the 3D similar image and stored in the feature-to-depth mapping database 224.

In one embodiment, the feature-to-depth mapping function is determined by feature processing module 212 and feature-to-depth mapping module 214. In particular, in one embodiment, each 3D image (or frame of 3D video) in image data store 222 includes or is associated with a stereoscopic image pair that creates the illusion of depth when viewed. Typically, the pair of images includes two images (e.g., a left image and a right image) of the same subject taken from slightly different viewpoints (approximately equivalent to the distance between the human eyes). Thus, each point in the two images will have a slight offset (measured in pixels) that is proportional to the distance from the point of view. This offset may be referred to as disparity. In one implementation, the depth value of each pixel in the approximate depth map associated with the 3D example image may be equal to or proportional to the calculated disparity.

To generate the feature-to-depth mapping function for the 3D example image, features of the 3D example image may be correlated with depth values in the approximate depth map. In one embodiment, the feature processing module 212 can identify one or more features of the 3D example image from the image data store 222. In one embodiment, the feature may include a color value for each pixel in the example image. In other implementations, some other feature may be used, such as motion (or associated motion vectors), position, texture, edges, or image feature-based gradient histograms, such as Scale Invariant Feature Transform (SIFT). The feature-to-depth mapping described herein may be used with any of these or other features.

In one embodiment, the feature processing module 212 can determine the number of binning blocks in a color space (e.g., YUV, RGB, YCbCr). Binning blocks may be used to reduce the number of colors considered. YUV, for example, may have a 24-bit color value, which may be too large for a typical feature-to-depth mapping function. Thus, the feature processing module 212 can combine different color values into a manageable number of binning blocks. For an example image from the image data store 222, the feature processing module 212 can associate each pixel with an appropriate binning block based on the color value of the pixel. Using the approximate depth map, the feature processing module 212 may also merge (e.g., average) the depth values of the pixels in a given binning block to generate a combined depth value for that binning block. Similar combinations may be performed for the colors in each of the remaining binning blocks until a series of data points is obtained.
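
To make the binning step concrete, here is a minimal sketch (my own illustration, not code from the disclosure) that quantizes each pixel's YCbCr color into a binning block and averages the approximate-depth values falling into each block. The bin counts, array shapes, and NumPy usage are assumptions for the example.

```python
import numpy as np

def bin_depths_by_color(example_ycbcr, approx_depth, bins_per_channel=(32, 16, 16)):
    """Average the approximate-depth values of all pixels that share a color binning block.

    example_ycbcr: HxWx3 uint8 array (Y, Cb, Cr) of the 3D example image.
    approx_depth:  HxW float array, the approximate depth map of the example image.
    Returns (bin_depth, bin_count), flat arrays of length prod(bins_per_channel).
    """
    by, bcb, bcr = bins_per_channel
    y  = (example_ycbcr[..., 0].astype(np.int64) * by)  // 256
    cb = (example_ycbcr[..., 1].astype(np.int64) * bcb) // 256
    cr = (example_ycbcr[..., 2].astype(np.int64) * bcr) // 256
    bin_index = (y * bcb + cb) * bcr + cr                      # flat binning-block id per pixel

    n_bins = by * bcb * bcr
    bin_sum = np.bincount(bin_index.ravel(), weights=approx_depth.ravel(), minlength=n_bins)
    bin_count = np.bincount(bin_index.ravel(), minlength=n_bins)
    # Combined (averaged) depth per binning block; empty blocks stay at 0.
    bin_depth = np.divide(bin_sum, bin_count, out=np.zeros(n_bins), where=bin_count > 0)
    return bin_depth, bin_count
```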

The feature-to-depth mapping module 214 can generate a feature-to-depth mapping function for the example image based on one or more features of the image and the depth values determined by the feature processing module 212. In one embodiment, the feature used to generate the function may be color. Each pixel of the example image has a known color value (e.g., determined from the image metadata) and may have a depth value determined by the feature processing module 212 for its color binning block. The feature-to-depth mapping function may aggregate the depth values of pixels of the same or similar color, based on the assumption that objects of the same or similar color in the image will also have the same or similar depth values. In other implementations, other features, such as texture, position, or shape, may be used instead of or in addition to color to generate the feature-to-depth mapping function. The resulting function can ultimately receive as input the color value (or other feature value) of a given pixel or other point and output a depth value for that pixel. Additional details of the feature-to-depth mapping function are described below with respect to fig. 4.

In one embodiment, rather than generating only one feature-to-depth mapping function for a 3D example image, the feature-to-depth mapping module 214 can generate multiple feature-to-depth mapping functions for a single example image in the image data store 222. For example, different functions may be generated for different regions of the image. In one embodiment, the image from the image data store 222 may be logically divided into a plurality of partitions (e.g., two, four). The image may be tiled differently depending on the distribution of the dominant spatial variation in the image. For example, for an outdoor image of mountains and sky, it may be sufficient to divide the image horizontally into a top half and a bottom half. For an indoor image, it may be better to use more spatial components, where, for example, the left half and the right half of the image may also have different depth maps. FIG. 2B is an example image 250 illustrating a logical division into four equal partitions 252-258. The feature-to-depth mapping module 214 can treat each partition 252-258 as its own individual image and determine the feature-to-depth mapping function specific to that partition in the manner described above. In one embodiment, the feature-to-depth mapping module 214 can store the multiple functions corresponding to the image partitions in the database 224. In another embodiment, the multiple functions may be combined (e.g., using a linear combination based on the distance from a pixel P to the centers C1-C4 of the partitions) into a single function stored in the database 224. The linear combination may eliminate potential "boundary effects" resulting from switching between feature-to-depth mapping functions across different partitions. When applying the feature-to-depth mapping to the 2D input image, the final depth values may be interpolated between the different regions. When a depth value is calculated for a pixel P, its distances from the partition centers C1-C4 are first calculated and may be denoted d1, d2, d3, and d4. Using the feature-to-depth mapping of partition 252, a depth value v1 is given to pixel P. Similarly, v2, v3, and v4 are derived from the other partitions 254-258 using their feature-to-depth mappings. The final depth value of P can be calculated by combining v1 through v4 with weights inversely proportional to d1 through d4; for example, the depth of P is (v1/d1 + v2/d2 + v3/d3 + v4/d4)/(1/d1 + 1/d2 + 1/d3 + 1/d4).
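
The inverse-distance combination above can be written directly in code. This is a minimal sketch assuming each partition's feature-to-depth function has already been evaluated at pixel P (giving v1 through v4) and that the partition centers are known; all concrete values below are invented for the example.

```python
import math

def combine_partition_depths(p, centers, values, eps=1e-6):
    """Combine per-partition depth estimates v_i for pixel p with weights 1/d_i.

    p:       (x, y) pixel coordinates.
    centers: partition centers, e.g. [C1, C2, C3, C4].
    values:  depth from each partition's feature-to-depth mapping, e.g. [v1, v2, v3, v4].
    Implements depth(P) = sum(v_i / d_i) / sum(1 / d_i).
    """
    weights = []
    for cx, cy in centers:
        d = math.hypot(p[0] - cx, p[1] - cy)
        weights.append(1.0 / max(d, eps))   # guard against p lying exactly on a center
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Example: four partition centers of a 640x480 image and the depths they predict for P.
depth_p = combine_partition_depths(
    (100, 50),
    centers=[(160, 120), (480, 120), (160, 360), (480, 360)],
    values=[0.2, 0.4, 0.5, 0.7],
)
```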

When the generation of the feature-to-depth mapping function is complete, the feature-to-depth mapping function for the image from the image data store 222 may be stored in a corresponding entry in the feature-to-depth mapping database 224. In one embodiment, the feature-to-depth mapping information may be kept in some structure other than a database (e.g., a list of key-value pairs). In one embodiment, the feature-to-depth mapping database 224 may be a separate data structure (as shown); however, in other embodiments, the mapping information may be stored in metadata of the image data store 222. The image converter 110 can use the feature-to-depth mapping database 224 to automatically generate a 3D output image from the 2D input image.

In one embodiment, the depth map generation module 216 can compute a depth map for the 2D input image based on the feature-to-depth mapping function determined by the feature-to-depth mapping module 214. The image data and/or metadata received with the input image may include, for example, a color value associated with each pixel in the input image. A depth value may thus be calculated for each pixel by applying a color value (or other suitable feature value) as an input to a feature-to-depth mapping function generated based on the 3D example image and the approximate depth map. The depth map generation module 216 can perform calculations to make this determination and can store the resulting depth values (e.g., the final depth map) in the storage device 120. An example of the resulting depth map 510 is shown in fig. 5.
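
For instance, if the per-binning-block depths are kept as a flat lookup table (as in the earlier binning sketch), applying the mapping to the 2D input image reduces to computing each pixel's bin index and indexing the table. This is an illustrative sketch under the same assumed bin layout, not the converter's actual implementation.

```python
import numpy as np

def apply_feature_to_depth(input_ycbcr, bin_depth, bins_per_channel=(32, 16, 16)):
    """Look up a depth value for every pixel of the 2D input image by its color binning block.

    input_ycbcr: HxWx3 uint8 array of the 2D input image in YCbCr.
    bin_depth:   flat array of per-bin depth values (e.g., from bin_depths_by_color).
    Returns an HxW depth map for the input image.
    """
    by, bcb, bcr = bins_per_channel
    y  = (input_ycbcr[..., 0].astype(np.int64) * by)  // 256
    cb = (input_ycbcr[..., 1].astype(np.int64) * bcb) // 256
    cr = (input_ycbcr[..., 2].astype(np.int64) * bcr) // 256
    bin_index = (y * bcb + cb) * bcr + cr
    return bin_depth[bin_index]
```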

The image rendering module 218 can render a 3D output image (e.g., a stereoscopic image pair) based on the input image and the depth values calculated by the depth map generation module 216. In one embodiment, rendering module 218 generates the 3D image using Depth Image Based Rendering (DIBR) techniques. DIBR technology can render a 2D image based on another 2D image and a per-pixel depth map. The original 2D image becomes one of two views constituting the 3D image, and the DIBR-rendered 2D image becomes a second view. In one embodiment, the original 2D image is a left view and the rendered 2D image is a right view. In other embodiments, this may be reversed.

As an example, given a per-pixel depth map, a displacement map may be generated that indicates how far each pixel should be moved from the left view to the right view. The relationship between depth and displacement may be approximately linear; however, some parameters may be adjusted to control how much objects "stand out" from the screen or how far they appear to extend behind the screen. Once the displacement map has been generated, the pixels may be translated from the left view to the right view to render the right view, while ensuring that pixels in front occlude pixels behind when multiple pixels from the left view are mapped to the same pixel in the rendered right image space. Once all pixels have been shifted, some holes may still remain in the rendered right view. In-painting (image interpolation) techniques may be employed to fill the holes from neighboring pixels in the rendered image. This yields the final rendered right view. To achieve a high-quality rendering, the rendering may be performed on an intermediate, higher-resolution pixel grid by interpolating the left view and the per-pixel depth map. Once the rendered image is obtained at the higher intermediate resolution, it can be scaled back down to the desired resolution. In one embodiment, the rendered image may be filtered using, for example, cross-bilateral filtering. Cross-bilateral filtering is a way of smoothing an image while respecting the geometry of objects in the image. For example, when an image is filtered, pixel values may be combined with values from neighboring pixels to remove aliasing, noise, and other undesirable features. However, this may average values that do not belong to the same object in the image and can therefore produce incoherent results. Cross-bilateral filtering attempts to solve this problem by using multiple source images (rather than just one) to help identify objects. As a result, during filtering, the values of neighboring pixels can be weighted by their screen-space distance, and an expression that takes the depth difference into account is also used to determine whether two pixels belong to the same object. This may help prevent blurring in the resulting filtered image.
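
The shift-and-fill idea behind DIBR can be illustrated with a simplified sketch. The linear depth-to-disparity scaling, the occlusion rule, and the nearest-left-neighbor hole filling here are deliberate simplifications of the process described above, not the disclosed rendering pipeline (which also mentions in-painting, higher-resolution intermediate rendering, and cross-bilateral filtering).

```python
import numpy as np

def render_right_view(left_view, depth_map, max_disparity=16):
    """Render a right view by shifting left-view pixels by a depth-derived displacement.

    left_view: HxWx3 uint8 image (the original 2D image used as the left view).
    depth_map: HxW float array in [0, 1]; larger values are treated as closer.
    """
    h, w, _ = left_view.shape
    right = np.zeros_like(left_view)
    written = np.full((h, w), -1, dtype=np.int32)             # disparity already placed at each target
    disparity = (depth_map * max_disparity).astype(np.int32)  # roughly linear depth-to-shift relation

    for y in range(h):
        for x in range(w):
            xr = x - disparity[y, x]                          # shift pixel from left view to right view
            if 0 <= xr < w and disparity[y, x] > written[y, xr]:
                right[y, xr] = left_view[y, x]                # closer pixels occlude farther ones
                written[y, xr] = disparity[y, x]
        for x in range(1, w):                                 # crude hole filling from the left neighbor
            if written[y, x] < 0 and written[y, x - 1] >= 0:
                right[y, x] = right[y, x - 1]
                written[y, x] = written[y, x - 1]
    return right
```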

Fig. 3 is a block diagram illustrating an image conversion process flow according to an embodiment of the present disclosure. Various modules and components may be described with respect to their role in generating a depth map for an input image using an approximate depth map associated with an example similar image.

In one embodiment, the process flow 300 begins with receiving a 2D input image at block 310. At block 320, example similar images are identified. At block 322, an approximate depth map is identified. The approximate depth map may be associated with an example similar image or may be associated with some other visually similar image. At block 330, a feature-to-depth mapping function is determined for the example image. The feature-to-depth mapping module 214 can determine a feature-to-depth mapping function for the example similar image based on the features of the example similar image 320 and the approximate depth map 322. At block 340, a depth map is generated for the input image based on the feature-to-depth mapping function. The depth map generation module 216 can generate a depth map by applying a feature value (e.g., color) of each pixel in the input image to a feature-to-depth mapping function to determine depth values. At block 350, the depth map may be used with a rendering technique (e.g., DIBR) to generate a 3D output image 360.

For simplicity of explanation, the flows and methods of the present disclosure are depicted and described as a series of acts. However, acts in accordance with the present disclosure may occur in various orders and/or concurrently, and with other acts not presented and described herein. For example, the identification of the approximate depth map of block 322 and the feature-to-depth mapping function of block 330 associated with the example similar image can be performed prior to receiving the input image at block 310, and the results can be stored, for example, in storage device 120. Once an example similar image is identified at block 320, the pre-processed approximate depth map and feature-to-depth mapping function can be retrieved and used to generate the depth map at block 340.

Fig. 4 is a diagram illustrating a feature-to-depth mapping function according to an embodiment of the present disclosure. In this embodiment, the feature-to-depth mapping function 400 is based on the color of each pixel in the image. The function 400 may receive as input a color value of a pixel or other point and output a depth value for that pixel. For ease of understanding, fig. 4 illustrates depth as a function of color in one dimension. Those skilled in the art will recognize that in practice the function may show similar characteristics in a multi-dimensional color space (e.g., YUV, RGB). In one embodiment, binning blocks are used to reduce the number of colors considered by the function. YUV, for example, may have a 24-bit color value, which may be larger than desired for a typical feature-to-depth mapping function. In some cases, the more than sixteen million different colors representable in a 24-bit color scheme may make computing the color-to-depth mapping function too computationally intensive and time consuming. In fig. 4, the color values have been reduced to eight binning blocks (A-H), although some other number may be used in other embodiments. For an image in the image data store 222 whose color values and depth values are known, the depth values corresponding to the pixels having a color in a certain binning block (e.g., binning block A) are combined (e.g., averaged) to generate a summarized depth value. This value may be stored as part of the function 400. Similar combinations can be performed for the colors in each of the remaining binning blocks until a series of data points is obtained. The feature-to-depth mapping module 214 can perform a form of polynomial fitting (e.g., curve fitting) to generate the feature-to-depth mapping function 400. The resulting function may be expressed as f(color) = depth or, in the case of YUV color, as f(Y, U, V) = depth. Thus, the depth value for a given pixel may be calculated as a function of the YUV color value of that pixel. The function 400 may be stored in the feature-to-depth mapping database 224.
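
As an illustration of the curve-fitting step, the sketch below fits a low-order polynomial to hypothetical per-bin (color, depth) data points like those of FIG. 4 and applies the resulting f(color) = depth function to an input image. The eight bins, their depth values, the polynomial degree, and the placeholder image are invented for the example.

```python
import numpy as np

# Hypothetical data points for the one-dimensional example of FIG. 4:
# eight binning-block centers on a 0-255 color axis and each block's summarized depth.
bin_centers = np.array([16, 48, 80, 112, 144, 176, 208, 240], dtype=float)
bin_depths  = np.array([0.9, 0.7, 0.65, 0.5, 0.45, 0.3, 0.25, 0.1])

# Polynomial (curve) fit: f(color) = depth.
coeffs = np.polyfit(bin_centers, bin_depths, deg=3)
feature_to_depth = np.poly1d(coeffs)

# Applying the function to the color value of each pixel of a 2D input image
# yields its depth map (placeholder random image used here).
input_colors = np.random.randint(0, 256, size=(480, 640)).astype(float)
depth_map = np.clip(feature_to_depth(input_colors), 0.0, 1.0)
```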

Fig. 5 is a diagram illustrating a depth map calculated according to an embodiment of the present disclosure. The depth map generation module 216 may compute a depth map for the input image based on a feature-to-depth mapping function, such as the feature-to-depth mapping function 400 determined by the feature-to-depth mapping module 214. In the depth map 510 of fig. 5, the shading is proportional to the distance of the surfaces of the pictured objects from the viewpoint in the source image 500. In this embodiment, darker shades indicate depths closer to the viewpoint, while lighter shades indicate depths farther away. In other embodiments, the shading may be reversed.

Fig. 6 is a flowchart illustrating a method for image conversion according to an embodiment of the present disclosure. Method 600 may be performed by processing logic that comprises hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device to perform hardware emulation), or a combination thereof. The method 600 may generate a depth map for an input image using an approximate depth map associated with one or more example similar images. In one embodiment, method 600 may be performed by image converter 110 as shown in FIG. 1.

Referring to fig. 6, at block 610, the method 600 receives a 2D input image for conversion into 3D. The input image may be received from a user (e.g., through a user interface provided by image converter 110), from another computer application (e.g., through an application interface such as an API), or from some other source.

At block 620, the method 600 identifies (e.g., by calculation or retrieval from storage) a feature-to-depth mapping function for the input image based on the approximate depth map associated with the similar 3D image. In one implementation, the feature-to-depth mapping module 214 can aggregate depth values of pixels of the same or similar features (e.g., colors) in the example similar image, e.g., based on an assumption that objects in the image having the same or similar colors will also have the same or similar depth values. The resulting function, such as function 400, can ultimately receive as input a color value (or other characteristic value) of a certain pixel or other point in the input image and output a depth value for that pixel. The feature-to-depth mapping function may be stored in database 224, for example.

At block 630, the method 600 applies the feature-to-depth mapping function identified in block 620 to pixels in the input image to determine a depth value for each pixel. The depth map generation module 216 can generate a resulting depth map for the input image. In one embodiment, depth map generation module 216 can apply color values (or other suitable feature values) as inputs to the feature-to-depth mapping function generated at block 620. This results in a depth value being calculated for each pixel of the input image. The depth values may be combined based on the locations of their associated pixels to form a depth map 510 as shown in fig. 5.

At block 640, the method 600 generates a stereo pair for the 3D output image. In one implementation, the image rendering module 218 can render the 3D output image based on the input image and the depth values calculated by the depth map generation module 216 at block 630. In one embodiment, the rendering module 218 can generate a 3D image using Depth Image Based Rendering (DIBR) techniques. In one embodiment, the 3D output image comprises a second image to be used in conjunction with the input image to form a stereoscopic pair. According to an embodiment, the second image may be a left image or a right image and may be generated by the image rendering module 218. The first and second images together may form a 3D output image. The 3D output image may be stored or displayed for viewing by a user.

Fig. 7 is a flow diagram illustrating a method for computing a feature-to-depth mapping function according to an embodiment of the present disclosure. Method 700 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device to perform hardware emulation), or a combination thereof. The method 700 can create a feature-to-depth mapping function that is used to convert monoscopic visual content to stereoscopic 3D. In one embodiment, method 700 may be performed by image converter 110 as shown in FIG. 1.

Referring to FIG. 7, at block 710, method 700 identifies one or more visually similar images from the image data store 222. In one embodiment, the data store 222 may include a library of 3D images and/or videos for which feature information (e.g., color) and depth values are known. In one embodiment, the library includes millions of images and/or videos. Any number of techniques may be used to identify visually similar images, such as fingerprinting, K nearest neighbors, and the like. At block 720, the method 700 identifies a number of binning blocks to use for a given color space. Binning blocks may be used to reduce the number of colors considered. In one embodiment where the color space is YCbCr, the space may be divided into binning blocks separately along the Y, Cb, and Cr components. In one embodiment, the Y component is divided into 32 binning blocks, while the Cb and Cr components are each divided into 16 binning blocks, resulting in a total of 8192 binning blocks. In other embodiments, some other number of binning blocks and/or some other color space may be used.

At block 730, the method 700 associates the pixels in the example similar image identified at block 710 with the binning blocks identified at block 720. For an example image from the image data store 222, the feature processing module 212 can associate each pixel with an appropriate binning block based on the color value of the pixel. Each binning block may thus include pixels of the same or similar color (or other relevant feature).

At block 740, the method 700 determines a depth value for each color binning block. For example, there may be multiple depth values mapped to each (Y, Cb, Cr) binning block. To generate the feature-to-depth mapping function, only one depth value per (Y, Cb, Cr) binning block may be used, so all depth values in one binning block can be combined into a single depth value. In one embodiment, the combined depth value for a binning block is simply the average of all example depths mapped to it. In another embodiment, the combined depth is the median of all example depths mapped to it. In yet another embodiment, a RANSAC (random sample consensus) method is used to detect outliers among the depth values, and the mean of the inliers is taken as the output. Assuming the total number of depth values is N, for each iteration of RANSAC a random subset of the depth values (of size N1) is selected and its mean is calculated. The differences between that mean and all N depth values are then calculated, and those with a difference less than a threshold are included in the inlier set. The iterations may stop when the inlier set no longer changes or a maximum number of iterations is reached. If the size of the inlier set is larger than M (which may be specified as a percentage of N), the inlier set is considered valid and its mean may be used as the summary of all depth values.
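
A minimal sketch of the RANSAC-style aggregation for a single binning block follows; the subset size, threshold, iteration limit, and inlier fraction are illustrative choices, since the values of N1, the threshold, and M are not specified here.

```python
import numpy as np

def robust_bin_depth(depths, subset_size=5, threshold=0.05, max_iters=20, min_inlier_frac=0.5):
    """Summarize the depth values mapped to one binning block while rejecting outliers.

    Follows the RANSAC-style procedure described above; falls back to the plain
    mean if no sufficiently large inlier set is found.
    """
    depths = np.asarray(depths, dtype=float)
    n = len(depths)
    if n == 0:
        return 0.0
    rng = np.random.default_rng(0)
    inliers = np.zeros(n, dtype=bool)
    for _ in range(max_iters):
        sample = rng.choice(depths, size=min(subset_size, n), replace=False)
        mean = sample.mean()
        new_inliers = np.abs(depths - mean) < threshold
        if np.array_equal(new_inliers, inliers):
            break                                  # inlier set stopped changing
        inliers = new_inliers
    if inliers.sum() >= min_inlier_frac * n:       # "M" expressed as a fraction of N
        return float(depths[inliers].mean())
    return float(depths.mean())
```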

At block 750, the method 700 calculates a weight value for each binning block. The feature processing module 212 can calculate, for each binning block, a weight that is a function of the ratio of the block's occupancy to the average occupancy of a binning block. Thus, if there are N pixels in the example image and the total number of binning blocks is B, the occupancy ratio of a binning block with n hits is n/(N/B), and the weight of that binning block may be a function of this ratio, i.e., w(n/(N/B)). In one embodiment, the function is w(x) = 1 - exp(-kx), where k is chosen such that w(x) is small for small occupancy ratios but quickly approaches 1 for non-small occupancy. In another embodiment, the weighting function is equal to 0 for values of x below a small threshold and 1 otherwise, in order to reject binning blocks whose very small occupancy may be due to noise.
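
Assuming the w(x) = 1 - exp(-kx) form above, the occupancy-based weights can be computed per binning block as in this small sketch (the value of k and the numerical guard are illustrative choices):

```python
import numpy as np

def occupancy_weights(bin_counts, k=5.0):
    """Weight each binning block by its occupancy relative to the average occupancy N/B.

    bin_counts: per-bin pixel counts n (N pixels spread over B binning blocks in total).
    Sparsely occupied blocks get weights near 0; well-occupied blocks approach 1.
    """
    bin_counts = np.asarray(bin_counts, dtype=float)
    average_occupancy = bin_counts.sum() / len(bin_counts)   # N / B
    occupancy_ratio = bin_counts / max(average_occupancy, 1e-12)
    return 1.0 - np.exp(-k * occupancy_ratio)
```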

At block 760, the method 700 generates the feature-to-depth mapping function. The feature-to-depth mapping module 214 can perform a form of polynomial fitting (e.g., curve fitting) to generate the feature-to-depth mapping function 400. The resulting function may be expressed as f(color) = depth or, in the case of YUV color, as f(Y, U, V) = depth. Thus, the depth value for a given pixel may be calculated as a function of the YUV color value of that pixel. The feature-to-depth mapping module 214 may also smooth the mapping function. Since the mapping function may be noisy, the module 214 can smooth it by convolving it with a predefined smoothing kernel. For example, binning block j may be reassigned a weighted average of its neighboring binning blocks, where the weight is the product of the smoothing kernel and the occupancy-based weight calculated at block 750. In one embodiment, the final depth D(j) of binning block j may be calculated as:

D(j) = Σ_{i ∈ N(j)} k(i, j) · w(i) · d(i) / Σ_{i ∈ N(j)} k(i, j) · w(i)

In this equation, N(j) represents the neighborhood of binning block j, k(i, j) is the smoothing kernel, w(i) is the occupancy-based weight of binning block i, and d(i) is its depth value from block 740. The feature-to-depth mapping function may be stored, for example, in a data store such as the database 224.

Fig. 8A is a flow diagram illustrating a method for applying a feature-to-depth mapping function to an input image according to an embodiment of the present disclosure. Method 800 may be performed by processing logic that comprises hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device to perform hardware emulation), or a combination thereof. The method 800 can apply a feature-to-depth mapping function to an input image to convert the input image to stereoscopic 3D. In one embodiment, method 800 may be performed by image converter 110 as shown in FIG. 1.

Referring to fig. 8A, at block 810, method 800 receives a 2D input image to be converted into 3D. At block 820, the method 800 identifies a feature (e.g., a color value) of each pixel in the input image. In one embodiment, the feature data may be stored in metadata associated with the image, and the feature processing module 212 may be capable of determining feature values, such as color values, from the metadata.

At block 830, the method 800 calculates a depth for each pixel in the input image based on the feature-to-depth mapping function determined from the approximate depth map (e.g., as discussed above in connection with fig. 7). In one embodiment, the depth map generation module 216 can perform an n-linear interpolation in the feature space to obtain the depth of a query pixel. For example, in the one-dimensional case, the two binning-block centers surrounding the feature value of the query pixel may be denoted C1 and C2, and the weighted depth values of those binning blocks may be D1 and D2, respectively. The depth map generation module 216 can calculate the distances between the query pixel's feature value and C1 and C2 and denote them d1 and d2, respectively. The depth map generation module 216 can then use these distances as weights to interpolate a depth value between the two binning blocks using a formula such as depth = (d2 · D1 + d1 · D2) / (d1 + d2), so that the binning block whose center is closer to the query value contributes more to the result.

fig. 8B is a diagram illustrating one-dimensional linear interpolation between color-depth binning blocks according to one embodiment. The inserted value 852 may be returned as a depth value for the query pixel. Using the histogram representation of the feature-to-depth map 850, the approximate map is not smooth. The depth value may change abruptly near the merge block boundary. In one embodiment, interpolation may be used on the approximated mapping. In one embodiment, a three-line (trilinear) interpolation of the mapping may be used, since the 3D mapping function f (y, u, v) ═ D is already present. Since the features tend to conform to the contours of objects in the image, good partitioning of the depth map that conforms to the boundaries of the objects can be achieved with this method.

Referring again to fig. 8A, at block 840, the method 800 generates a stereo pair of 3D output images. In one embodiment, the image rendering module 218 can render the 3D output image based on the input image and the depth values calculated by the depth map generation module 216 at block 830. In one embodiment, the rendering module 218 can generate a 3D image using Depth Image Based Rendering (DIBR) techniques. The output image may be stored or displayed for viewing by a user.

Fig. 9 illustrates a diagrammatic representation of a machine in the exemplary form of a computer system 900 within which a set of instructions, for causing the machine to perform one or more of the methodologies discussed herein, may be executed. In alternative implementations, the machine may be connected (e.g., networked) to other machines in a Local Area Network (LAN), an intranet, an extranet, or the internet. The machine may operate in the capacity of a server or a client machine in a client-server environment, or as a peer computer in a peer-to-peer (or distributed) network environment. The machine may be a Personal Computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a mobile telephone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Additionally, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. In one embodiment, computer system 900 may represent a server, such as server 102 running image converter 110.

Exemplary computer system 900 includes a processing device 902, a main memory 904 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 906 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 918, which communicate with each other via a bus 903. Any of the signals provided over the various buses described herein may be time multiplexed with other signals and provided over one or more common buses. Further, the interconnections between circuit components or blocks may be shown as buses or as separate signal lines. Each bus may alternatively be one or more separate signal lines, and each separate signal line may alternatively be a bus.

Processing device 902 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device may be a Complex Instruction Set Computing (CISC) microprocessor, Reduced Instruction Set Computing (RISC) microprocessor, Very Long Instruction Word (VLIW) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processing device 902 can also be one or more special-purpose processing devices such as an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), network processor, or the like. Processing device 902 is configured to execute the processing logic 926 for performing the operations and steps discussed herein.

The computer system 900 may further include a network interface device 908. Computer system 900 may also include a video display unit 910 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 912 (e.g., a keyboard), a cursor control device 914 (e.g., a mouse), and a signal generation device 916 (e.g., a speaker).

The data storage device 918 may include a machine-readable storage medium 928 on which is stored one or more sets of instructions 922 (e.g., software) embodying any one or more of the methodologies or functions described herein. The instructions 922 may also reside, completely or at least partially, within the main memory 904 and/or within the processing device 902 during execution thereof by the computer system 900; the main memory 904 and the processing device 902 also constitute machine-readable storage media. The instructions 922 may further be transmitted or received over a network 920 via the network interface device 908.

The machine-readable storage medium 928 may also be used to store instructions to perform methods for generating a depth map for an input image using an example approximate depth map associated with an example similar image as described herein. While the machine-readable storage medium 928 is shown in an exemplary embodiment to be a single medium, the term "machine-readable storage medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. A machine-readable medium includes any mechanism for storing information in a form (e.g., software, processing application) readable by a machine (e.g., a computer). The machine-readable medium may include, but is not limited to, magnetic storage media (e.g., floppy diskettes); optical storage media (e.g., CD-ROM), magneto-optical storage media; read Only Memory (ROM); random Access Memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM), flash memory, or other types of media for storing electronic instructions.

The foregoing description has given numerous specific details such as examples of specific systems, components, methods, etc., in order to provide a thorough understanding of the various aspects of the present disclosure. It will be apparent, however, to one skilled in the art that at least some embodiments of the present disclosure may be practiced without these specific details. In other instances, well-known components or methods are not described in detail or are presented in simple block diagram form in order to avoid unnecessarily obscuring the present disclosure. Accordingly, the specific details given are merely exemplary. The particular embodiments and these illustrative details may be varied and still be contemplated to be within the scope of the present disclosure.

Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or". Moreover, the word "example" or "exemplary" is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the word "example" or "exemplary" is intended to present concepts in a concrete fashion.

Although the operations of the methods are illustrated and described herein in a particular order, the order of the operations of each method may be changed such that some operations may be performed in a reverse order or some operations may be performed at least partially concurrently with other operations. In another embodiment, instructions or sub-operations of different operations may be performed in an intermittent and/or alternating manner. Moreover, not all illustrated acts may be required to implement various aspects of the disclosed subject matter. Further, those skilled in the art will understand and appreciate that a methodology could alternatively be represented as a series of interrelated states via a state diagram or events. Additionally, it should be appreciated that the methodologies disclosed in this specification are capable of being stored on an article of manufacture to facilitate transporting or transmitting such methodologies to computing devices. The term "article of manufacture" as used herein is intended to encompass a computer program accessible from any computer-readable device or storage media.
