Light field camera depth estimation system and method based on polar image color difference

Document No.: 951963  Publication date: 2020-10-30

Reading note: the technology "A light field camera depth estimation system and method based on polar image color difference" was designed and created by Sheng Hao, Cui Zhenglong, Yang Da, Wang Sizhe and Zhou Jianwei on 2020-07-03. Main content: the invention relates to a light field camera depth estimation system and method based on polar image color difference. The system comprises the following four modules: a polar image extraction module, a color difference calculation module, a color difference integrity optimization module and a depth estimation visualization output module. The invention mainly implements polar image generation for the light field camera, depth information estimation, and fused output of the three-dimensional scene depth map. The system can optimize the depth estimation result image according to the depth granularity parameter of the light field camera and the output format requirements, and automatically output a depth estimation image of the captured scene that meets those requirements.

1. A light field camera depth estimation system based on polar image color difference, characterized by comprising a polar image extraction module, a color difference calculation module, a color difference integrity optimization module and a depth estimation visualization output module;

the polar image extraction module is responsible for extracting a horizontal-direction polar image and a vertical-direction polar image from the original light field image obtained by shooting; acquiring the original light field image ImageLF and the parameter information P_ImageLF of the original light field image, setting the extraction parameters P_Epi of the polar images according to P_ImageLF, respectively extracting the horizontal-direction polar image Epi_h and the vertical-direction polar image Epi_v according to P_Epi, and transmitting them as input to the color difference calculation module;

the color difference calculation module is responsible for calculating, on the horizontal-direction and vertical-direction polar images transmitted by the polar image extraction module, the color differences on the two sides of the straight lines representing different depth labels; setting the number of depth labels NumD, constructing a correspondence function between depth labels and disparity values, constructing for each depth label θ the convolution kernel Filter_θ of color difference values on the two sides of the straight line under depth label θ, using the constructed Filter_θ to respectively perform convolution and summation over each color channel of the horizontal-direction polar image Epi_h and the vertical-direction polar image Epi_v to obtain, in turn, the color difference result Diff_h_θ(i,j) on the two sides of the horizontal-direction straight line and the color difference result Diff_v_θ(i,j) on the two sides of the vertical-direction straight line of each pixel under depth label θ, and transmitting them as input to the color difference integrity optimization module;

the color difference integrity optimization module is responsible for integrally optimizing the color difference results on the two sides of the horizontal-direction straight line and on the two sides of the vertical-direction straight line of each pixel under different depth labels obtained by the color difference calculation module; respectively calculating the horizontal-direction confidence DiffCov_h of each pixel on the horizontal-direction polar image Epi_h and the vertical-direction confidence DiffCov_v of each pixel on the vertical-direction polar image Epi_v, performing weighted-sum integration optimization on Diff_h_θ(i,j) and Diff_v_θ(i,j) to obtain the optimized color difference result DiffOpt_θ(i,j) on the two sides of the straight line of each pixel under depth label θ, and transmitting it as input to the depth estimation visualization output module;

the depth estimation visualization output module is responsible for visually outputting the optimized two-side color difference results of each pixel under different depth labels transmitted by the color difference integrity optimization module; for each pixel p(i,j), taking the depth label at which DiffOpt_θ(i,j) reaches its maximum over the different depth labels θ as the depth estimation label Depth(i,j), and, according to Depth(i,j) and the correspondence function between depth labels and disparity values, projecting it into a black-and-white pixel Color(i,j) whose brightness value varies with depth, expressed as a single-channel grayscale image visualization output of the depth estimation label distribution.

2. The light field camera depth estimation system based on polar image color difference according to claim 1, wherein in the polar image extraction module, the parameter information of the original light field image comprises the dimension length ImageH and dimension width ImageW of the original light field image, the lens array dimension length CapH and dimension width CapW of the original light field image, and the number of color channels of the original light field image; and the extraction parameters P_Epi of the polar images comprise the dimension length and width of the polar images and the number of color channels of the polar images.

3. The light field camera depth estimation system based on the color difference of the polar image according to claim 1, wherein the specific implementation of the polar image extraction module responsible for extracting the horizontal polar image and the vertical polar image from the original captured light field image includes:

1) original light field image acquisition: shooting a target scene with the light field camera to acquire the original light field image ImageLF and the parameter information P_ImageLF of the original light field image, where ImageLF is expressed as a three-dimensional matrix with three dimensions, namely the dimension length ImageH, the dimension width ImageW, and the number of color channels of the original light field image;

2) polar image extraction: calculating the dimension length EpiH_h and dimension width EpiW_h of the horizontal-direction polar image and the dimension length EpiH_v and dimension width EpiW_v of the vertical-direction polar image from the dimension length ImageH and dimension width ImageW of the original light field image and the lens array dimension length CapH and dimension width CapW of the original light field image,

EpiH_h=ImageH/CapH,

EpiW_h=ImageW,

EpiH_v=ImageH,

EpiW_v=ImageW/CapW,

further obtaining the horizontal-direction polar image Epi_h and the vertical-direction polar image Epi_v,

Epi_h=ImageLF[1:CapH:ImageH,:,:],

Epi_v=ImageLF[:,1:CapW:ImageW,:].
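The extraction in claim 3 amounts to strided slicing of the raw light field matrix. A minimal NumPy sketch with hypothetical dimensions (the claim's `1:CapH:ImageH` notation is read as start:step:stop, which NumPy writes as `start:stop:step`):

```python
import numpy as np

# Hypothetical light field: 7x7 lens array, each sub-view 10x12 pixels,
# 3 color channels, stored row/column-interleaved as described in claim 3.
CapH, CapW = 7, 7
ImageH, ImageW = 7 * 10, 7 * 12
ImageLF = np.random.rand(ImageH, ImageW, 3)

# Horizontal-direction polar image: every CapH-th row of the raw image,
# so EpiH_h = ImageH / CapH and EpiW_h = ImageW.
Epi_h = ImageLF[0:ImageH:CapH, :, :]

# Vertical-direction polar image: every CapW-th column,
# so EpiH_v = ImageH and EpiW_v = ImageW / CapW.
Epi_v = ImageLF[:, 0:ImageW:CapW, :]

print(Epi_h.shape)  # (10, 84, 3)
print(Epi_v.shape)  # (70, 12, 3)
```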

4. The light field camera depth estimation system according to claim 1, wherein the specific implementation of the color difference calculation module, responsible for calculating on the horizontal-direction and vertical-direction polar images transmitted by the polar image extraction module the color differences on the two sides of the straight lines representing different depth labels, includes:

1) constructing the correspondence function between depth labels and disparity values: calculating the disparity value corresponding to each depth label from the disparity range of the original light field image ImageLF and the set number of depth labels NumD, distributing the depth labels uniformly over the disparity values, where the disparity value corresponding to depth label θ is:

Dis_θ=Dis_min+(Dis_max-Dis_min)/NumD*θ,

wherein Dis_min is the minimum disparity value and Dis_max is the maximum disparity value;
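The uniform label-to-disparity mapping above is a single affine formula; a hedged sketch (the disparity range values below are hypothetical):

```python
def label_to_disparity(theta, num_d, dis_min, dis_max):
    """Dis_theta = Dis_min + (Dis_max - Dis_min) / NumD * theta,
    distributing NumD depth labels uniformly over the disparity range."""
    return dis_min + (dis_max - dis_min) / num_d * theta

# Example: 64 labels over a hypothetical disparity range [-2, 2)
assert label_to_disparity(0, 64, -2.0, 2.0) == -2.0
assert label_to_disparity(32, 64, -2.0, 2.0) == 0.0
```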

2) constructing the convolution kernel of color difference values on the two sides of the straight line: for each depth label θ, according to the polar image extraction parameters P_Epi and the parameter information P_ImageLF of the original light field image, using a Gaussian function centered on the straight line L_θ representing depth label θ on the horizontal-direction polar image Epi_h and on the vertical-direction polar image Epi_v respectively, and diffusing to the two sides of L_θ, constructing the convolution kernel Filter_θ of color difference values on the two sides of the straight line under depth label θ and distributing the internal weight parameters of Filter_θ:

wherein W_θ(i,j) represents the weight of pixel (i,j) under depth label θ, pixel (i,j) being i pixels from the straight line L_θ in the horizontal direction and j pixels from L_θ in the vertical direction; C is a constant term; d_θ(i,j) is the distance from pixel (i,j) to the straight line L_θ; a is a constant term adjusting the sensitivity of Filter_θ; and e is the natural constant;

3) calculating the color difference results on the two sides of the straight line: using the convolution kernel Filter_θ of color difference values on the two sides of the straight line under depth label θ, respectively performing convolution on the horizontal-direction polar image Epi_h and the vertical-direction polar image Epi_v, where the starting point of the convolution is the pixel at the upper-left corner of Epi_h or Epi_v and the stride is 1 pixel, moving 1 pixel to the right each time until the right edge of Epi_h or Epi_v is reached; multiplying Filter_θ with the pixel values on each color channel of Epi_h or Epi_v as an inner product and summing, to obtain the color difference result Diff_h_θ(i,j) on the two sides of the horizontal-direction straight line of each pixel p(i,j) on Epi_h under depth label θ and the color difference result Diff_v_θ(i,j) on the two sides of the vertical-direction straight line of each pixel p(i,j) on Epi_v; wherein θ1, θ2 and θ3 represent three depth labels, θ1 being the correct depth label and θ2 and θ3 being incorrect depth labels, and a is the constant term adjusting the sensitivity of the two-side color difference convolution kernel Filter_θ.
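The exact weight equation of Filter_θ is given in the patent only as an image, so the sketch below assumes a signed Gaussian falloff W = C·e^(−d²/a²)·sign(d) about the line L_θ, with opposite signs on the two sides so that convolution measures the color difference across the line; the kernel size, C and a are hypothetical:

```python
import numpy as np

def line_diff_kernel(disparity, half_h=3, half_w=6, C=1.0, a=2.0):
    """Signed Gaussian kernel about the line x = disparity * y (assumed
    form; the patent's exact weight equation is not reproduced here)."""
    ys, xs = np.mgrid[-half_h:half_h + 1, -half_w:half_w + 1]
    d = xs - disparity * ys              # signed distance to the line
    return C * np.exp(-(d ** 2) / a ** 2) * np.sign(d)

def two_side_color_diff(epi, kernel):
    """Color difference on the two sides of the line at every pixel of an
    EPI, summed over color channels (zero-padded, stride 1)."""
    kh, kw = kernel.shape
    pad_h, pad_w = kh // 2, kw // 2
    padded = np.pad(epi, ((pad_h, pad_h), (pad_w, pad_w), (0, 0)))
    H, W, Cch = epi.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + kh, j:j + kw, :]
            # inner product per channel; absolute responses summed
            out[i, j] = sum(abs(np.sum(patch[:, :, c] * kernel))
                            for c in range(Cch))
    return out

# An EPI whose left and right halves differ in color responds strongly
# near the color boundary and not at all in flat regions.
epi = np.zeros((7, 13, 3))
epi[:, 7:, :] = 1.0
diff = two_side_color_diff(epi, line_diff_kernel(disparity=0.0))
print(diff.shape)  # (7, 13)
```

Evaluating such kernels for every depth label θ gives the per-label responses from which the best-matching slope (and hence depth) is later selected.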

5. The light field camera depth estimation system based on polar image color difference according to claim 1, wherein the specific implementation of the color difference integrity optimization module, responsible for integrally optimizing the color difference results on the two sides of the horizontal-direction straight line and on the two sides of the vertical-direction straight line of each pixel under different depth labels obtained by the color difference calculation module, includes:

1) calculating the confidence of the color difference results on the two sides of the horizontal-direction straight line: for the color difference results Diff_h_θ(i,j) on the two sides of the horizontal-direction straight line of each pixel p(i,j) of the horizontal-direction polar image Epi_h under different depth labels θ, extracting the color difference results Diff_h_θ(i,j) on the two sides of the horizontal-direction straight line corresponding to the central view angle, calculating the mean DiffAvg_h(i,j) and the maximum DiffMax_h(i,j) of the color difference results on the two sides of the horizontal-direction straight line of each pixel p(i,j) over the different depth labels θ, and then calculating the confidence of the color difference results on the two sides of the horizontal-direction straight line of pixel (i,j):

DiffCov_h(i,j)=DiffAvg_h(i,j)/DiffMax_h(i,j);

2) calculating the confidence of the color difference results on the two sides of the vertical-direction straight line: for the color difference results Diff_v_θ(i,j) on the two sides of the vertical-direction straight line of each pixel p(i,j) of the vertical-direction polar image Epi_v under different depth labels θ, extracting the color difference results Diff_v_θ(i,j) on the two sides of the vertical-direction straight line corresponding to the central view angle, calculating the mean DiffAvg_v(i,j) and the maximum DiffMax_v(i,j) of the color difference results on the two sides of the vertical-direction straight line of each pixel p(i,j) over the different depth labels θ, and then calculating the confidence of the color difference results on the two sides of the vertical-direction straight line of pixel p(i,j):

DiffCov_v(i,j)=DiffAvg_v(i,j)/DiffMax_v(i,j);

3) calculating the integrally optimized color difference results on the two sides of the straight line: for each pixel p(i,j) under different depth labels θ, according to the confidence DiffCov_h(i,j) of the color difference results on the two sides of the horizontal-direction straight line and the confidence DiffCov_v(i,j) of the color difference results on the two sides of the vertical-direction straight line, performing weighted-sum integration optimization on Diff_h_θ(i,j) and Diff_v_θ(i,j) to obtain the optimized color difference result on the two sides of the straight line:
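Steps 1)–3) can be sketched as below; the patent's weighted-sum equation is given as an image, so using the two confidences directly as the weights is an assumption of this sketch:

```python
import numpy as np

def confidence(diff_stack):
    """DiffCov = DiffAvg / DiffMax over the depth labels, per pixel.
    diff_stack: (NumD, H, W) two-side color differences."""
    return diff_stack.mean(axis=0) / np.maximum(diff_stack.max(axis=0), 1e-12)

def fuse(diff_h, diff_v):
    """Weighted-sum integration of horizontal- and vertical-direction
    color differences (assumed weighting: the confidences themselves)."""
    cov_h = confidence(diff_h)   # DiffCov_h(i, j)
    cov_v = confidence(diff_v)   # DiffCov_v(i, j)
    # Broadcast the (H, W) weights over the depth-label axis
    return cov_h[None] * diff_h + cov_v[None] * diff_v   # DiffOpt_theta

rng = np.random.default_rng(0)
diff_h = rng.random((8, 5, 5))   # 8 depth labels on a 5x5 image
diff_v = rng.random((8, 5, 5))
opt = fuse(diff_h, diff_v)
print(opt.shape)  # (8, 5, 5)
```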

6. The system according to claim 5, wherein the central view angle is the view angle located at the middle row and middle column of the lens array dimension length and dimension width.

7. The light field camera depth estimation system based on polar image color difference according to claim 1, wherein in the depth estimation visualization output module, for pixel (i,j), the brightness value is:

Color(i,j)=Depth(i,j)*255/NumD.

8. A light field camera depth estimation method based on polar image color difference, characterized by specifically comprising the following steps:

1) acquiring the original light field image ImageLF and the parameter information P_ImageLF of the original light field image with a light field camera, setting the extraction parameters P_Epi of the polar images according to P_ImageLF, and respectively extracting the horizontal-direction polar image Epi_h and the vertical-direction polar image Epi_v according to P_Epi;

2) setting the number of depth labels NumD, constructing a correspondence function between depth labels and disparity values, constructing for each depth label θ the convolution kernel Filter_θ of color difference values on the two sides of the straight line under depth label θ, and using the constructed Filter_θ to respectively perform convolution on the horizontal-direction polar image Epi_h and the vertical-direction polar image Epi_v to obtain, in turn, the color difference result Diff_h_θ(i,j) on the two sides of the horizontal-direction straight line and the color difference result Diff_v_θ(i,j) on the two sides of the vertical-direction straight line of each pixel under depth label θ;

3) respectively calculating the horizontal-direction confidence DiffCov_h of each pixel on the horizontal-direction polar image Epi_h and the vertical-direction confidence DiffCov_v of each pixel on the vertical-direction polar image Epi_v, and performing weighted-sum integration optimization on Diff_h_θ(i,j) and Diff_v_θ(i,j) to obtain the optimized color difference result DiffOpt_θ(i,j) on the two sides of the straight line of each pixel under depth label θ;

4) for each pixel p(i,j), taking the depth label at which DiffOpt_θ(i,j) reaches its maximum over the different depth labels θ as the depth estimation label Depth(i,j), and, according to Depth(i,j) and the correspondence function between depth labels and disparity values, projecting it into a black-and-white pixel Color(i,j) whose brightness value varies with depth, expressed as a single-channel grayscale image visualization output of the depth estimation label distribution.
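A minimal sketch of step 4: select per pixel the depth label maximizing the optimized color difference, then render the label map as a single-channel image (the 255/NumD brightness scaling is an assumed normalization to [0, 255]):

```python
import numpy as np

def depth_map(diff_opt):
    """diff_opt: (NumD, H, W) optimized two-side color differences.
    Returns Depth(i, j): the label with the maximal response per pixel."""
    return np.argmax(diff_opt, axis=0)

def visualize(depth, num_d):
    """Project depth labels onto grayscale brightness values
    (assumed scaling: Color = Depth * 255 / NumD)."""
    return (depth.astype(np.float64) * 255.0 / num_d).astype(np.uint8)

diff_opt = np.zeros((16, 4, 4))
diff_opt[5, :, :] = 1.0          # label 5 dominates at every pixel
depth = depth_map(diff_opt)
img = visualize(depth, 16)
print(depth[0, 0], img[0, 0])    # 5 and int(5 * 255 / 16) = 79
```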

Technical Field

The invention relates to a light field camera depth estimation system and method based on polar image color difference, and in particular to a depth estimation system and method for a light field camera that automatically estimates depth from specific linear structures on polar images and visually outputs a depth distribution map, belonging to the fields of camera imaging and scene structure reconstruction.

Background

Light field cameras are applied to tasks such as three-dimensional reconstruction of scene structure. However, three-dimensional scene reconstruction from a light field image first requires estimating the depth of the scene and distinguishing and segmenting objects at different depths of field, so the accuracy of depth estimation is an important prerequisite for light field applications.

For depth estimation of light field images, two main approaches exist: depth estimation algorithms based on stereo matching [1, 2] and depth estimation algorithms based on polar images [3, 4]. Chen [1] and Yu [2] proposed depth estimation algorithms based on stereo matching. Their main idea is to extract the imaging of the same scene point in multiple lenses and select the depth corresponding to the minimum matching cost as the depth estimation result. The advantage is that the images formed in multiple lenses are used effectively: the multi-lens characteristic of a light field camera is fully exploited and the imaging differences of multiple angles are compared, so that the depth with the minimum matching cost is obtained by matching. The disadvantage, however, is very obvious: when estimating weakly textured regions and occluded regions, stereo matching produces very large errors. In a weakly textured region, a large area of the image shows a single color, and the matching results between similar colors over a large area show very small differences, so the depth label with the minimum matching cost is difficult to distinguish, which strongly affects the depth estimation result. For an occluded region, the angular samples at the correct depth are a group of pixels with different colors, so in a stereo-matching-based algorithm the matching cost at the correct depth increases when estimating the depth of the occluded region, the minimum matching cost is reached at a wrong depth, and the depth estimation result contains serious errors. Wanner [3, 4] proposed a depth estimation method on polar images, which avoids such problems better. The main idea of a polar-image-based depth estimation algorithm is to detect the linear structures on the polar image and thereby achieve depth estimation: because a scene point is observed over the continuous angles of the light field camera, the observed pixels form straight-line structures with different slopes on the polar image according to their depth differences, so by identifying these linear structures the depth of the scene can be estimated. Compared with stereo-matching-based depth estimation, polar-image-based depth estimation uses images from fewer angles but has stronger noise resistance, and in particular performs better on occluded regions; however, because the image information used is insufficient, its accuracy still needs improvement.

Currently, depth estimation and scene reconstruction from light field images have achieved related research results, but some technical difficulties remain for application in an actual light field camera: the depth estimation algorithm must have high computational efficiency, so that it can respond quickly to images shot by the user, and it must cope with most real complex scenes containing unpredictable noise, i.e., it must be highly robust. How to apply a fast and accurate depth estimation algorithm to a light field camera is the primary current problem. When a light field camera works in an unstructured environment and hard-to-predict noise often appears in the shooting scene, a depth estimation method that is robust to occlusion noise must be found in addition to addressing the computation speed of the algorithm.

[1] Chen C, Lin H, Yu Z, et al. Light field stereo matching using bilateral statistics of surface cameras[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2014: 1518-1525.

[2] Yu Z, Guo X, Lin H, et al. Line assisted light field triangulation and stereo matching[C]//Proceedings of the IEEE International Conference on Computer Vision. 2013: 2792-2799.

[3] Wanner S, Goldluecke B. Globally consistent depth labeling of 4D light fields[C]//2012 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2012: 41-48.

[4] Wanner S, Goldluecke B. Variational light field analysis for disparity estimation and super-resolution[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 36(3): 606-619.

[5] Zhang S, Sheng H, Li C, et al. Robust depth estimation for light field via spinning parallelogram operator[J]. Computer Vision and Image Understanding, 2016, 145: 148-159.

In summary, the prior art has the following technical disadvantages:

1) the existing algorithm has slow calculation speed;

2) the existing algorithm is sensitive to noise and low in robustness.

Disclosure of Invention

The technical problem solved by the invention: in order to overcome the technical defects of the prior art, in which light field camera depth estimation is easily disturbed by noise and insensitive to color differences, the invention provides a light field camera depth estimation system and method based on polar image color difference, so as to perform depth estimation of the shooting scene efficiently and with insensitivity to noise.

The technical solution of the invention is as follows:

the invention provides a light field camera depth estimation system based on polar image color difference, which comprises a polar image extraction module, a color difference calculation module, a color difference integrity optimization module and a depth estimation visualization output module;

the polar image extraction module is responsible for extracting a horizontal-direction polar image and a vertical-direction polar image from the original light field image obtained by shooting; acquiring the original light field image ImageLF and the parameter information P_ImageLF of the original light field image, setting the extraction parameters P_Epi of the polar images according to P_ImageLF, respectively extracting the horizontal-direction polar image Epi_h and the vertical-direction polar image Epi_v according to P_Epi, and transmitting them as input to the color difference calculation module;

the color difference calculation module is responsible for calculating, on the horizontal-direction and vertical-direction polar images transmitted by the polar image extraction module, the color differences on the two sides of the straight lines representing different depth labels; setting the number of depth labels NumD, constructing a correspondence function between depth labels and disparity values, constructing for each depth label θ the convolution kernel Filter_θ of color difference values on the two sides of the straight line under depth label θ, using the constructed Filter_θ to respectively perform convolution and summation over each color channel of the horizontal-direction polar image Epi_h and the vertical-direction polar image Epi_v to obtain, in turn, the color difference result Diff_h_θ(i,j) on the two sides of the horizontal-direction straight line and the color difference result Diff_v_θ(i,j) on the two sides of the vertical-direction straight line of each pixel under depth label θ, and transmitting them as input to the color difference integrity optimization module;

the color difference integrity optimization module is responsible for integrally optimizing the color difference results on the two sides of the horizontal-direction straight line and on the two sides of the vertical-direction straight line of each pixel under different depth labels obtained by the color difference calculation module; respectively calculating the horizontal-direction confidence DiffCov_h of each pixel on the horizontal-direction polar image Epi_h and the vertical-direction confidence DiffCov_v of each pixel on the vertical-direction polar image Epi_v, performing weighted-sum integration optimization on Diff_h_θ(i,j) and Diff_v_θ(i,j) to obtain the optimized color difference result DiffOpt_θ(i,j) on the two sides of the straight line of each pixel under depth label θ, and transmitting it as input to the depth estimation visualization output module;

the depth estimation visualization output module is responsible for visually outputting the optimized two-side color difference results of each pixel under different depth labels transmitted by the color difference integrity optimization module; for each pixel p(i,j), taking the depth label at which DiffOpt_θ(i,j) reaches its maximum over the different depth labels θ as the depth estimation label Depth(i,j), and, according to Depth(i,j) and the correspondence function between depth labels and disparity values, projecting it into a black-and-white pixel Color(i,j) whose brightness value varies with depth, expressed as a single-channel grayscale image visualization output of the depth estimation label distribution.

Further, in the polar image extraction module, the parameter information P_ImageLF of the original light field image comprises the dimension length and width of the original light field image, the lens array dimension length and width of the original light field image, and the number of color channels of the original light field image; and the extraction parameters P_Epi of the polar images comprise the dimension length and width of the polar images and the number of color channels of the polar images.

Further, the specific implementation of the function of extracting the horizontal polar image and the vertical polar image from the original captured light field image by the polar image extraction module includes:

1) original light field image acquisition: shooting a target scene with the light field camera to acquire the original light field image ImageLF and the parameter information P_ImageLF of the original light field image, where ImageLF is expressed as a three-dimensional matrix with three dimensions, namely the dimension length, dimension width and number of color channels of the original light field image; the parameter information of the original light field image comprises the dimension length ImageH and dimension width ImageW of the original light field image, the lens array dimension length CapH and dimension width CapW of the original light field image, and the number of color channels of the original light field image;

2) polar image extraction: calculating the dimension length EpiH_h and dimension width EpiW_h of the horizontal-direction polar image and the dimension length EpiH_v and dimension width EpiW_v of the vertical-direction polar image from the dimension length ImageH and dimension width ImageW of the original light field image and the lens array dimension length CapH and dimension width CapW of the original light field image,

EpiH_h=ImageH/CapH,

EpiW_h=ImageW,

EpiH_v=ImageH,

EpiW_v=ImageW/CapW,

further obtaining the horizontal-direction polar image Epi_h and the vertical-direction polar image Epi_v,

Epi_h=ImageLF[1:CapH:ImageH,:,:],

Epi_v=ImageLF[:,1:CapW:ImageW,:].

Further, the specific implementation of the color difference calculation module, responsible for calculating on the horizontal-direction and vertical-direction polar images transmitted by the polar image extraction module the color differences on the two sides of the straight lines representing different depth labels, includes:

1) constructing the correspondence function between depth labels and disparity values: calculating the disparity value corresponding to each depth label from the disparity range of the original light field image ImageLF and the set number of depth labels NumD, distributing the depth labels uniformly over the disparity values, where the disparity value corresponding to depth label θ is:

Dis_θ=Dis_min+(Dis_max-Dis_min)/NumD*θ,

wherein Dis_min is the minimum disparity value and Dis_max is the maximum disparity value;

2) constructing the convolution kernel of color difference values on the two sides of the straight line: for each depth label θ, according to the polar image extraction parameters P_Epi and the parameter information P_ImageLF of the original light field image, using a Gaussian function centered on the straight line L_θ representing depth label θ on the horizontal-direction polar image Epi_h and on the vertical-direction polar image Epi_v respectively, and diffusing to the two sides of L_θ, constructing the convolution kernel Filter_θ of color difference values on the two sides of the straight line under depth label θ and distributing the internal weight parameters of Filter_θ:

wherein W_θ(i,j) represents the weight of pixel (i,j) under depth label θ, pixel (i,j) being i pixels from the straight line L_θ in the horizontal direction and j pixels from L_θ in the vertical direction; C is a constant term; d_θ(i,j) is the distance from pixel (i,j) to the straight line L_θ; a is a constant term adjusting the sensitivity of Filter_θ; and e is the natural constant;

3) calculating the color difference results on the two sides of the straight line: using the convolution kernel Filter_θ of color difference values on the two sides of the straight line under depth label θ, respectively performing convolution on the horizontal-direction polar image Epi_h and the vertical-direction polar image Epi_v, where the starting point of the convolution is the pixel at the upper-left corner of Epi_h or Epi_v and the stride is 1 pixel, moving 1 pixel to the right each time until the right edge of Epi_h or Epi_v is reached; multiplying Filter_θ with the pixel values on each color channel of Epi_h or Epi_v as an inner product and summing, to obtain the color difference result Diff_h_θ(i,j) on the two sides of the horizontal-direction straight line of each pixel p(i,j) on Epi_h under depth label θ and the color difference result Diff_v_θ(i,j) on the two sides of the vertical-direction straight line of each pixel p(i,j) on Epi_v. As shown in FIG. 2, θ1, θ2 and θ3 represent three depth labels, θ1 being the correct depth label and θ2 and θ3 being incorrect depth labels; a is the constant term adjusting the sensitivity of the two-side color difference convolution kernel Filter_θ, and 3a appears in the figure as half the width of Filter_θ; the three parallelogram frames are schematic coverage areas of the two-side color difference convolution kernels Filter_θ; the kernel at label θ1 is the convolution kernel at the correct depth, while the kernels at labels θ2 and θ3 represent convolution kernels at wrong depths; by convolving the pixels on the polar image with the two-side color difference convolution kernels under the different depth labels, the color difference on the two sides of the kernel under each depth label is calculated.

Further, the color difference integrity optimization module performs integrity optimization on the color difference results on two sides of the horizontal direction straight line and on two sides of the vertical direction straight line of each pixel point under different depth labels obtained by the color difference calculation module; its specific implementation comprises the following steps:

(1) calculating the reliability of the color difference result on two sides of the horizontal direction straight line: for the color difference results Diff_hθ(i, j) on two sides of the horizontal direction straight line of each pixel point p(i, j) of the horizontal direction polar surface image Epi_h under different depth labels θ, extract the color difference result Diff_hθ(i, j) corresponding to the central view, where the central view is the view of the lens at the midpoint of the lens array's size length and size width; calculate the mean value DiffAvg_h(i, j) and the maximum value DiffMax_h(i, j) of the color difference results on two sides of the horizontal direction straight line of each pixel point p(i, j) over the different depth labels θ, and then calculate the reliability of the color difference result on two sides of the horizontal direction straight line of the pixel point (i, j):

DiffCov_h(i,j)=DiffAvg_h(i,j)/DiffMax_h(i,j);

2) calculating the reliability of the color difference result on two sides of the vertical direction straight line: for the color difference results Diff_vθ(i, j) on two sides of the vertical direction straight line of each pixel point p(i, j) of the vertical direction polar surface image Epi_v under different depth labels θ, extract the color difference result Diff_vθ(i, j) corresponding to the central view, where the central view is the view of the lens at the midpoint of the lens array's size length and size width; calculate the mean value DiffAvg_v(i, j) and the maximum value DiffMax_v(i, j) of the color difference results on two sides of the vertical direction straight line of each pixel point p(i, j) over the different depth labels θ, and then calculate the reliability of the color difference result on two sides of the vertical direction straight line of the pixel point (i, j):

DiffCov_v(i,j)=DiffAvg_v(i,j)/DiffMax_v(i,j);

3) calculating the overall-optimized color difference result on two sides of the straight line: for each pixel point p(i, j) under different depth labels θ, according to the reliability DiffCov_h(i, j) of the color difference result on two sides of the horizontal direction straight line and the reliability DiffCov_v(i, j) of the color difference result on two sides of the vertical direction straight line, the color difference result Diff_hθ(i, j) on two sides of the horizontal direction straight line and the color difference result Diff_vθ(i, j) on two sides of the vertical direction straight line are weighted, summed and integrated, obtaining the optimized color difference result on two sides of the straight line:

DiffOptθ(i,j) = (DiffCov_h(i,j)·Diff_hθ(i,j) + DiffCov_v(i,j)·Diff_vθ(i,j)) / (DiffCov_h(i,j) + DiffCov_v(i,j))

Further, in the depth estimation visualization output module, for a pixel point (i, j), the brightness value:

Color(i,j)=Depth(i,j)*255/NumD

in addition, the invention also provides a light field camera depth estimation method based on polar image difference maximization, which specifically comprises the following steps:

1) obtaining the original light field image ImageLF and the parameter information P_ImageLF of the original light field image with the light field camera, setting the extraction parameters P_Epi of the polar surface image according to P_ImageLF, and extracting the horizontal direction polar surface image Epi_h and the vertical direction polar surface image Epi_v respectively according to P_Epi;

2) setting the number NumD of depth labels, constructing the corresponding function between depth labels and parallax values, and for each depth label θ building the color difference convolution kernel Filterθ on two sides of the straight line under the depth label θ; using the built Filterθ, convolution operations are performed on the horizontal direction polar surface image Epi_h and the vertical direction polar surface image Epi_v respectively, obtaining in turn the color difference result Diff_hθ(i, j) on two sides of the horizontal direction straight line and the color difference result Diff_vθ(i, j) on two sides of the vertical direction straight line of each pixel point under the depth label θ;

3) respectively calculating the horizontal direction reliability DiffCov_h of each pixel point on the horizontal direction polar surface image Epi_h and the vertical direction reliability DiffCov_v of each pixel point on the vertical direction polar surface image Epi_v, and performing weighted summation and integration optimization on Diff_hθ(i, j) and Diff_vθ(i, j) to obtain the optimized color difference result DiffOptθ(i, j) on two sides of the straight line of each pixel point under the depth label θ;

4) for each pixel point p(i, j), the depth label corresponding to the maximum value of DiffOptθ(i, j) over the different depth labels θ is taken as the depth estimation label Depth(i, j); according to the depth estimation label Depth(i, j) and the corresponding function between depth labels and parallax values, the depth estimation labels are projected into black and white pixel points with different brightness values Color(i, j), expressed as a single-channel grayscale image visualization output of the depth estimation label distribution.

Compared with the prior art, the invention has the technical advantages that:

(1) The polar image color difference maximization algorithm used by the invention calculates the color difference from color channel values; its operation efficiency is high and it consumes few computing resources, effectively overcoming the long computation time and high memory usage of existing light field image depth estimation methods.

(2) Compared with the prior art, the polar image color difference maximization algorithm fully utilizes the strong anti-interference capability of the polar surface image; it has high noise robustness, is not easily influenced by environmental noise, and remedies the noise sensitivity problem of the prior art.

(3) Compared with published articles, the polar image color difference maximization algorithm used by the invention does not use the Euclidean distance between color histograms on the polar surface image to represent the color difference, but calculates the difference of values on each color channel, obtaining in turn the color difference results Diff_hθ(i, j) on two sides of the horizontal direction straight line and Diff_vθ(i, j) on two sides of the vertical direction straight line of each pixel point under the depth label θ. This optimization of the difference calculation makes the method more sensitive to color differences, produces more accurate depth estimates in regions of the scene with similar colors, and reduces the blurred division of object edge regions caused by similar colors.

(4) The algorithm used in the present invention builds on the one published in the paper "Robust Depth Estimation for Light Field via Spinning Parallelogram Operator" [5]. Unlike the algorithm described in that paper, in order to bring the theory closer to practical application, the invention optimizes the calculation of the cost value representing the color difference in the color difference calculation module. The original paper uses the Euclidean distance between color histograms of pixel points on the polar surface image as the cost value representing the color difference; its disadvantage is insensitivity to color information, so that when areas with similar colors appear in the scene, the algorithm of the original paper produces large errors. In the invention, the numerical values of each color channel are used to calculate the color difference cost value, which is more sensitive to color differences, improves algorithm efficiency, reduces algorithm complexity, and reduces resource consumption during calculation.

In a word, the invention aims at a high-efficiency, occlusion-and-noise-robust light field camera depth estimation method: after the light field camera shoots a target scene, effective depth estimation is realized by utilizing the specific linear structure of the polar surface image and its anti-interference performance against occlusion noise, adopting a color difference maximization strategy for identification, and introducing a reliability optimization method to weaken errors, yielding a noise-insensitive depth estimation method in complex scene environments. The invention provides a depth estimation method based on a polar image color difference maximization algorithm in the technical field of light field images, which not only solves the problem of depth estimation of a light field camera, but also solves the problems of long operation time, large resource consumption and susceptibility to noise in the prior art.

Drawings

FIG. 1 is an overall block diagram of the system of the present invention;

FIG. 2 is a schematic diagram of color difference convolution kernels on two sides of a straight line under different depth labels;

fig. 3 is a comparison of depth estimation results between different methods, where (a) is the central view image of the original light field image, (b) is the correct depth result, (c) is the depth estimation result of the bilateral consistency method, and (d) is the result of the present invention.

Detailed Description

The following further describes embodiments of the system of the present invention with reference to the drawings.

The invention provides a light field camera depth estimation system based on polar image color difference, and in a general system structure diagram of the invention shown in fig. 1, the system mainly comprises four modules, namely: the device comprises a polar image extraction module, a color difference calculation module, a color difference integrity optimization module and a depth estimation visualization output module.

As shown in fig. 1, the method of the present invention proceeds as follows. First, the original light field image ImageLF of the target scene and its parameter information P_ImageLF are acquired through the light field camera; from the size length ImageH and size width ImageW of the original light field image and the lens array size length CapH and size width CapW, the size length EpiH_h and size width EpiW_h of the horizontal direction polar surface image and the size length EpiH_v and size width EpiW_v of the vertical direction polar surface image are calculated, and the corresponding horizontal direction polar surface image Epi_h and vertical direction polar surface image Epi_v are extracted and passed as input to the color difference calculation module. In the color difference calculation module, the corresponding function between depth labels and parallax values is first constructed; then the color difference convolution kernel Filterθ on two sides of the straight line is constructed for each depth label θ. Using Filterθ, convolution operations are performed on Epi_h and Epi_v respectively, obtaining the color difference result Diff_hθ(i, j) on two sides of the horizontal direction straight line of each pixel point p(i, j) on Epi_h under the depth label θ and the color difference result Diff_vθ(i, j) on two sides of the vertical direction straight line of each pixel point p(i, j) on Epi_v, which are passed as input to the color difference integrity optimization module.
In the color difference integrity optimization module, the horizontal direction reliability DiffCov_h of each pixel point on the horizontal direction polar surface image Epi_h and the vertical direction reliability DiffCov_v of each pixel point on the vertical direction polar surface image Epi_v are calculated respectively; according to DiffCov_h and DiffCov_v, Diff_hθ(i, j) and Diff_vθ(i, j) are weighted, summed and integrated, increasing the weight where reliability is high and reducing it where reliability is low, to obtain the color difference result DiffOptθ(i, j) on two sides of the straight line, which is passed as input to the depth estimation visualization output module. In the depth estimation visualization output module, for each pixel point p(i, j), the depth label corresponding to the maximum value of DiffOptθ(i, j) over the different depth labels θ is taken as the depth estimation label Depth(i, j); the depth estimation labels are projected into black and white pixel points with different brightness values Color(i, j) and displayed as a single-channel grayscale image visualization output representing the depth estimation label distribution, showing the depth estimation result.

The specific implementation process of each component module function in the system is as follows:

1. polar image extraction module

(1) Original light field image acquisition: the original light field image ImageLF of the target scene and the parameter information P_ImageLF of the original light field image are captured with the light field camera. ImageLF is expressed as a three-dimensional matrix comprising three dimensions: the size length, the size width and the number of color channels of the original light field image. The parameter information of the original light field image comprises the size length ImageH and size width ImageW of the original light field image, the lens array size length CapH and size width CapW of the original light field image, and the number of color channels of the original light field image, which defaults to 3;

(2) polar surface image extraction: calculating the size length EpiH _ h and the size width EpiW _ h of the polar surface image in the horizontal direction, the size length EpiH _ v and the size width EpiW _ v of the polar surface image in the vertical direction according to the size length ImageH and the size width ImageW of the original light field image, the size length CapH and the size width CapW of the lens array of the original light field image,

EpiH_h=ImageH/CapH,

EpiW_h=ImageW,

EpiH_v=ImageH,

EpiW_v=ImageW/CapW,

Further obtaining a horizontal direction polar surface image Epi _ h and a vertical direction polar surface image Epi _ v,

Epi_h=ImageLF[1:CapH:ImageH,:,:],

Epi_v=ImageLF[:,1:CapW:ImageW,:]。

2. color difference calculation module

(1) Constructing the corresponding function between depth labels and parallax values: according to the parallax value range of the original light field image ImageLF and the set number of depth labels, the parallax value corresponding to each depth label is calculated, distributing the depth labels uniformly over the parallax values; the parallax value corresponding to depth label θ is:

Dis_θ=Dis_min+(Dis_max-Dis_min)/NumD*θ

wherein Dis_min is the minimum parallax value, Dis_max is the maximum parallax value, and NumD is the number of depth labels; (Dis_max-Dis_min)/NumD gives the parallax difference between adjacent depth labels, and the depth labels are uniformly distributed over the parallax values, so that each depth label corresponds to a parallax value in the interval [Dis_min, Dis_max];
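The corresponding function above can be written down directly; a minimal sketch follows, in which the function name and the example disparity range [-2, 2] are illustrative rather than values from the patent.

```python
def label_to_disparity(theta, num_d, dis_min, dis_max):
    """Map a depth label theta in [0, NumD] uniformly onto the
    disparity interval [Dis_min, Dis_max], per the formula above:
    Dis = Dis_min + (Dis_max - Dis_min) / NumD * theta."""
    return dis_min + (dis_max - dis_min) / num_d * theta

# 64 labels spread over disparities in [-2, 2]
print(label_to_disparity(0, 64, -2.0, 2.0))   # -2.0
print(label_to_disparity(32, 64, -2.0, 2.0))  # 0.0
```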

(2) constructing the color difference convolution kernel on two sides of the straight line: for each depth label θ, according to the polar surface image extraction parameters P_Epi and the parameter information P_ImageLF of the original light field image, a straight line Lθ representing the depth label θ is drawn on the horizontal direction polar surface image Epi_h and the vertical direction polar surface image Epi_v respectively; using a Gaussian function, diffusing from the center of Lθ to both sides of Lθ, the color difference convolution kernel Filterθ on two sides of the straight line under the depth label θ is built, and its internal weight parameters are distributed as:

Wθ(i,j) = c·e^(−dθ(i,j)²/(2a²))

wherein Wθ(i, j) represents the weight of the pixel point (i, j) under the depth label θ, the pixel point (i, j) being at a horizontal distance of i pixels and a vertical distance of j pixels from the straight line Lθ; c is a constant term, e is the natural constant, and dθ(i, j) is the distance from the pixel point (i, j) to the straight line Lθ, calculated as (i² + j²)^(1/2); a is a constant term adjusting the sensitivity of Filterθ, 0.5 by default, wherein i is less than or equal to three times the width of the polar surface image and j is less than or equal to the width of the polar surface image. The straight line Lθ is determined by taking the parallax value corresponding to the depth label θ as the step length: starting from zero parallax at the first lens, the parallax increases by one step length for each lens moved to the right, and the resulting pixel points are connected to form the straight line Lθ. Using the color difference convolution kernel Filterθ on two sides of the straight line under the depth label θ, convolution operations are performed on the horizontal direction polar surface image Epi_h and the vertical direction polar surface image Epi_v respectively, that is:

Diff_hθ=Filterθ.*Epi_h,

Diff_vθ=Filterθ.*Epi_v,

During the convolution operation, the starting point is the pixel at the upper left corner of Epi_h or Epi_v; the step length of each convolution operation is 1 pixel, moving 1 pixel to the right in the horizontal direction until the right edge of Epi_h or Epi_v is reached. Filterθ is used to compute a dot product (element-wise multiplication and summation) with the pixel values on each color channel of Epi_h or Epi_v, yielding the color difference result Diff_hθ(i, j) on two sides of the horizontal direction straight line of each pixel point p(i, j) on Epi_h under the depth label θ, and the color difference result Diff_vθ(i, j) on two sides of the vertical direction straight line of each pixel point p(i, j) on Epi_v. As shown in FIG. 2, θ1, θ2 and θ3 represent three depth labels: θ1 is the correct depth label, θ2 and θ3 are incorrect depth labels; a is the constant term adjusting the sensitivity of the color difference convolution kernel Filterθ, and 3a appears in the figure as half the width of the kernel. The three parallelogram frames show the coverage areas of the kernels Filterθ: the kernel at label θ1 is the convolution kernel at the correct depth, while those at labels θ2 and θ3 represent kernels at incorrect depths. By convolving the pixels of the polar surface image with the color difference convolution kernels under the different depth labels, the color difference on the two sides of the kernel under each depth label is calculated.
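The kernel construction above can be sketched as follows. This is a sketch under stated assumptions: the Gaussian fall-off W = c·e^(−d²/(2a²)) is a reconstruction of the patent's unrendered weight formula; the signed layout (positive weights on one side of Lθ, negative on the other, so that the convolution response measures the color difference across the line) is an assumed instantiation of a "two-side difference" kernel; and the half-width 3a follows the description of FIG. 2. The function name is illustrative.

```python
import numpy as np

def line_diff_kernel(height, disparity, a=0.5, c=1.0, half_width=None):
    """Sketch of a two-side color-difference kernel Filter_theta.

    Builds a (height x width) kernel whose centre line L_theta has slope
    `disparity` (pixels of horizontal shift per row).  Weights fall off
    as a Gaussian of the distance d to L_theta,
        W = c * exp(-d**2 / (2 * a**2)),
    and carry opposite signs on the two sides of the line, so the
    convolution response measures the color difference across it."""
    if half_width is None:
        half_width = max(1, int(np.ceil(3 * a)))  # 3a = half kernel width
    width = 2 * half_width + 1
    kernel = np.zeros((height, width))
    centre_row = (height - 1) / 2.0
    for row in range(height):
        # Horizontal position of L_theta within this row of the kernel
        line_x = half_width + disparity * (row - centre_row)
        for col in range(width):
            d = col - line_x                       # signed distance to L_theta
            w = c * np.exp(-d**2 / (2 * a**2))     # Gaussian weight
            kernel[row, col] = w if d < 0 else -w  # opposite signs per side
    return kernel
```

Convolving such a kernel over Epi_h or Epi_v per color channel yields a large response where the colors on the two sides of Lθ differ, and a response near zero where they match, which is the behaviour the depth-label selection relies on.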

3. Color difference integrity optimization module

(1) Calculating the reliability of the color difference result at two sides of the straight line in the horizontal direction and the vertical direction:

For the color difference results Diff_hθ(i, j) on two sides of the horizontal direction straight line of each pixel point p(i, j) of the horizontal direction polar surface image Epi_h under different depth labels θ, extract the color difference result Diff_hθ(i, j) corresponding to the central view lens (CapH/2, CapW/2); calculate the mean value DiffAvg_h(i, j) and the maximum value DiffMax_h(i, j) of the color difference results on two sides of the horizontal direction straight line of each pixel point p(i, j) over the different depth labels θ, and then calculate the reliability of the color difference result on two sides of the horizontal direction straight line of the pixel point (i, j):

DiffCov_h(i,j)=DiffAvg_h(i,j)/DiffMax_h(i,j)

For the color difference results Diff_vθ(i, j) on two sides of the vertical direction straight line of each pixel point p(i, j) of the vertical direction polar surface image Epi_v under different depth labels θ, extract the color difference result Diff_vθ(i, j) corresponding to the central view lens (CapH/2, CapW/2); calculate the mean value DiffAvg_v(i, j) and the maximum value DiffMax_v(i, j) of the color difference results on two sides of the vertical direction straight line of each pixel point p(i, j) over the different depth labels θ, and then calculate the reliability of the color difference result on two sides of the vertical direction straight line of the pixel point (i, j):

DiffCov_v(i,j)=DiffAvg_v(i,j)/DiffMax_v(i,j)

(2) Overall optimization of linear two-side color difference results

For each pixel point p(i, j) under different depth labels θ, according to the reliability DiffCov_h(i, j) of the color difference result on two sides of the horizontal direction straight line and the reliability DiffCov_v(i, j) of the color difference result on two sides of the vertical direction straight line, the weight with high reliability is increased and the weight with low reliability is reduced; the color difference result Diff_hθ(i, j) on two sides of the horizontal direction straight line and the color difference result Diff_vθ(i, j) on two sides of the vertical direction straight line are weighted, summed and integrated, obtaining the optimized color difference result on two sides of the straight line:

DiffOptθ(i,j) = (DiffCov_h(i,j)·Diff_hθ(i,j) + DiffCov_v(i,j)·Diff_vθ(i,j)) / (DiffCov_h(i,j) + DiffCov_v(i,j))
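Steps (1) and (2) of this module can be sketched together. The normalized weighted sum below is an assumed reading of the patent's optimization formula, whose rendered form is unavailable, and the function names are illustrative.

```python
import numpy as np

def reliability(diff):
    """DiffCov(i, j) = DiffAvg(i, j) / DiffMax(i, j): mean over depth
    labels divided by max over depth labels, computed per pixel.
    diff: (NumD, H, W) color-difference volume."""
    return diff.mean(axis=0) / np.maximum(diff.max(axis=0), 1e-12)

def fuse(diff_h, diff_v):
    """Reliability-weighted fusion of the horizontal and vertical
    color-difference volumes (assumed normalized weighted sum).
    diff_h, diff_v: (NumD, H, W) volumes over all depth labels."""
    cov_h = reliability(diff_h)   # (H, W) per-pixel weights
    cov_v = reliability(diff_v)
    # Broadcasting applies each pixel's weight across all depth labels
    return (cov_h * diff_h + cov_v * diff_v) / (cov_h + cov_v + 1e-12)
```

The small epsilon terms guard against division by zero; when the horizontal and vertical volumes agree, the fused result simply reproduces them.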

4. depth estimation visual output module

(1) Maximum screening: for each pixel point p(i, j), according to the optimized color difference results DiffOptθ(i, j) on two sides of the straight line under different depth labels θ, the depth label corresponding to the maximum value of DiffOptθ(i, j) is taken as the depth estimation label Depth(i, j);

(2) Visual output of the depth distribution map: according to the depth estimation label Depth(i, j) and the corresponding function between depth labels and parallax values, the depth estimation labels are projected into black and white pixel points with different brightness values Color(i, j); the depth estimation label of each pixel point corresponds to a brightness value, expressed as a single-channel grayscale image visualization output of the depth estimation label distribution, wherein the brightness value:

Color(i,j)=Depth(i,j)*255/NumD
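The two output steps can be sketched as follows. The function name is illustrative, and the 255/NumD grey-level scaling (which maps label NumD−1 near white) is the assumed reading of the brightness formula.

```python
import numpy as np

def depth_visualization(diff_opt, num_d):
    """diff_opt: (NumD, H, W) optimized color-difference volume.
    Step (1): per pixel, pick the depth label maximizing DiffOpt.
    Step (2): map labels to grey levels of a single-channel image."""
    depth = diff_opt.argmax(axis=0)                    # Depth(i, j)
    colour = (depth * 255 // num_d).astype(np.uint8)   # Color(i, j)
    return depth, colour
```

With NumD labels, near labels map to dark pixels and far labels to bright ones (or the reverse, depending on the disparity sign convention), giving the grayscale depth map the patent describes.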

The invention also provides a light field camera depth estimation method based on polar image difference maximization, which specifically comprises the following steps:

1) obtaining the original light field image ImageLF and the parameter information P_ImageLF of the original light field image with the light field camera, setting the extraction parameters P_Epi of the polar surface image according to P_ImageLF, and extracting the horizontal direction polar surface image Epi_h and the vertical direction polar surface image Epi_v respectively according to P_Epi;

2) setting the number NumD of depth labels, constructing the corresponding function between depth labels and parallax values, and for each depth label θ building the color difference convolution kernel Filterθ on two sides of the straight line under the depth label θ; using the built Filterθ, convolution operations are performed on the horizontal direction polar surface image Epi_h and the vertical direction polar surface image Epi_v respectively, obtaining in turn the color difference result Diff_hθ(i, j) on two sides of the horizontal direction straight line and the color difference result Diff_vθ(i, j) on two sides of the vertical direction straight line of each pixel point under the depth label θ;

3) respectively calculating the horizontal direction reliability DiffCov_h of each pixel point on the horizontal direction polar surface image Epi_h and the vertical direction reliability DiffCov_v of each pixel point on the vertical direction polar surface image Epi_v, and performing weighted summation and integration optimization on Diff_hθ(i, j) and Diff_vθ(i, j) to obtain the optimized color difference result DiffOptθ(i, j) on two sides of the straight line of each pixel point under the depth label θ;

4) for each pixel point p(i, j), the depth label corresponding to the maximum value of DiffOptθ(i, j) over the different depth labels θ is taken as the depth estimation label Depth(i, j); according to the depth estimation label Depth(i, j) and the corresponding function between depth labels and parallax values, the depth estimation labels are projected into black and white pixel points with different brightness values Color(i, j), expressed as a single-channel grayscale image visualization output of the depth estimation label distribution.

In order to evaluate the actual depth estimation performance of the method by comparison, a bilateral consistency method is used as the baseline in actual experiments, and the experimental results are shown. The bilateral consistency algorithm is a depth estimation method based on stereo matching; its main idea is to match the same scene points across multiple sub-aperture images and estimate depth using the imaging consistency of the light field image. Taking two light field images as an example: fig. 3 (a) shows the central view images of the two light field images, fig. 3 (b) shows their correct depths, and fig. 3 (c) shows the depth estimation results of the bilateral consistency method. For edge regions with occlusion, the bilateral consistency algorithm produces a large number of pixel points with wrong depth estimates. Compared with the bilateral consistency method, the depth estimation results of the present method shown in fig. 3 (d) divide regions with similar colors more clearly, estimate the depth of occluded edge regions more accurately, and match the real depth distribution better. It can be seen that the algorithm used in the invention is more accurate, and in regions with similar colors its depth estimation results better match the real depth distribution.
