Method for large-depth-of-field three-dimensional display of light field camera four-dimensional data

Document No.: 1601634  Published: 2020-01-07  Views: 29  Original language: Chinese

Reading note: this technology, "A method for large-depth-of-field three-dimensional display of light field camera four-dimensional data", was designed and created by Ai Lingyu, Shi Xiao, and Zhou Biao on 2019-09-25. Abstract: The invention discloses a method for large-depth-of-field three-dimensional display of light field camera four-dimensional data, belonging to the field of three-dimensional imaging. The method first acquires four-dimensional data containing three-dimensional object information and aligns the sub-images with the pixel grid by magnification and rotation, obtaining a standard hexagonally arranged light field image; this is then converted into an elemental image array (EIA) to match a DPII display system, the light field image is segmented into scenes according to the depth of the three-dimensional objects, the objects lying in different planes are depth-adjusted by a depth adjustment algorithm, and the adjusted objects' light field images are fused into a single image that is displayed in three dimensions on the DPII system. The method obtains a naked-eye three-dimensional display effect with large parallax between front and rear objects and large depth of field, solving the problems of reduced depth of field and weak parallax that arise in the acquisition and display processes of the light field camera.

1. A method for large-depth-of-field three-dimensional display of light field camera four-dimensional data, characterized by comprising the following steps:

(1) decoding: obtaining a standard hexagonally arranged light field image L_s(u, v, x, y) and converting it into a square-arranged elemental image array EIA;

(2) depth segmentation and mapping: converting the elemental image array EIA into orthographic projection images, segmenting the objects at different depths in the image to obtain EIA_obj1, EIA_obj2, …, EIA_objn, and adjusting depth by depth mapping: adjusting the distance between the microlens-array recording plane and the main-lens plane changes the distance between an image in the intermediate image space and the microlens-array plane; with adjusted target distance D', original distance D, and ratio α = D'/D, the mapping satisfies the formula

L_D'(u, v, x, y) = L_D(u, v, u + (x - u)/α, v + (y - v)/α)

(3) acquiring large depth-of-field information: fusing the EIA set of objects at different depths obtained in step (2) into one elemental image EIA_opt using a depth-position weight algorithm, then displaying it in a DPII system to realize large-depth-of-field three-dimensional display; the depth-position weight algorithm comprises: sorting the EIA set obtained in step (2) by object depth and stacking the EIAs in order, starting from the object closest to the camera; for a single pixel, if the value already fused at the current position belongs to an object, the weight of the current EIA's pixel at that position is 0; if it is background, the weight is 1; finally all EIAs are fused at each position according to the formula EIAP_opt = w1·EIAP_opt1 + w2·EIAP_opt2 + … + wn·EIAP_optn, where EIAP is a pixel in an EIA and w is the pixel's weight; weighting all pixels yields the final large-depth-of-field elemental image EIA_opt.

2. The method of claim 1, wherein the step (2) of segmenting objects at different depths in the image comprises the steps of:

(1) mapping the elemental image array EIA into orthographic projection images O_t using an orthographic projection transformation algorithm;

(2) performing image segmentation on each sub-image of the orthographic projection image O_t to obtain an orthographic projection set O_obj1, O_obj2, …, O_objn, where n corresponds to the different objects; each sub-image obtained by segmentation contains three-dimensional objects at the same depth and has the same size as the original sub-image;

(3) converting the segmented orthographic projection set O_obj1, O_obj2, …, O_objn into the EIA set EIA_obj1, EIA_obj2, …, EIA_objn using the orthographic projection conversion algorithm;

(4) remapping the EIAs obtained in step (3) using a depth mapping algorithm: for each EIA in the set EIA_obj1, EIA_obj2, …, EIA_objn, the distance between the image in the intermediate image space and the microlens-array plane is changed by adjusting, through depth mapping, the distance between the microlens-array recording plane and the main-lens plane; with adjusted target distance D', original distance D, and ratio α = D'/D, the mapping from L_D' to L_D satisfies:

L_D'(u, v, x, y) = L_D(u, v, u + (x - u)/α, v + (y - v)/α)

3. The method of claim 2, wherein α is calculated from the distance of each object to the light field camera and the parameters of the light field camera itself as α = F·D_z / ((D_z - F)·D), where F is the focal length of the main lens and D_z is the distance from the object in real space to the light field camera.

4. The method according to any of claims 1-3, wherein the light field image L_s(u, v, x, y) is acquired by creating a white-image dataset from the white images stored by the light field camera itself, then matching the captured light field image L_o against the white image in the dataset with the same zoom information and focus information, and decoding with the aid of that white image to obtain the standard hexagonally arranged light field image L_s(u, v, x, y).

5. The method according to claim 1, characterized in that obtaining the standard hexagonally arranged light field image L_s from the light field image L(u, v, x, y) specifically comprises the steps of:

(1) removing all black pixels in the white image that recorded no information, and applying min-max normalization to the retained data;

(2) traversing the normalized image to find the brightest pixels, i.e. those with value 1, and marking them as the center points of the light field sub-images;

(3) from the center-point model obtained in step (2), calculating the numbers of sub-images in the horizontal and vertical directions, X_max and Y_max, and the side lengths x_space and y_space of each sub-image's circumscribed rectangle; performing first-order linear fitting on the rows and columns of the center-point array to obtain the fitted slopes k_x1, k_x2, …, k_xn and k_y1, k_y2, …, k_yn;

(4) from x_space and y_space in step (3), calculating the magnification factor required to enlarge the sub-images to an integer number of pixels, and obtaining the overall rotation slope from the horizontal and vertical first-order fitted slopes calculated in step (3);

(5) magnifying and rotating the light field image L by the magnification factor and rotation slope obtained in step (4) so that the sub-image centers align with the pixel centers, obtaining the standard hexagonally arranged light field image L_s.

6. The method of claim 1, wherein the standard hexagonally arranged light field image L_s is converted into the square-arranged elemental image array EIA by the following steps:

(1) in the standard light field image L_s, determining the region of each sub-image on the light field image from its center point and radius, and rearranging the light field image L_s with the sub-images as units on an orthogonal grid;

(2) resampling the image obtained in step (1) and interpolating in the horizontal direction;

(3) taking the central maximal inscribed square region from each circular sub-image in the orthogonal-grid arrangement to obtain the square-arranged elemental image array EIA.

7. The method according to claim 1, wherein step (3) specifically comprises the steps of:

(1) creating a blank image EIA_opt of the same size as the EIAs in the EIA set;

(2) for a given position in the image, the pixel value is:

EIAP_opt = w1·EIAP_opt1 + w2·EIAP_opt2 + … + wn·EIAP_optn

where EIAP is the pixel value at that position, w is the weight of the corresponding term, and EIA_opt1, EIA_opt2, …, EIA_optn are the EIAs of the set; the weight of an EIA's pixel at a position follows this principle: if the value already fused at the position belongs to an object, the weight of the current EIA's pixel there is 0; if it is background, the weight is 1;

(3) traversing all pixels of EIA_opt to compute the final EIA_opt.

8. Use of the method of any one of claims 1 to 7 in the field of light field imaging.

9. A large-depth-of-field three-dimensional display system, the system comprising: a display device comprising a liquid crystal display combined with a square microlens array; and a data processing device that obtains a large-depth-of-field information collection by the decoding, depth segmentation and mapping of the method of claim 1, and finally obtains a three-dimensional image by depth-weight fusion.

10. A camera device comprising the display system of claim 9.

Technical Field

The invention relates to a method for large-depth-of-field three-dimensional display of light field camera four-dimensional data, belonging to the field of three-dimensional imaging.

Background

In recent years, three-dimensional imaging technology has been a research hotspot of optical imaging, and light field imaging is regarded as an effective three-dimensional imaging solution because it can record the information of a three-dimensional scene in a single exposure. Compared to a conventional camera, a light field camera based on light field imaging inserts a microlens array between the main lens and the sensor; the light field L(u, v, x, y) is parameterized by the main-lens plane and the microlens plane, where (u, v) represents the direction of a ray and (x, y) its position. The original light field image L(u, v, x, y) recorded by the camera enables digital refocusing in front of and behind the recording plane through the direction-position relation of the rays. Light field imaging has been applied to depth extraction, three-dimensional mapping, motion detection, and more.

Although the light field camera records three-dimensional information, the main display means for light field four-dimensional data is two-dimensional refocusing, which is equivalent to a common camera collecting data multiple times along the same depth direction and does not directly display three-dimensional information. Moreover, the two-dimensional refocusing result suffers from reduced relative depth and parallax and cannot produce a naked-eye three-dimensional display effect, which limits the use of the light field camera.

Disclosure of Invention

In order to solve these problems and eliminate the influence of reduced relative depth and parallax, the invention constructs a method for large-depth-of-field three-dimensional display of light field camera four-dimensional data. The method performs large-depth-of-field three-dimensional display on the four-dimensional data acquired by a light field camera: the light field information of an object is acquired with a handheld light field camera, converted into an EIA (Elemental Image Array) matched to a DPII (Depth-Priority Integral Imaging) display system, and displayed in three dimensions with a square microlens array.

Due to its imaging characteristics, a light field camera records the three-dimensional information of a scene with its depth compressed by a factor that increases with object distance; the information recorded by the light field camera therefore also needs depth adjustment before it can be displayed in three dimensions.

The invention provides a method for large-depth-of-field three-dimensional display using light field camera four-dimensional data, comprising the following steps:

S1, establishing a white-image dataset from the white images stored by the light field camera;

S2, collecting a light field image L_o, matching it in the white-image dataset created in S1 against the white image with the same zoom information (zoomstep) and focus information (focusstep) parameters as the collected light field image, and decoding to obtain a standard hexagonally arranged light field image L_s;

S3, converting the light field image L_s into an orthogonal-grid arrangement to obtain a square-arranged elemental image array EIA_o;

S4, segmenting objects at different depths in the square-arranged EIA_o to obtain an EIA set EIA_obj1, EIA_obj2, …, EIA_objn recording the different objects, then remapping these EIAs with a depth mapping algorithm to obtain a light field image set EIA_opt1, EIA_opt2, …, EIA_optn matched to the object depths. The depth mapping algorithm comprises: in a light field camera the distance between the microlens plane and the main-lens plane is D, and the recorded light field image is the light field information L_D at distance D from the main-lens plane in the intermediate image space; if the rays continue to travel backward to D', letting α = D'/D, the ray L(u, v, x, y) at depth D' can be represented by the recorded rays:

L_D'(u, v, x, y) = L_D(u, v, u + (x - u)/α, v + (y - v)/α)

Changing the position of the microlens plane changes the distance between an image point in the light field camera and the microlens plane; recording objects at different depths in this way yields a light field image set matched to the object depths. The value of α changes with object depth and can be solved by the following formula:

α = F·D_z / ((D_z - F)·D)

where F is the focal length of the main lens and D_z, the distance from the object to the main lens of the light field camera, is a known value. The α value corresponding to each object's depth is calculated by this formula and used for depth mapping.
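The depth mapping above can be sketched as follows (a minimal illustration assuming the thin-lens imaging model; the function names `alpha_for_depth` and `remap_ray`, and the exact form of the α formula, are our reconstruction rather than taken verbatim from the source):

```python
def alpha_for_depth(F, Dz, D):
    """Depth-mapping ratio alpha = D'/D for an object at real-space distance Dz.

    Assuming the thin-lens equation 1/F = 1/Dz + 1/d_img, the object images at
    d_img = F*Dz/(Dz - F) behind the main lens, so mapping the recorded light
    field to that image plane gives alpha = d_img / D.
    """
    d_img = F * Dz / (Dz - F)
    return d_img / D


def remap_ray(u, v, x, y, alpha):
    """Re-parameterize a recorded ray to the plane at D' = alpha*D.

    A ray through (u, v) on the main-lens plane and (x, y) on the plane at
    distance D' crosses the recording plane at u + (x - u)/alpha, consistent
    with the standard light-field refocusing relation used in the text.
    """
    return u, v, u + (x - u) / alpha, v + (y - v) / alpha
```

For instance, an object whose intermediate image already lies on the recording plane gives α = 1, and `remap_ray` then reduces to the identity mapping.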

S5, fusing the light field image set EIA_opt1, EIA_opt2, …, EIA_optn obtained in S4 into one elemental image EIA_opt using a depth-position weight algorithm and displaying it in a DPII system. The depth-position weight algorithm: sort the EIA set obtained in S4 by object depth and stack the EIAs in order, starting from the object closest to the camera; for a single pixel, if the value already fused at the current position belongs to an object, the weight of the current EIA's pixel at that position is 0; if it is background, the weight is 1; finally all EIAs are fused at each position according to the following formula:

EIAP_opt = w1·EIAP_opt1 + w2·EIAP_opt2 + … + wn·EIAP_optn

where EIAP is a pixel in an EIA and w is the weight of that pixel. Traversing all pixels in this way yields EIA_opt.
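A minimal sketch of this depth-position weight fusion (illustrative only: the object masks would in practice come from the segmentation step, and the function name `fuse_eias` is an assumption, not from the source):

```python
import numpy as np


def fuse_eias(eias, masks, background=0):
    """Fuse per-object EIAs, nearest object first.

    eias  : list of H x W arrays, sorted nearest-to-farthest from the camera.
    masks : matching boolean arrays, True where the EIA contains an object.
    A pixel already claimed by a nearer object keeps weight 0 for all later
    EIAs; where the fused value is still background, the later EIA's object
    pixel is taken with weight 1.
    """
    fused = np.full(eias[0].shape, background, dtype=eias[0].dtype)
    claimed = np.zeros(eias[0].shape, dtype=bool)
    for eia, mask in zip(eias, masks):
        take = mask & ~claimed       # weight 1 only where still background
        fused[take] = eia[take]
        claimed |= take
    return fused
```

With two overlapping objects, the nearer EIA wins the overlap region, matching the stacking order described above.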

In an embodiment of the present invention, the method for large-depth-of-field three-dimensional display of light field camera four-dimensional data specifically includes the following steps:

S1: establishing a white-image dataset from the white images stored by the light field camera;

S2: matching the collected light field image L_o in the white-image dataset created in S1 with the white image having the same zoomstep (zoom information) and focusstep (focus information), and decoding to obtain a standard hexagonally arranged light field image L_s;

S3: converting the hexagonally arranged light field image obtained in S2 into an orthogonal-grid arrangement to match the DPII system;

S4: segmenting objects at different depths in the image, solving α from each object's actual depth and remapping, to obtain a large-depth-of-field EIA set EIA_opt1, EIA_opt2, …, EIA_optn, where 1, …, n corresponds to the different objects;

S5: fusing the EIA sets of objects at different depths obtained in S4 into one EIA_opt using the depth-position weight algorithm, and finally displaying it in the DPII system.

In one embodiment of the present invention, the depth mapping algorithm is as follows: in a light field camera, the distance between the microlens plane and the main-lens plane is D, as shown in fig. 2, and the recorded light field image is the light field information at distance D from the main-lens plane in the intermediate image space. If the rays continue to travel backward to D', letting α = D'/D, the ray L(u, v, x, y) at depth D' can be represented by the recorded rays:

L_D'(u, v, x, y) = L_D(u, v, u + (x - u)/α, v + (y - v)/α)

Changing the position of the microlens plane changes the distance between an image point in the light field camera and the microlens plane; recording objects at different depths in this way yields a light field image set matched to the object depths. The value of α changes with object depth and can be solved by the following formula:

α = F·D_z / ((D_z - F)·D)

where F is the focal length of the main lens and D_z is the distance from the object to the light field camera. The α value corresponding to each object's depth is calculated by this formula and used for depth mapping.

In an embodiment of the present invention, the white-image dataset in S1 is built from the white images stored by the light field camera itself.

In one embodiment of the present invention, the standard hexagonally arranged light field image L_s in step S2 is obtained by the following steps:

S2-1, removing all pixels in the white image below the black threshold, which recorded no light information, and applying min-max normalization to the retained data;

S2-2, traversing the normalized image to find the brightest pixels, with value 1, and marking them as the center points of the light field sub-images;

S2-3, from the center-point model obtained in S2-2, calculating the numbers of sub-images in the horizontal and vertical directions, X_max and Y_max, and the side lengths x_space and y_space of each sub-image's circumscribed rectangle; performing first-order linear fitting on the rows and columns of the center-point array to obtain the fitted slopes k_x1, k_x2, …, k_xn and k_y1, k_y2, …, k_yn;

S2-4, from x_space and y_space in S2-3, calculating the magnification factor required to enlarge the sub-images to an integer number of pixels, and obtaining the overall rotation slope from the horizontal and vertical first-order fitted slopes calculated in S2-3;

S2-5, magnifying and rotating the initial light field image L_o by the magnification factor and rotation slope obtained in S2-4 so that the sub-image centers align with the pixel centers, obtaining the standard hexagonally arranged light field image L_s.
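Steps S2-1 and S2-2 can be sketched as follows (an illustrative fragment; the black-threshold value and the function names are assumptions, not from the source):

```python
import numpy as np


def normalize_white_image(img, black_threshold=0.05):
    """S2-1: drop pixels below the black threshold, min-max normalize the rest."""
    img = img.astype(float)
    keep = img > black_threshold          # pixels that actually saw light
    vals = img[keep]
    out = np.zeros_like(img)
    out[keep] = (img[keep] - vals.min()) / (vals.max() - vals.min())
    return out


def subimage_centers(norm_img):
    """S2-2: the brightest pixels (value 1) mark the sub-image center points."""
    ys, xs = np.nonzero(norm_img >= 1.0)
    return list(zip(ys.tolist(), xs.tolist()))
```

On a real white image the detected centers would then feed the sub-image counting and line-fitting of S2-3.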

In one embodiment of the present invention, the specific steps of converting the hexagonally arranged light field image into an orthogonal-grid arrangement in step S3 include:

S3-1, in the standard light field image L_s, determining the region of each sub-image on the light field image from its center point and radius, and rearranging the light field image L_s with the sub-images as units on an orthogonal grid;

S3-2, resampling the image obtained in S3-1 and interpolating in the horizontal direction;

S3-3, taking the central maximal inscribed square region from each circular sub-image in the orthogonal-grid arrangement to obtain the square-arranged EIA_o.
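Step S3-3 can be illustrated as follows (a sketch assuming each circular sub-image is inscribed in its square crop, so the circle's radius is half the crop side; the inscribed square of a circle of radius r has side r·sqrt(2)):

```python
import math

import numpy as np


def inscribed_square(subimage):
    """Crop the largest centered inscribed square from a circular sub-image.

    The sub-image is assumed square with the circle inscribed in it
    (diameter = side length), so the circle's inscribed square has side
    r*sqrt(2), centered on the sub-image center.
    """
    n = subimage.shape[0]
    r = n / 2.0
    side = int(math.floor(r * math.sqrt(2)))
    lo = (n - side) // 2
    return subimage[lo:lo + side, lo:lo + side]
```

Applying this to every circular sub-image on the orthogonal grid yields the square elemental images of EIA_o.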

In an embodiment of the present invention, step S4 specifically includes:

S4-1, remapping EIA_o to orthographic projection images O_t using the orthographic projection transformation algorithm;

S4-2, performing image segmentation on each sub-image of the orthographic projection image O_t according to the objects it contains, obtaining an orthographic projection set O_obj1, O_obj2, …, O_objn, where n corresponds to the different objects; each sub-image obtained by segmentation contains three-dimensional objects at the same depth and has the same size as the original sub-image;

S4-3, converting the segmented orthographic projection set O_obj1, O_obj2, …, O_objn into the EIA set EIA_obj1, EIA_obj2, …, EIA_objn using the orthographic projection conversion algorithm;

S4-4, remapping the EIAs obtained in S4-3 using the depth mapping algorithm: the depth of the three-dimensional scene collected by the light field camera is compressed in the intermediate image space after passing through the main lens, and the compression factor increases with the actual object depth; mapping each converted EIA at its own depth with the depth mapping algorithm yields the light field image set EIA_opt1, EIA_opt2, …, EIA_optn matched to the object depths.

In an embodiment of the invention, the orthogonal projection conversion algorithm takes all pixels at the same position within each EI and places them in one orthographic sub-image region, yielding an orthographic projection image. For example, in an EIA with M × M EIs of N × N pixels each, the pixels at coordinate (1,1) of all the EIs are taken out to form a new orthographic sub-image of M × M pixels, which is placed at coordinate (1,1) of the orthographic array; performing the same operation for every pixel position in the EIs gives an orthographic projection image with N × N sub-images of M × M pixels each.
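This pixel rearrangement can be sketched with array reshapes (an illustrative fragment; zero-based indexing is used instead of the (1,1) origin in the text, and the function name is an assumption):

```python
import numpy as np


def eia_to_orthographic(eia, M, N):
    """Rearrange an EIA of M x M elemental images (each N x N pixels) into an
    orthographic-projection image of N x N sub-images (each M x M pixels).

    The pixel at position (i, j) inside every elemental image is gathered
    into the (i, j)-th orthographic sub-image.
    """
    # (M*N, M*N) -> (M, N, M, N): axes are (EI row, pixel row, EI col, pixel col)
    blocks = eia.reshape(M, N, M, N)
    # swap so the within-EI pixel indices (i, j) select the sub-image
    ortho = blocks.transpose(1, 0, 3, 2)      # -> (N, M, N, M)
    return ortho.reshape(M * N, M * N)
```

Running the same gather in the opposite direction (swapping the roles of M and N) converts an orthographic projection image back into an EIA, which is how the segmented set O_obj1, …, O_objn returns to EIA form in S4-3.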

In one embodiment of the present invention, in step S4-4, in order to correct the non-uniform depth scaling of the three-dimensional scene, the distance from the image at a given depth in the intermediate image space to the microlens plane is changed by mapping to light fields at different distances; the mapping distance is D', and the ratio α between it and the original main-lens-to-microlens distance D can be solved from the distance of the object to be adjusted to the camera and the camera's own parameters. All acquired objects are adjusted with this method.

The application of the display method in the field of three-dimensional imaging is as follows: the traditional method of integral imaging acquisition with a microlens array can only obtain three-dimensional information on one side of the microlenses, whereas a light field camera can record information both in front of and behind the camera's focal plane.

It is a second object of the present invention to provide a large-depth-of-field three-dimensional display system, the system comprising: a display device comprising a liquid crystal display combined with a square microlens array; and a data processing apparatus that acquires a light field image, obtains a large-depth-of-field information collection through the decoding, depth segmentation and mapping described above, and finally obtains a three-dimensional image by depth-weight fusion.

A third object of the present invention is to provide a camera apparatus mounting the above display system.

The invention has the beneficial effects that:

the invention can carry out three-dimensional display on the four-dimensional light field data acquired by the light field camera, and effectively solves the problems of uneven depth compression and small display parallax caused by uneven zooming of the main lens on the depth when the light field camera records three-dimensional object information. By the depth adjustment, a large display parallax is provided.

Drawings

FIG. 1 is a flow chart of light field camera data acquisition and three-dimensional display;

FIG. 2 is a schematic view of a light field camera depth adjustment;

fig. 3(a) is an originally acquired light field image and a partial enlarged view thereof, fig. 3(b) is an EIA converted from the light field image without depth adjustment, and fig. 3(c) is an EIA subjected to depth adjustment;

fig. 4 is an experimental diagram of depth adjustment performed by the orthogonal projection algorithm, where fig. 4(a) and (d) are orthogonal projection images, fig. 4(b) and (e) are orthogonal projection sub-images, the left sides of fig. 4(c) and (f) are orthogonal projection images containing a single object, the middle part is an EIA converted from the left-side image, and the right side is an EIA containing a single object obtained after depth adjustment of the middle EIA;

fig. 5 is a comparison diagram of multi-view three-dimensional display effect, wherein fig. 5(a) and (d) are diagrams of the effect of displaying the original light field image under the microlens array. Fig. 5(b) and (e) are graphs showing the effect of the EIA without depth adjustment under the microlens array. Fig. 5(c) and (f) are graphs showing the effect of the EIA after the depth adjustment on the microlens array.

Detailed Description

The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings of the embodiments of the present invention. It should be understood that the following examples are illustrative of the present invention and are not intended to limit the scope of the present invention.

In the embodiment of the invention, a first-generation Lytro light field camera is used for data acquisition, and the final result is displayed using a tablet computer and a microlens array; the parameters of the acquisition system are shown in table 1:

TABLE 1 parameters of the acquisition System

[Table 1 is provided as an image in the source.]

After the light field image is collected according to the position parameters shown in table 1, the light field image is converted into EIA to be displayed in a display system, and the hardware parameter specification of the display system is shown in table 2:

Table 2 shows the hardware parameter specification of the display system. [Table 2 is provided as an image in the source.]
